pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1-900k) | metadata (stringlengths 2-438k) | id (stringlengths 5-122) | last_modified (null) | tags (listlengths 1-1.84k) | sha (null) | created_at (stringlengths 25-25) | arxiv (listlengths 0-201) | languages (listlengths 0-1.83k) | tags_str (stringlengths 17-9.34k) | text_str (stringlengths 0-389k) | text_lists (listlengths 0-722) | processed_texts (listlengths 1-723)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text-classification
|
transformers
|
learning rate: 3e-5
training epochs: 5
batch size: 8
seed: 42
model: bert-base-uncased
The model is pretrained on MNLI (we use kangnichaluo/mnli-2 directly) and then fine-tuned on CB, which is converted into two-way NLI classification (predicting the entailment or not-entailment class).
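As an illustrative sketch (not part of the original training setup), the checkpoint can be loaded as a sequence-pair classifier with the `transformers` library; the mapping of class indices to entailment / not-entailment is an assumption and should be checked against the model config:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("kangnichaluo/mnli-cb")
model = AutoModelForSequenceClassification.from_pretrained("kangnichaluo/mnli-cb")

premise = "The doctor told the patient to rest for a week."
hypothesis = "The patient was advised to rest."

# Encode the premise/hypothesis pair as a single sequence pair.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Which index corresponds to entailment vs. not-entailment is an assumption;
# check model.config.id2label for the actual label names.
print(torch.softmax(logits, dim=-1))
```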
|
{}
|
kangnichaluo/mnli-cb
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
learning rate: 3e-5
training epochs: 5
batch size: 8
seed: 42
model: bert-base-uncased
The model is pretrained on MNLI (we use kangnichaluo/mnli-2 directly) and then fine-tuned on CB, which is converted into two-way NLI classification (predicting the entailment or not-entailment class).
|
[] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null |
transformers
|
## GlossBERT
A BERT-based model fine-tuned on SemCor 3.0 to perform word sense disambiguation by leveraging gloss information. This model is the research output of the paper '[GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge](https://arxiv.org/pdf/1908.07245.pdf)'.
Disclaimer: This model was built and trained by a group of researchers different from the repository's author. The original model code can be found on GitHub: https://github.com/HSLCY/GlossBERT
## Usage
The following code loads GlossBERT:
```py
from transformers import AutoTokenizer, BertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained('kanishka/GlossBERT')
model = BertForSequenceClassification.from_pretrained('kanishka/GlossBERT')
```
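Continuing from the snippet above, a context-gloss pair can be scored as a sentence pair. This is only a minimal sketch: the exact input formatting (e.g. marking the target word) follows the original GlossBERT repository and is not reproduced here.
```py
import torch

# Minimal sketch: score one context-gloss pair; the simplified pairing below
# (no target-word markers) is an assumption, not the paper's exact format.
context = "He sat on the bank of the river."
gloss = "bank: sloping land beside a body of water."

inputs = tokenizer(context, gloss, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(torch.softmax(logits, dim=-1))  # score for whether the gloss matches the sense
```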
## Citation
If you use this model in any of your projects, please cite the original authors using the following bibtex:
```
@inproceedings{huang-etal-2019-glossbert,
title = "{G}loss{BERT}: {BERT} for Word Sense Disambiguation with Gloss Knowledge",
author = "Huang, Luyao and
Sun, Chi and
Qiu, Xipeng and
Huang, Xuanjing",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-1355",
doi = "10.18653/v1/D19-1355",
pages = "3507--3512"
}
```
|
{"language": "en", "license": "mit", "tags": ["glossbert"], "datasets": ["SemCor3.0"]}
|
kanishka/GlossBERT
| null |
[
"transformers",
"pytorch",
"bert",
"glossbert",
"en",
"dataset:SemCor3.0",
"arxiv:1908.07245",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1908.07245"
] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #glossbert #en #dataset-SemCor3.0 #arxiv-1908.07245 #license-mit #endpoints_compatible #has_space #region-us
|
## GlossBERT
A BERT-based model fine-tuned on SemCor 3.0 to perform word sense disambiguation by leveraging gloss information. This model is the research output of the paper 'GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge'.
Disclaimer: This model was built and trained by a group of researchers different from the repository's author. The original model code can be found on GitHub: URL
## Usage
The following code loads GlossBERT:
If you use this model in any of your projects, please cite the original authors using the following bibtex:
|
[
"## GlossBERT\n\nA BERT-based model fine-tuned on SemCor 3.0 to perform word-sense-disambiguation by leveraging gloss information. This model is the research output of the paper titled: 'GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge'\n\nDisclaimer: This model was built and trained by a group of researchers different than the repository's author. The original model code can be found on github: URL",
"## Usage\n\nThe following code loads GlossBERT:\n\n\n\nIf you use this model in any of your projects, please cite the original authors using the following bibtex:"
] |
[
"TAGS\n#transformers #pytorch #bert #glossbert #en #dataset-SemCor3.0 #arxiv-1908.07245 #license-mit #endpoints_compatible #has_space #region-us \n",
"## GlossBERT\n\nA BERT-based model fine-tuned on SemCor 3.0 to perform word-sense-disambiguation by leveraging gloss information. This model is the research output of the paper titled: 'GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge'\n\nDisclaimer: This model was built and trained by a group of researchers different than the repository's author. The original model code can be found on github: URL",
"## Usage\n\nThe following code loads GlossBERT:\n\n\n\nIf you use this model in any of your projects, please cite the original authors using the following bibtex:"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-CoLA-finetuned-cola
This model is a fine-tuned version of [textattack/bert-base-uncased-CoLA](https://huggingface.co/textattack/bert-base-uncased-CoLA) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8318
- Matthews Correlation: 0.5755
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.2949 | 1.0 | 535 | 0.5742 | 0.5219 |
| 0.1852 | 2.0 | 1070 | 0.7226 | 0.5573 |
| 0.1196 | 3.0 | 1605 | 0.8318 | 0.5755 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
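For quick inference (a minimal sketch, not something stated in this card), the checkpoint can be used with the `text-classification` pipeline; the label names returned depend on this checkpoint's config:
```python
from transformers import pipeline

# Minimal inference sketch for the CoLA acceptability classifier.
classifier = pipeline(
    "text-classification",
    model="kapilchauhan/bert-base-uncased-CoLA-finetuned-cola",
)

# CoLA conventionally uses 0 = unacceptable, 1 = acceptable, but the exact
# label names returned here come from the model config (an assumption).
print(classifier("The book was written by the author."))
print(classifier("The book was wrote by the author."))
```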
|
{"tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "bert-base-uncased-CoLA-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5755298089385917, "name": "Matthews Correlation"}]}]}]}
|
kapilchauhan/bert-base-uncased-CoLA-finetuned-cola
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #model-index #autotrain_compatible #endpoints_compatible #region-us
|
bert-base-uncased-CoLA-finetuned-cola
=====================================
This model is a fine-tuned version of textattack/bert-base-uncased-CoLA on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8318
* Matthews Correlation: 0.5755
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-CoLA-finetuned-cola
This model is a fine-tuned version of [textattack/distilbert-base-uncased-CoLA](https://huggingface.co/textattack/distilbert-base-uncased-CoLA) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6996
- Matthews Correlation: 0.5689
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 134 | 0.6061 | 0.5074 |
| No log | 2.0 | 268 | 0.5808 | 0.5652 |
| No log | 3.0 | 402 | 0.6996 | 0.5689 |
| 0.0952 | 4.0 | 536 | 0.8249 | 0.5385 |
| 0.0952 | 5.0 | 670 | 0.8714 | 0.5567 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
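The hyperparameters listed above correspond roughly to the following `TrainingArguments`; this is a sketch under that assumption, not the actual training script (dataset loading and tokenization are omitted):
```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the hyperparameters in this card.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-CoLA-finetuned-cola",
    learning_rate=3e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```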
|
{"tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-CoLA-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5689051637185746, "name": "Matthews Correlation"}]}]}]}
|
kapilchauhan/distilbert-base-uncased-CoLA-finetuned-cola
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-CoLA-finetuned-cola
===========================================
This model is a fine-tuned version of textattack/distilbert-base-uncased-CoLA on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6996
* Matthews Correlation: 0.5689
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7696
- Matthews Correlation: 0.5136
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5284 | 1.0 | 535 | 0.4948 | 0.4093 |
| 0.3529 | 2.0 | 1070 | 0.5135 | 0.4942 |
| 0.2417 | 3.0 | 1605 | 0.6303 | 0.5083 |
| 0.1818 | 4.0 | 2140 | 0.7696 | 0.5136 |
| 0.1302 | 5.0 | 2675 | 0.8774 | 0.5123 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
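The Matthews correlation reported above can be computed from predictions and reference labels, for example with scikit-learn (a toy illustration, not the card's evaluation code):
```python
from sklearn.metrics import matthews_corrcoef

# Toy example of the evaluation metric: Matthews correlation coefficient
# between reference labels and predictions (0 = unacceptable, 1 = acceptable).
references = [1, 1, 0, 1, 0, 0, 1, 0]
predictions = [1, 0, 0, 1, 0, 1, 1, 0]
print(matthews_corrcoef(references, predictions))  # value in [-1, 1]
```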
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5135743708561838, "name": "Matthews Correlation"}]}]}]}
|
kapilchauhan/distilbert-base-uncased-finetuned-cola
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-cola
======================================
This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7696
* Matthews Correlation: 0.5136
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7805
- Wer: 0.4340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 8000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.36 | 400 | 1.9130 | 0.9244 |
| 5.0013 | 2.71 | 800 | 0.7789 | 0.5944 |
| 0.6544 | 4.07 | 1200 | 0.7298 | 0.5852 |
| 0.4021 | 5.42 | 1600 | 0.6978 | 0.5667 |
| 0.3003 | 6.78 | 2000 | 0.6764 | 0.5382 |
| 0.3003 | 8.14 | 2400 | 0.7249 | 0.5463 |
| 0.2345 | 9.49 | 2800 | 0.7280 | 0.5124 |
| 0.1993 | 10.85 | 3200 | 0.7289 | 0.4690 |
| 0.1617 | 12.2 | 3600 | 0.7431 | 0.4733 |
| 0.1432 | 13.56 | 4000 | 0.7448 | 0.4733 |
| 0.1432 | 14.92 | 4400 | 0.7746 | 0.4485 |
| 0.1172 | 16.27 | 4800 | 0.7589 | 0.4742 |
| 0.1035 | 17.63 | 5200 | 0.7539 | 0.4353 |
| 0.0956 | 18.98 | 5600 | 0.7648 | 0.4495 |
| 0.0845 | 20.34 | 6000 | 0.7877 | 0.4719 |
| 0.0845 | 21.69 | 6400 | 0.7884 | 0.4434 |
| 0.0761 | 23.05 | 6800 | 0.7796 | 0.4386 |
| 0.0634 | 24.41 | 7200 | 0.7729 | 0.4306 |
| 0.0571 | 25.76 | 7600 | 0.7826 | 0.4298 |
| 0.0508 | 27.12 | 8000 | 0.7805 | 0.4340 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
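For inference, the checkpoint can be used with the automatic-speech-recognition pipeline. This is a minimal sketch: `audio.wav` is a hypothetical placeholder for a Hindi speech recording, and XLS-R models expect 16 kHz mono audio:
```python
from transformers import pipeline

# Minimal inference sketch; "audio.wav" is a placeholder input file.
asr = pipeline(
    "automatic-speech-recognition",
    model="kapilkd13/xls-r-300m-hi-prod",
)
print(asr("audio.wav")["text"])
```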
|
{"language": ["hi"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7.0", "type": "mozilla-foundation/common_voice_7_0", "args": "hi"}, "metrics": [{"type": "wer", "value": 39.21, "name": "Test WER"}]}]}]}
|
kapilkd13/xls-r-300m-hi-prod
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"hi",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"hi"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #hi #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - HI dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7805
* Wer: 0.4340
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* training\_steps: 8000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.1+cu102
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 8000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #hi #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 8000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7346
- Wer: 1.0479
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 8000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.36 | 400 | 1.4595 | 1.0039 |
| 4.7778 | 2.71 | 800 | 0.8082 | 1.0115 |
| 0.6408 | 4.07 | 1200 | 0.7032 | 1.0079 |
| 0.3937 | 5.42 | 1600 | 0.6889 | 1.0433 |
| 0.3 | 6.78 | 2000 | 0.6820 | 1.0069 |
| 0.3 | 8.14 | 2400 | 0.6670 | 1.0196 |
| 0.226 | 9.49 | 2800 | 0.7216 | 1.0422 |
| 0.197 | 10.85 | 3200 | 0.7669 | 1.0534 |
| 0.165 | 12.2 | 3600 | 0.7517 | 1.0200 |
| 0.1486 | 13.56 | 4000 | 0.7125 | 1.0357 |
| 0.1486 | 14.92 | 4400 | 0.7447 | 1.0347 |
| 0.122 | 16.27 | 4800 | 0.6899 | 1.0440 |
| 0.1069 | 17.63 | 5200 | 0.7212 | 1.0350 |
| 0.0961 | 18.98 | 5600 | 0.7417 | 1.0408 |
| 0.086 | 20.34 | 6000 | 0.7402 | 1.0356 |
| 0.086 | 21.69 | 6400 | 0.7761 | 1.0420 |
| 0.0756 | 23.05 | 6800 | 0.7346 | 1.0369 |
| 0.0666 | 24.41 | 7200 | 0.7506 | 1.0449 |
| 0.0595 | 25.76 | 7600 | 0.7319 | 1.0476 |
| 0.054 | 27.12 | 8000 | 0.7346 | 1.0479 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"language": ["hi"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "generated_from_trainer", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7.0", "type": "mozilla-foundation/common_voice_7_0", "args": "hi"}, "metrics": [{"type": "wer", "value": 38.18, "name": "Test WER"}]}]}]}
|
kapilkd13/xls-r-hi-test
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"generated_from_trainer",
"hf-asr-leaderboard",
"hi",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"hi"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #robust-speech-event #generated_from_trainer #hf-asr-leaderboard #hi #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - HI dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7346
* Wer: 1.0479
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* training\_steps: 8000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.1+cu102
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 8000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #robust-speech-event #generated_from_trainer #hf-asr-leaderboard #hi #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 8000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-squad
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0725
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0749 | 1.0 | 5533 | 1.0167 |
| 0.7851 | 2.0 | 11066 | 1.0299 |
| 0.6067 | 3.0 | 16599 | 1.0725 |
### Framework versions
- Transformers 4.8.1
- Pytorch 1.8.1
- Datasets 1.16.1
- Tokenizers 0.10.1
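For inference (a minimal sketch, not part of the original card), the checkpoint can be used with the question-answering pipeline:
```python
from transformers import pipeline

# Minimal sketch of extractive question answering with this checkpoint.
qa = pipeline(
    "question-answering",
    model="kaporter/bert-base-uncased-finetuned-squad",
)
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-uncased on the squad dataset.",
)
print(result["answer"], result["score"])
```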
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model_index": [{"name": "bert-base-uncased-finetuned-squad", "results": [{"task": {"name": "Question Answering", "type": "question-answering"}, "dataset": {"name": "squad", "type": "squad", "args": "plain_text"}}]}]}
|
kaporter/bert-base-uncased-finetuned-squad
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
bert-base-uncased-finetuned-squad
=================================
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0725
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.8.1
* Pytorch 1.8.1
* Datasets 1.16.1
* Tokenizers 0.10.1
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.8.1\n* Pytorch 1.8.1\n* Datasets 1.16.1\n* Tokenizers 0.10.1"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.8.1\n* Pytorch 1.8.1\n* Datasets 1.16.1\n* Tokenizers 0.10.1"
] |
null | null |
https://www.geogebra.org/m/cwcveget
https://www.geogebra.org/m/b8dzxk6z
https://www.geogebra.org/m/nqanttum
https://www.geogebra.org/m/pd3g8a4u
https://www.geogebra.org/m/jw8324jz
https://www.geogebra.org/m/wjbpvz5q
https://www.geogebra.org/m/qm3g3ma6
https://www.geogebra.org/m/sdajgph8
https://www.geogebra.org/m/e3ghhcbf
https://www.geogebra.org/m/msne4bfm
https://www.geogebra.org/m/nmcv2te5
https://www.geogebra.org/m/hguqx6cn
https://www.geogebra.org/m/jnyvpgqu
https://www.geogebra.org/m/syctd97g
https://www.geogebra.org/m/nq9erdby
https://www.geogebra.org/m/au4har8c
|
{}
|
katoensp/GG-12
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
|
[] |
[
"TAGS\n#region-us \n"
] |
null | null |
# Hello World!
This is a dummy repository.
Can be deleted.
|
{}
|
katrin-kc/dummy2
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
# Hello World!
This is a dummy repository.
Can be deleted.
|
[
"# Hello World!\n\nThis is a dummy repository.\nCan be deleted."
] |
[
"TAGS\n#region-us \n",
"# Hello World!\n\nThis is a dummy repository.\nCan be deleted."
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ru-finetuned
This model is a fine-tuned version of [kazandaev/opus-mt-en-ru-finetuned](https://huggingface.co/kazandaev/opus-mt-en-ru-finetuned) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7763
- Bleu: 41.0065
- Gen Len: 29.7548
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 49
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|
| 0.6903 | 1.0 | 35147 | 0.7779 | 40.9223 | 29.7846 |
| 0.6999 | 2.0 | 70294 | 0.7776 | 40.8267 | 29.8421 |
| 0.7257 | 3.0 | 105441 | 0.7769 | 40.8549 | 29.8765 |
| 0.7238 | 4.0 | 140588 | 0.7763 | 41.0225 | 29.7129 |
| 0.7313 | 5.0 | 175735 | 0.7763 | 41.0065 | 29.7548 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
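For inference (a minimal sketch, not part of the original card), the checkpoint can be used with the translation pipeline:
```python
from transformers import pipeline

# Minimal English-to-Russian translation sketch with this Marian checkpoint.
translator = pipeline(
    "translation",
    model="kazandaev/opus-mt-en-ru-finetuned",
)
print(translator("The weather is nice today.")[0]["translation_text"])
```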
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "model-index": [{"name": "opus-mt-en-ru-finetuned", "results": []}]}
|
kazandaev/opus-mt-en-ru-finetuned
| null |
[
"transformers",
"pytorch",
"tensorboard",
"rust",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #rust #marian #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
opus-mt-en-ru-finetuned
=======================
This model is a fine-tuned version of kazandaev/opus-mt-en-ru-finetuned on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7763
* Bleu: 41.0065
* Gen Len: 29.7548
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-06
* train\_batch\_size: 49
* eval\_batch\_size: 24
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-06\n* train\\_batch\\_size: 49\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #rust #marian #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-06\n* train\\_batch\\_size: 49\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ru-en-finetuned
This model is a fine-tuned version of [kazandaev/opus-mt-ru-en-finetuned](https://huggingface.co/kazandaev/opus-mt-ru-en-finetuned) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0399
- Bleu: 43.5078
- Gen Len: 26.1256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 49
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|
| 0.7462 | 1.0 | 35147 | 1.0422 | 43.3884 | 26.1742 |
| 0.7501 | 2.0 | 70294 | 1.0407 | 43.5296 | 26.1671 |
| 0.7471 | 3.0 | 105441 | 1.0402 | 43.5133 | 26.1118 |
| 0.7514 | 4.0 | 140588 | 1.0401 | 43.492 | 26.1529 |
| 0.7565 | 5.0 | 175735 | 1.0399 | 43.5078 | 26.1256 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"tags": ["generated_from_trainer"], "metrics": ["bleu"], "model-index": [{"name": "opus-mt-ru-en-finetuned", "results": []}]}
|
kazandaev/opus-mt-ru-en-finetuned
| null |
[
"transformers",
"pytorch",
"tensorboard",
"rust",
"marian",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #rust #marian #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
opus-mt-ru-en-finetuned
=======================
This model is a fine-tuned version of kazandaev/opus-mt-ru-en-finetuned on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0399
* Bleu: 43.5078
* Gen Len: 26.1256
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-06
* train\_batch\_size: 49
* eval\_batch\_size: 24
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-06\n* train\\_batch\\_size: 49\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #rust #marian #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-06\n* train\\_batch\\_size: 49\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text2text-generation
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 18413376
- CO2 Emissions (in grams): 1.4091714704861447
## Validation Metrics
- Loss: 0.26672711968421936
- Rouge1: 61.765
- Rouge2: 52.5778
- RougeL: 61.3222
- RougeLsum: 61.1905
- Gen Len: 18.7805
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/kbhugging/autonlp-text2sql-18413376
```
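The same request can be made from Python with the `requests` library (an equivalent of the cURL call above; `YOUR_HUGGINGFACE_API_KEY` is a placeholder for a valid Inference API token):
```python
import requests

# Python equivalent of the cURL example above.
API_URL = "https://api-inference.huggingface.co/kbhugging/autonlp-text2sql-18413376"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}
response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoNLP"})
print(response.json())
```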
|
{"language": "unk", "tags": "autonlp", "datasets": ["kbhugging/autonlp-data-text2sql"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 1.4091714704861447}
|
kbhugging/autonlp-text2sql-18413376
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autonlp",
"unk",
"dataset:kbhugging/autonlp-data-text2sql",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"unk"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #autonlp #unk #dataset-kbhugging/autonlp-data-text2sql #co2_eq_emissions #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 18413376
- CO2 Emissions (in grams): 1.4091714704861447
## Validation Metrics
- Loss: 0.26672711968421936
- Rouge1: 61.765
- Rouge2: 52.5778
- RougeL: 61.3222
- RougeLsum: 61.1905
- Gen Len: 18.7805
## Usage
You can use cURL to access this model:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 18413376\n- CO2 Emissions (in grams): 1.4091714704861447",
"## Validation Metrics\n\n- Loss: 0.26672711968421936\n- Rouge1: 61.765\n- Rouge2: 52.5778\n- RougeL: 61.3222\n- RougeLsum: 61.1905\n- Gen Len: 18.7805",
"## Usage\n\nYou can use cURL to access this model:"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #autonlp #unk #dataset-kbhugging/autonlp-data-text2sql #co2_eq_emissions #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 18413376\n- CO2 Emissions (in grams): 1.4091714704861447",
"## Validation Metrics\n\n- Loss: 0.26672711968421936\n- Rouge1: 61.765\n- Rouge2: 52.5778\n- RougeL: 61.3222\n- RougeLsum: 61.1905\n- Gen Len: 18.7805",
"## Usage\n\nYou can use cURL to access this model:"
] |
text-generation
|
transformers
|
# DIO DialoGPT Model
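The card gives no usage snippet; as a minimal sketch following the standard DialoGPT generation pattern (not something stated by the author), the model can be queried like this:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal chat sketch following the usual DialoGPT usage pattern.
tokenizer = AutoTokenizer.from_pretrained("kche0138/DialoGPT-medium-DIO")
model = AutoModelForCausalLM.from_pretrained("kche0138/DialoGPT-medium-DIO")

user_input = "Hello, how are you?"
input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")

# Generate a reply and decode only the newly generated tokens.
output_ids = model.generate(
    input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id
)
reply = tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```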
|
{"tags": ["conversational"]}
|
kche0138/DialoGPT-medium-DIO
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# DIO DialoGPT Model
|
[
"# DIO DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# DIO DialoGPT Model"
] |
null |
transformers
|
## References
- [koGPT2](https://github.com/SKT-AI/KoGPT2)
- [koGPT2-chatbot](https://github.com/haven-jeon/KoGPT2-chatbot)
|
{}
|
kco4776/kogpt-chat
| null |
[
"transformers",
"pytorch",
"gpt2",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #endpoints_compatible #has_space #text-generation-inference #region-us
|
## References
- koGPT2
- koGPT2-chatbot
|
[
"## References\n- koGPT2\n- koGPT2-chatbot"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"## References\n- koGPT2\n- koGPT2-chatbot"
] |
text-classification
|
transformers
|
## References
- [Soongsil-BERT](https://github.com/jason9693/Soongsil-BERT)
|
{}
|
kco4776/soongsil-bert-wellness
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #roberta #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
## References
- Soongsil-BERT
|
[
"## References\n- Soongsil-BERT"
] |
[
"TAGS\n#transformers #pytorch #roberta #text-classification #autotrain_compatible #endpoints_compatible #region-us \n",
"## References\n- Soongsil-BERT"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola-2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9235
- Matthews Correlation: 0.6016
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4906 | 1.0 | 535 | 0.5046 | 0.5080 |
| 0.2901 | 2.0 | 1070 | 0.5881 | 0.5235 |
| 0.1818 | 3.0 | 1605 | 0.7253 | 0.5584 |
| 0.1177 | 4.0 | 2140 | 0.8316 | 0.5927 |
| 0.0826 | 5.0 | 2675 | 0.9235 | 0.6016 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
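For inference (a minimal sketch, not part of the original card), the checkpoint can be loaded directly and its output converted to per-label probabilities:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Minimal sketch: score a sentence for linguistic acceptability.
tokenizer = AutoTokenizer.from_pretrained("kdo6301/bert-base-uncased-finetuned-cola-2")
model = AutoModelForSequenceClassification.from_pretrained("kdo6301/bert-base-uncased-finetuned-cola-2")

inputs = tokenizer("They read the book yesterday.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)

# Label names come from the model config; the usual CoLA convention is
# 0 = unacceptable, 1 = acceptable, which is an assumption here.
print({model.config.id2label[i]: p.item() for i, p in enumerate(probs[0])})
```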
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "bert-base-uncased-finetuned-cola-2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.6015706950519473, "name": "Matthews Correlation"}]}]}]}
|
kdo6301/bert-base-uncased-finetuned-cola-2
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
bert-base-uncased-finetuned-cola-2
==================================
This model is a fine-tuned version of bert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9235
* Matthews Correlation: 0.6016
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9089
- Matthews Correlation: 0.5640
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4864 | 1.0 | 535 | 0.4689 | 0.5232 |
| 0.2864 | 2.0 | 1070 | 0.5835 | 0.5296 |
| 0.1884 | 3.0 | 1605 | 0.6953 | 0.5458 |
| 0.1263 | 4.0 | 2140 | 0.8082 | 0.5625 |
| 0.0832 | 5.0 | 2675 | 0.9089 | 0.5640 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "bert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5640063794282216, "name": "Matthews Correlation"}]}]}]}
|
kdo6301/bert-base-uncased-finetuned-cola
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
bert-base-uncased-finetuned-cola
================================
This model is a fine-tuned version of bert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9089
* Matthews Correlation: 0.5640
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
sentence-similarity
|
sentence-transformers
|
# vietnamese-sbert
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search on Vietnamese text.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Cô giáo đang ăn kem", "Chị gái đang thử món thịt dê"]
model = SentenceTransformer('keepitreal/vietnamese-sbert')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['Cô giáo đang ăn kem', 'Chị gái đang thử món thịt dê']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('keepitreal/vietnamese-sbert')
model = AutoModel.from_pretrained('keepitreal/vietnamese-sbert')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=keepitreal/vietnamese-sbert)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 360 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 144,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers", "vietnamese"], "pipeline_tag": "sentence-similarity"}
|
keepitreal/vietnamese-sbert
| null |
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"vietnamese",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #pytorch #roberta #feature-extraction #sentence-similarity #transformers #vietnamese #endpoints_compatible #has_space #region-us
|
# {vietnamese-sbert}
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search on Vietnamese language.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 360 with parameters:
Loss:
'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
|
[
"# {vietnamese-sbert}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search on Vietnamese language.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 360 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #pytorch #roberta #feature-extraction #sentence-similarity #transformers #vietnamese #endpoints_compatible #has_space #region-us \n",
"# {vietnamese-sbert}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search on Vietnamese language.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 360 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
fill-mask
|
transformers
|
## albert-base-japanese-v1
This is an ALBERT model pretrained on Japanese text.
## How to use
### Fine-tuning
This is a pretrained model.
It is primarily intended to be fine-tuned for individual downstream tasks before use.
### Fill-Mask
This model uses SentencePiece for its tokenizer.
Out of the box there is a [known issue where an extra token sneaks in after the `[MASK]` token](https://ken11.jp/blog/sentencepiece-tokenizer-bug), so the model has to be used as shown below.
#### for PyTorch
```py
from transformers import (
AlbertForMaskedLM, AlbertTokenizerFast
)
import torch
tokenizer = AlbertTokenizerFast.from_pretrained("ken11/albert-base-japanese-v1")
model = AlbertForMaskedLM.from_pretrained("ken11/albert-base-japanese-v1")
text = "大学で[MASK]の研究をしています"
tokenized_text = tokenizer.tokenize(text)
del tokenized_text[tokenized_text.index(tokenizer.mask_token) + 1]
input_ids = [tokenizer.cls_token_id]
input_ids.extend(tokenizer.convert_tokens_to_ids(tokenized_text))
input_ids.append(tokenizer.sep_token_id)
inputs = {"input_ids": [input_ids], "token_type_ids": [[0]*len(input_ids)], "attention_mask": [[1]*len(input_ids)]}
batch = {k: torch.tensor(v, dtype=torch.int64) for k, v in inputs.items()}
output = model(**batch)[0]
_, result = output[0, input_ids.index(tokenizer.mask_token_id)].topk(5)
print(tokenizer.convert_ids_to_tokens(result.tolist()))
# ['英語', '心理学', '数学', '医学', '日本語']
```
#### for TensorFlow
```py
from transformers import (
TFAlbertForMaskedLM, AlbertTokenizerFast
)
import tensorflow as tf
tokenizer = AlbertTokenizerFast.from_pretrained("ken11/albert-base-japanese-v1")
model = TFAlbertForMaskedLM.from_pretrained("ken11/albert-base-japanese-v1")
text = "大学で[MASK]の研究をしています"
tokenized_text = tokenizer.tokenize(text)
del tokenized_text[tokenized_text.index(tokenizer.mask_token) + 1]
input_ids = [tokenizer.cls_token_id]
input_ids.extend(tokenizer.convert_tokens_to_ids(tokenized_text))
input_ids.append(tokenizer.sep_token_id)
inputs = {"input_ids": [input_ids], "token_type_ids": [[0]*len(input_ids)], "attention_mask": [[1]*len(input_ids)]}
batch = {k: tf.convert_to_tensor(v, dtype=tf.int32) for k, v in inputs.items()}
output = model(**batch)[0]
result = tf.math.top_k(output[0, input_ids.index(tokenizer.mask_token_id)], k=5)
print(tokenizer.convert_ids_to_tokens(result.indices.numpy()))
# ['英語', '心理学', '数学', '医学', '日本語']
```
## Training Data
The following corpora were used for training:
- [the full text of Japanese Wikipedia](https://ja.wikipedia.org/wiki/Wikipedia:%E3%83%87%E3%83%BC%E3%82%BF%E3%83%99%E3%83%BC%E3%82%B9%E3%83%80%E3%82%A6%E3%83%B3%E3%83%AD%E3%83%BC%E3%83%89)
- [the livedoor news corpus](https://www.rondhuit.com/download.html#ldcc)
## Tokenizer
The tokenizer uses [SentencePiece](https://github.com/google/sentencepiece).
It was trained on the same data as above.
## License
[The MIT license](https://opensource.org/licenses/MIT)
|
{"language": ["ja"], "license": "mit", "tags": ["fill-mask", "japanese", "albert"], "widget": [{"text": "2022\u5e74\u306e[MASK]\u6982\u8981"}]}
|
ken11/albert-base-japanese-v1
| null |
[
"transformers",
"pytorch",
"tf",
"albert",
"fill-mask",
"japanese",
"ja",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ja"
] |
TAGS
#transformers #pytorch #tf #albert #fill-mask #japanese #ja #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
## albert-base-japanese-v1
日本語事前学習済みALBERTモデルです
## How to use
### ファインチューニング
このモデルはPreTrainedモデルです
基本的には各種タスク用にファインチューニングして使用されることを想定しています
### Fill-Mask
このモデルではTokenizerにSentencepieceを利用しています
そのままでは'[MASK]'トークンのあとに余計なトークンが混入する問題があるので、利用する際には以下のようにする必要があります
#### for PyTorch
#### for TensorFlow
## Training Data
学習には
- 日本語Wikipediaの全文
- livedoorニュースコーパス
を利用しています
## Tokenizer
トークナイザーはSentencepieceを利用しています
こちらも学習データは同様です
## Licenese
The MIT license
|
[
"## albert-base-japanese-v1\n日本語事前学習済みALBERTモデルです",
"## How to use",
"### ファインチューニング\nこのモデルはPreTrainedモデルです \n基本的には各種タスク用にファインチューニングして使用されることを想定しています",
"### Fill-Mask\nこのモデルではTokenizerにSentencepieceを利用しています \nそのままでは'[MASK]'トークンのあとに余計なトークンが混入する問題があるので、利用する際には以下のようにする必要があります",
"#### for PyTorch",
"#### for TensorFlow",
"## Training Data\n学習には\n- 日本語Wikipediaの全文\n- livedoorニュースコーパス\n\nを利用しています",
"## Tokenizer\nトークナイザーはSentencepieceを利用しています \nこちらも学習データは同様です",
"## Licenese\nThe MIT license"
] |
[
"TAGS\n#transformers #pytorch #tf #albert #fill-mask #japanese #ja #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"## albert-base-japanese-v1\n日本語事前学習済みALBERTモデルです",
"## How to use",
"### ファインチューニング\nこのモデルはPreTrainedモデルです \n基本的には各種タスク用にファインチューニングして使用されることを想定しています",
"### Fill-Mask\nこのモデルではTokenizerにSentencepieceを利用しています \nそのままでは'[MASK]'トークンのあとに余計なトークンが混入する問題があるので、利用する際には以下のようにする必要があります",
"#### for PyTorch",
"#### for TensorFlow",
"## Training Data\n学習には\n- 日本語Wikipediaの全文\n- livedoorニュースコーパス\n\nを利用しています",
"## Tokenizer\nトークナイザーはSentencepieceを利用しています \nこちらも学習データは同様です",
"## Licenese\nThe MIT license"
] |
token-classification
|
transformers
|
## bert-japanese-ner
This model targets Japanese named entity recognition: it takes [the Japanese pretrained BERT model released by the Kurohashi-Chu-Murawaki Laboratory at Kyoto University](https://nlp.ist.i.kyoto-u.ac.jp/?ku_bert_japanese) as its base and fine-tunes it on [the ner-wikipedia-dataset published by Stockmark Inc.](https://github.com/stockmarkteam/ner-wikipedia-dataset).
## How to use
This model uses the tokenizer of the Kyoto University Japanese pretrained BERT model mentioned above.
The tokenizer is not included in this repository.
Please download it separately before use.
In addition to the tokenizer, [Juman++](https://nlp.ist.i.kyoto-u.ac.jp/?JUMAN%2B%2B) and [pyknp](https://nlp.ist.i.kyoto-u.ac.jp/?PyKNP) are required.
Please install them beforehand.
```py
from transformers import (
    BertForTokenClassification, BertTokenizer
)
from pyknp import Juman
import numpy as np  # needed for np.argmax below
jumanpp = Juman()
tokenizer = BertTokenizer.from_pretrained("ダウンロードした京都大学のTokenizerのファイルパス")
model = BertForTokenClassification.from_pretrained("ken11/bert-japanese-ner")
text = "なにか文章"
juman_result = jumanpp.analysis(text)
tokenized_text = [mrph.midasi for mrph in juman_result.mrph_list()]
inputs = tokenizer(tokenized_text, return_tensors="pt", padding='max_length', truncation=True, max_length=64, is_split_into_words=True)
pred = model(**inputs).logits[0]
pred = np.argmax(pred.detach().numpy(), axis=-1)
labels = []
for i, label in enumerate(pred):
if i + 1 > len(tokenized_text):
continue
labels.append(model.config.id2label[label])
print(f"{tokenized_text[i]}: {model.config.id2label[label]}")
print(tokenized_text)
print(labels)
```
## Training Data
Training used [the ner-wikipedia-dataset published by Stockmark Inc.](https://github.com/stockmarkteam/ner-wikipedia-dataset).
Thank you for releasing such a convenient dataset.
## Note
The named-entity labels are those of the training dataset, converted to the BILUO scheme.
See [the ner-wikipedia-dataset overview](https://github.com/stockmarkteam/ner-wikipedia-dataset#%E6%A6%82%E8%A6%81) for label details.
## License
[The MIT license](https://opensource.org/licenses/MIT)
|
{"language": ["ja"], "license": "mit", "tags": ["ner", "token-classification", "japanese", "bert"]}
|
ken11/bert-japanese-ner
| null |
[
"transformers",
"pytorch",
"bert",
"token-classification",
"ner",
"japanese",
"ja",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ja"
] |
TAGS
#transformers #pytorch #bert #token-classification #ner #japanese #ja #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
## bert-japanese-ner
このモデルは日本語の固有表現抽出タスクを目的として、京都大学 黒橋・褚・村脇研究室が公開しているBERT日本語Pretrainedモデルをベースにストックマーク株式会社が公開しているner-wikipedia-datasetでファインチューニングしたものです。
## How to use
このモデルはTokenizerに上述の京都大学BERT日本語PretrainedモデルのTokenizerを利用します。
当リポジトリにTokenizerは含まれていません。
利用する際は別途ダウンロードしてご用意ください。
また、Tokenizerとは別にJuman++とpyknpを利用します。
予めインストールしておいてください。
## Training Data
学習にはストックマーク株式会社が公開しているner-wikipedia-datasetを利用しました。
便利なデータセットを公開していただきありがとうございます。
## Note
固有表現抽出のラベルは学習データセットのものをBILUO形式に変換して使用しています。
ラベルの詳細についてはner-wikipedia-datasetの概要をご確認ください。
## Licenese
The MIT license
|
[
"## bert-japanese-ner\nこのモデルは日本語の固有表現抽出タスクを目的として、京都大学 黒橋・褚・村脇研究室が公開しているBERT日本語Pretrainedモデルをベースにストックマーク株式会社が公開しているner-wikipedia-datasetでファインチューニングしたものです。",
"## How to use\nこのモデルはTokenizerに上述の京都大学BERT日本語PretrainedモデルのTokenizerを利用します。 \n当リポジトリにTokenizerは含まれていません。 \n利用する際は別途ダウンロードしてご用意ください。 \n \nまた、Tokenizerとは別にJuman++とpyknpを利用します。 \n予めインストールしておいてください。",
"## Training Data\n学習にはストックマーク株式会社が公開しているner-wikipedia-datasetを利用しました。 \n便利なデータセットを公開していただきありがとうございます。",
"## Note\n固有表現抽出のラベルは学習データセットのものをBILUO形式に変換して使用しています。 \nラベルの詳細についてはner-wikipedia-datasetの概要をご確認ください。",
"## Licenese\nThe MIT license"
] |
[
"TAGS\n#transformers #pytorch #bert #token-classification #ner #japanese #ja #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"## bert-japanese-ner\nこのモデルは日本語の固有表現抽出タスクを目的として、京都大学 黒橋・褚・村脇研究室が公開しているBERT日本語Pretrainedモデルをベースにストックマーク株式会社が公開しているner-wikipedia-datasetでファインチューニングしたものです。",
"## How to use\nこのモデルはTokenizerに上述の京都大学BERT日本語PretrainedモデルのTokenizerを利用します。 \n当リポジトリにTokenizerは含まれていません。 \n利用する際は別途ダウンロードしてご用意ください。 \n \nまた、Tokenizerとは別にJuman++とpyknpを利用します。 \n予めインストールしておいてください。",
"## Training Data\n学習にはストックマーク株式会社が公開しているner-wikipedia-datasetを利用しました。 \n便利なデータセットを公開していただきありがとうございます。",
"## Note\n固有表現抽出のラベルは学習データセットのものをBILUO形式に変換して使用しています。 \nラベルの詳細についてはner-wikipedia-datasetの概要をご確認ください。",
"## Licenese\nThe MIT license"
] |
translation
|
transformers
|
## mbart-ja-en
このモデルは[facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25)をベースに[JESC dataset](https://nlp.stanford.edu/projects/jesc/index_ja.html)でファインチューニングしたものです。
This model is based on [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) and fine-tuned with [JESC dataset](https://nlp.stanford.edu/projects/jesc/index_ja.html).
## How to use
```py
from transformers import (
MBartForConditionalGeneration, MBartTokenizer
)
tokenizer = MBartTokenizer.from_pretrained("ken11/mbart-ja-en")
model = MBartForConditionalGeneration.from_pretrained("ken11/mbart-ja-en")
inputs = tokenizer("こんにちは", return_tensors="pt")
translated_tokens = model.generate(**inputs, decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"], early_stopping=True, max_length=48)
pred = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
print(pred)
```
## Training Data
I used the [JESC dataset](https://nlp.stanford.edu/projects/jesc/index_ja.html) for training.
Thank you for publishing such a large dataset.
## Tokenizer
The tokenizer uses the [sentencepiece](https://github.com/google/sentencepiece) trained on the JESC dataset.
## Note
The result of evaluating the sacrebleu score for [JEC Basic Sentence Data of Kyoto University](https://nlp.ist.i.kyoto-u.ac.jp/EN/?JEC+Basic+Sentence+Data#i0163896) was `18.18` .
## License
[The MIT license](https://opensource.org/licenses/MIT)
|
{"language": ["ja", "en"], "license": "mit", "tags": ["translation", "japanese"], "widget": [{"text": "\u4eca\u65e5\u3082\u3054\u5b89\u5168\u306b"}]}
|
ken11/mbart-ja-en
| null |
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"translation",
"japanese",
"ja",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ja",
"en"
] |
TAGS
#transformers #pytorch #mbart #text2text-generation #translation #japanese #ja #en #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
## mbart-ja-en
このモデルはfacebook/mbart-large-cc25をベースにJESC datasetでファインチューニングしたものです。
This model is based on facebook/mbart-large-cc25 and fine-tuned with JESC dataset.
## How to use
## Training Data
I used the JESC dataset for training.
Thank you for publishing such a large dataset.
## Tokenizer
The tokenizer uses the sentencepiece trained on the JESC dataset.
## Note
The result of evaluating the sacrebleu score for JEC Basic Sentence Data of Kyoto University was '18.18' .
## Licenese
The MIT license
|
[
"## mbart-ja-en\nこのモデルはfacebook/mbart-large-cc25をベースにJESC datasetでファインチューニングしたものです。 \nThis model is based on facebook/mbart-large-cc25 and fine-tuned with JESC dataset.",
"## How to use",
"## Training Data\nI used the JESC dataset for training. \nThank you for publishing such a large dataset.",
"## Tokenizer\nThe tokenizer uses the sentencepiece trained on the JESC dataset.",
"## Note\nThe result of evaluating the sacrebleu score for JEC Basic Sentence Data of Kyoto University was '18.18' .",
"## Licenese\nThe MIT license"
] |
[
"TAGS\n#transformers #pytorch #mbart #text2text-generation #translation #japanese #ja #en #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"## mbart-ja-en\nこのモデルはfacebook/mbart-large-cc25をベースにJESC datasetでファインチューニングしたものです。 \nThis model is based on facebook/mbart-large-cc25 and fine-tuned with JESC dataset.",
"## How to use",
"## Training Data\nI used the JESC dataset for training. \nThank you for publishing such a large dataset.",
"## Tokenizer\nThe tokenizer uses the sentencepiece trained on the JESC dataset.",
"## Note\nThe result of evaluating the sacrebleu score for JEC Basic Sentence Data of Kyoto University was '18.18' .",
"## Licenese\nThe MIT license"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
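The card gives no usage example; a minimal extractive-QA sketch with the `transformers` pipeline (the question/context pair below is made up purely for illustration) would look like:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for extractive question answering.
qa = pipeline(
    "question-answering",
    model="kenlevine/distilbert-base-uncased-finetuned-squad",
)
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```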
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]}
|
kenlevine/distilbert-base-uncased-finetuned-squad
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of distilbert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
[
"# distilbert-base-uncased-finetuned-squad\n\nThis model is a fine-tuned version of distilbert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.10.0+cu111\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# distilbert-base-uncased-finetuned-squad\n\nThis model is a fine-tuned version of distilbert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.10.0+cu111\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] |
null | null |
This is an example of how a kenLM model can be downloaded with [PyCTCDecode](https://github.com/kensho-technologies/pyctcdecode).
Simply run the following code:
```python
from pyctcdecode import LanguageModel
language_model = LanguageModel.load_from_hf_hub("kensho/5gram-spanish-kenLM")
```
The model was trained by [Patrick von Platen](https://huggingface.co/patrickvonplaten) for demonstration purposes.
|
{}
|
kensho/5gram-spanish-kenLM
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
This is an example of how a kenLM model can be downloaded with PyCTCDecode .
Simply run the following code:
The model was trained by Patrick von Platen for demonstration purposes.
|
[] |
[
"TAGS\n#region-us \n"
] |
null | null |
This is an example of how a kenLM model can be downloaded with [PyCTCDecode](https://github.com/kensho-technologies/pyctcdecode).
Simply run the following code:
```python
from pyctcdecode import BeamSearchDecoderCTC
decoder = BeamSearchDecoderCTC.load_from_hf_hub("kensho/beamsearch_decoder_dummy")
```
The model was created by [Patrick von Platen](https://huggingface.co/patrickvonplaten) for demonstration purposes.
|
{}
|
kensho/beamsearch_decoder_dummy
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
This is an example of how a kenLM model can be downloaded with PyCTCDecode .
Simply run the following code:
The model was created by Patrick von Platen for demonstration purposes.
|
[] |
[
"TAGS\n#region-us \n"
] |
null | null |
This is an example of how a kenLM model can be downloaded with [PyCTCDecode](https://github.com/kensho-technologies/pyctcdecode).
Simply run the following code:
```python
from pyctcdecode import LanguageModel
language_model = LanguageModel.load_from_hf_hub("kensho/dummy_full_language_model")
```
The model was created by [Patrick von Platen](https://huggingface.co/patrickvonplaten) for demonstration purposes.
|
{}
|
kensho/dummy_full_language_model
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
This is an example of how a kenLM model can be downloaded with PyCTCDecode .
Simply run the following code:
The model was created by Patrick von Platen for demonstration purposes.
|
[] |
[
"TAGS\n#region-us \n"
] |
null | null |
Used for testing of [`pyctcdecode`](https://github.com/kensho-technologies/pyctcdecode).
|
{}
|
kensho/testing_dummy_kenlm
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
Used for testing of 'pyctcdecode'.
|
[] |
[
"TAGS\n#region-us \n"
] |
null |
keras
|
## Keras Implementation of CycleGAN model using [Horse to Zebra dataset](https://www.tensorflow.org/datasets/catalog/cycle_gan#cycle_ganhorse2zebra) 🐴 -> 🦓
This repo contains the model and the notebook [to this Keras example on CycleGAN](https://keras.io/examples/generative/cyclegan/).
Full credits to: [Aakash Kumar Nain](https://twitter.com/A_K_Nain)
## Background Information
CycleGAN is a model that aims to solve the image-to-image translation problem. The goal of the image-to-image translation problem is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, obtaining paired examples isn't always feasible. CycleGAN tries to learn this mapping without requiring paired input-output images, using cycle-consistent adversarial networks.

|
{"license": ["cc0-1.0"], "tags": ["gan", "computer vision", "horse to zebra"]}
|
keras-io/CycleGAN
| null |
[
"keras",
"gan",
"computer vision",
"horse to zebra",
"license:cc0-1.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#keras #gan #computer vision #horse to zebra #license-cc0-1.0 #has_space #region-us
|
## Keras Implementation of CycleGAN model using Horse to Zebra dataset ->
This repo contains the model and the notebook to this Keras example on CycleGAN.
Full credits to: Aakash Kumar Nain
## Background Information
CycleGAN is a model that aims to solve the image-to-image translation problem. The goal of the image-to-image translation problem is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, obtaining paired examples isn't always feasible. CycleGAN tries to learn this mapping without requiring paired input-output images, using cycle-consistent adversarial networks.
!CycleGAN
|
[
"## Keras Implementation of CycleGAN model using Horse to Zebra dataset -> \n\nThis repo contains the model and the notebook to this Keras example on CycleGAN.\n\nFull credits to: Aakash Kumar Nain",
"## Background Information \nCycleGAN is a model that aims to solve the image-to-image translation problem. The goal of the image-to-image translation problem is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, obtaining paired examples isn't always feasible. CycleGAN tries to learn this mapping without requiring paired input-output images, using cycle-consistent adversarial networks.\n!CycleGAN"
] |
[
"TAGS\n#keras #gan #computer vision #horse to zebra #license-cc0-1.0 #has_space #region-us \n",
"## Keras Implementation of CycleGAN model using Horse to Zebra dataset -> \n\nThis repo contains the model and the notebook to this Keras example on CycleGAN.\n\nFull credits to: Aakash Kumar Nain",
"## Background Information \nCycleGAN is a model that aims to solve the image-to-image translation problem. The goal of the image-to-image translation problem is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, obtaining paired examples isn't always feasible. CycleGAN tries to learn this mapping without requiring paired input-output images, using cycle-consistent adversarial networks.\n!CycleGAN"
] |
image-classification
|
generic
|
## Image-Classification-using-EANet with Keras
This repo contains the model and the notebook on [Image Classification using EANet with Keras](https://keras.io/examples/vision/eanet/).
Credits: [ZhiYong Chang](https://github.com/czy00000) - Original Author
HF Contribution: [Drishti Sharma](https://huggingface.co/spaces/DrishtiSharma)
### Introduction
This example implements the EANet model for image classification, and demonstrates it on the [CIFAR-100](https://huggingface.co/datasets/cifar100) dataset. EANet introduces a novel attention mechanism named external attention, based on two external, small, learnable, and shared memories, which can be implemented easily by simply using two cascaded linear layers and two normalization layers. It conveniently replaces self-attention as used in existing architectures. External attention has linear complexity, as it only implicitly considers the correlations between all samples.
### Implementation of the EANet model
The EANet model leverages external attention. The computational complexity of traditional self-attention is O(d * N ** 2), where d is the embedding size and N is the number of patches. The authors find that most pixels are closely related to just a few other pixels, and an N-to-N attention matrix may be redundant. So, they propose as an alternative an external attention module whose computational complexity is O(d * S * N). As d and S are hyper-parameters, the proposed algorithm is linear in the number of pixels. In fact, this is equivalent to a drop-patch operation, because a lot of the information contained in a patch of an image is redundant and unimportant.
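To make the linear-complexity argument concrete, here is a condensed sketch of the external-attention block, closely following the original Keras example (dropout and a few details are omitted, so treat it as illustrative rather than the exact published code):

```python
import tensorflow as tf
from tensorflow.keras import layers

def external_attention(x, dim, num_heads=4, dim_coefficient=4):
    # x: (batch, num_patches, channels) patch embeddings.
    _, num_patches, _ = x.shape
    x = layers.Dense(dim * dim_coefficient)(x)
    # Split into heads: (batch, heads, patches, head_dim).
    x = tf.reshape(x, (-1, num_patches, num_heads, dim * dim_coefficient // num_heads))
    x = tf.transpose(x, perm=[0, 2, 1, 3])
    # First shared memory M_k: a small linear map, then double normalization.
    attn = layers.Dense(dim // dim_coefficient)(x)
    attn = layers.Softmax(axis=2)(attn)
    attn = attn / (1e-9 + tf.reduce_sum(attn, axis=-1, keepdims=True))
    # Second shared memory M_v maps back; the cost stays linear in the patch count.
    x = layers.Dense(dim * dim_coefficient // num_heads)(attn)
    x = tf.transpose(x, perm=[0, 2, 1, 3])
    x = tf.reshape(x, (-1, num_patches, dim * dim_coefficient))
    return layers.Dense(dim)(x)
```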
|
{"language": ["en"], "license": "apache-2.0", "library_name": "generic", "tags": ["keras", "tensorflow", "image-classification"], "metrics": ["accuracy"], "libraries": "TensorBoard", "model-index": [{"name": "Image-Classification-using-EANet", "results": [{"task": {"type": "Image-Classification-using-EANet"}, "dataset": {"name": "CIFAR100", "type": "Image"}, "metrics": [{"type": "accuracy", "value": []}, {"type": "validation loss", "value": []}]}]}]}
|
keras-io/Image-Classification-using-EANet
| null |
[
"generic",
"tensorboard",
"keras",
"tensorflow",
"image-classification",
"en",
"license:apache-2.0",
"model-index",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#generic #tensorboard #keras #tensorflow #image-classification #en #license-apache-2.0 #model-index #has_space #region-us
|
## Image-Classification-using-EANet with Keras
This repo contains the model and the notebook on Image Classification using EANet with Keras.
Credits: ZhiYong Chang - Original Author
HF Contribution: Drishti Sharma
### Introduction
This example implements the EANet model for image classification, and demonstrates it on the CIFAR-100 dataset. EANet introduces a novel attention mechanism named external attention, based on two external, small, learnable, and shared memories, which can be implemented easily by simply using two cascaded linear layers and two normalization layers. It conveniently replaces self-attention as used in existing architectures. External attention has linear complexity, as it only implicitly considers the correlations between all samples.
### Implemention of the EANet model
The EANet model leverages external attention. The computational complexity of traditional self attention is O(d * N 2), where d is the embedding size, and N is the number of patch. The authors find that most pixels are closely related to just a few other pixels, and an N-to-N attention matrix may be redundant. So, they propose as an alternative an external attention module where the computational complexity of external attention is O(d * S * N). As d and S are hyper-parameters, the proposed algorithm is linear in the number of pixels. In fact, this is equivalent to a drop patch operation, because a lot of information contained in a patch in an image is redundant and unimportant.
|
[
"## Image-Classification-using-EANet with Keras\n\nThis repo contains the model and the notebook on Image Classification using EANet with Keras.\n\nCredits: ZhiYong Chang - Original Author\n\nHF Contribution: Drishti Sharma",
"### Introduction\n\nThis example implements the EANet model for image classification, and demonstrates it on the CIFAR-100 dataset. EANet introduces a novel attention mechanism named external attention, based on two external, small, learnable, and shared memories, which can be implemented easily by simply using two cascaded linear layers and two normalization layers. It conveniently replaces self-attention as used in existing architectures. External attention has linear complexity, as it only implicitly considers the correlations between all samples.",
"### Implemention of the EANet model\n\nThe EANet model leverages external attention. The computational complexity of traditional self attention is O(d * N 2), where d is the embedding size, and N is the number of patch. The authors find that most pixels are closely related to just a few other pixels, and an N-to-N attention matrix may be redundant. So, they propose as an alternative an external attention module where the computational complexity of external attention is O(d * S * N). As d and S are hyper-parameters, the proposed algorithm is linear in the number of pixels. In fact, this is equivalent to a drop patch operation, because a lot of information contained in a patch in an image is redundant and unimportant."
] |
[
"TAGS\n#generic #tensorboard #keras #tensorflow #image-classification #en #license-apache-2.0 #model-index #has_space #region-us \n",
"## Image-Classification-using-EANet with Keras\n\nThis repo contains the model and the notebook on Image Classification using EANet with Keras.\n\nCredits: ZhiYong Chang - Original Author\n\nHF Contribution: Drishti Sharma",
"### Introduction\n\nThis example implements the EANet model for image classification, and demonstrates it on the CIFAR-100 dataset. EANet introduces a novel attention mechanism named external attention, based on two external, small, learnable, and shared memories, which can be implemented easily by simply using two cascaded linear layers and two normalization layers. It conveniently replaces self-attention as used in existing architectures. External attention has linear complexity, as it only implicitly considers the correlations between all samples.",
"### Implemention of the EANet model\n\nThe EANet model leverages external attention. The computational complexity of traditional self attention is O(d * N 2), where d is the embedding size, and N is the number of patch. The authors find that most pixels are closely related to just a few other pixels, and an N-to-N attention matrix may be redundant. So, they propose as an alternative an external attention module where the computational complexity of external attention is O(d * S * N). As d and S are hyper-parameters, the proposed algorithm is linear in the number of pixels. In fact, this is equivalent to a drop patch operation, because a lot of information contained in a patch in an image is redundant and unimportant."
] |
tabular-classification
|
keras
|
# TensorFlow's Gradient Boosted Trees Model for structured data classification
Use TF's Gradient Boosted Trees model in binary classification of structured data <br />
* Build a decision forests model by specifying the input feature usage.
* Implement a custom Binary Target encoder as a Keras Preprocessing layer to encode the categorical features with respect to their target value co-occurrences, and then use the encoded features to build a decision forests model.<br />
The model is implemented using TensorFlow 2.7.0 or higher. The US Census Income Dataset, containing approximately 300k instances with 41 numerical and categorical variables, was used to train it. This is a binary classification problem to determine whether a person makes over 50k a year.<br />
Author: Khalid Salama
Adapted implementation: Tannia Dubon
Find the colab notebook at https://github.com/tdubon/TF-GB-Forest/blob/c0cf4c7e3e29d819b996cfe4eecc1f2728115e52/TFDecisionTrees_Final.ipynb
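As a rough, hypothetical sketch of the first approach (specifying feature usages for a Gradient Boosted Trees model) with `tensorflow_decision_forests`; the column names and exact API calls below are assumptions to check against the linked notebook:

```python
import pandas as pd
import tensorflow_decision_forests as tfdf

# Hypothetical CSV and column names standing in for the census data.
train_df = pd.read_csv("census_train.csv")

# Declare how selected input features should be consumed by the model.
features = [
    tfdf.keras.FeatureUsage(name="age", semantic=tfdf.keras.FeatureSemantic.NUMERICAL),
    tfdf.keras.FeatureUsage(name="education", semantic=tfdf.keras.FeatureSemantic.CATEGORICAL),
]

model = tfdf.keras.GradientBoostedTreesModel(features=features)
model.compile(metrics=["accuracy"])
model.fit(tfdf.keras.pd_dataframe_to_tf_dataset(train_df, label="income_bracket"))
```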
|
{"license": "apache-2.0", "library_name": "keras", "tags": ["tabular-classification", "keras", "tensorflow"], "metrics": ["accuracy"], "model-index": [{"name": "TF_Decision_Trees", "results": [{"task": {"type": "structured-data-classification"}, "dataset": {"name": "Census-Income Data Set", "type": "census"}, "metrics": [{"type": "accuracy", "value": 96.57}, {"type": "validation loss", "value": 0.227394}]}]}]}
|
keras-io/TF_Decision_Trees
| null |
[
"keras",
"tensorboard",
"tabular-classification",
"tensorflow",
"license:apache-2.0",
"model-index",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#keras #tensorboard #tabular-classification #tensorflow #license-apache-2.0 #model-index #region-us
|
# TensorFlow's Gradient Boosted Trees Model for structured data classification
Use TF's Gradient Boosted Trees model in binary classification of structured data <br />
* Build a decision forests model by specifying the input feature usage.
* Implement a custom Binary Target encoder as a Keras Preprocessing layer to encode the categorical features with respect to their target value co-occurrences, and then use the encoded features to build a decision forests model.<br />
The model is implemented using Tensorflow 7.0 or higher. The US Census Income Dataset containing approximately 300k instances with 41 numerical and categorical variables was used to train it. This is a binary classification problem to determine whether a person makes over 50k a year.<br />
Author: Khalid Salama
Adapted implementation: Tannia Dubon
Find the colab notebook at URL
|
[
"# TensorFlow's Gradient Boosted Trees Model for structured data classification\n\nUse TF's Gradient Boosted Trees model in binary classification of structured data <br />\n\n* Build a decision forests model by specifying the input feature usage.\n* Implement a custom Binary Target encoder as a Keras Preprocessing layer to encode the categorical features with respect to their target value co-occurrences, and then use the encoded features to build a decision forests model.<br />\n \nThe model is implemented using Tensorflow 7.0 or higher. The US Census Income Dataset containing approximately 300k instances with 41 numerical and categorical variables was used to train it. This is a binary classification problem to determine whether a person makes over 50k a year.<br /> \n\nAuthor: Khalid Salama \nAdapted implementation: Tannia Dubon\nFind the colab notebook at URL"
] |
[
"TAGS\n#keras #tensorboard #tabular-classification #tensorflow #license-apache-2.0 #model-index #region-us \n",
"# TensorFlow's Gradient Boosted Trees Model for structured data classification\n\nUse TF's Gradient Boosted Trees model in binary classification of structured data <br />\n\n* Build a decision forests model by specifying the input feature usage.\n* Implement a custom Binary Target encoder as a Keras Preprocessing layer to encode the categorical features with respect to their target value co-occurrences, and then use the encoded features to build a decision forests model.<br />\n \nThe model is implemented using Tensorflow 7.0 or higher. The US Census Income Dataset containing approximately 300k instances with 41 numerical and categorical variables was used to train it. This is a binary classification problem to determine whether a person makes over 50k a year.<br /> \n\nAuthor: Khalid Salama \nAdapted implementation: Tannia Dubon\nFind the colab notebook at URL"
] |
text-classification
|
keras
|
## Keras Implementation of Bidirectional LSTMs for Sentiment Analysis on IMDB 🍿🎥
This repo contains the model and the notebook [on Bidirectional LSTMs for Sentiment Analysis on IMDB](https://keras.io/examples/nlp/bidirectional_lstm_imdb/).
Full credits to: [François Chollet](https://github.com/fchollet)
HF Contribution: [Drishti Sharma](https://huggingface.co/DrishtiSharma)
### Metrics after 10 epochs:
- train_loss: 0.2085
- train_acc: 0.9194
- val_loss: 0.3019
- val_acc: 0.8778
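For readers who want to see the architecture behind these numbers, here is an illustrative reconstruction in the spirit of the original Keras example (the vocabulary size and layer widths are assumptions, not read back from this checkpoint):

```python
from tensorflow import keras
from tensorflow.keras import layers

max_features = 20000  # assumed IMDB vocabulary size

inputs = keras.Input(shape=(None,), dtype="int32")           # variable-length token ids
x = layers.Embedding(max_features, 128)(inputs)
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
x = layers.Bidirectional(layers.LSTM(64))(x)
outputs = layers.Dense(1, activation="sigmoid")(x)           # positive/negative score

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```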
|
{"language": ["en"], "tags": ["text-classification"], "datasets": ["imdb"], "widget": [{"text": "I like that movie, but I'm not sure if it's my favorite."}]}
|
keras-io/bidirectional-lstm-imdb
| null |
[
"keras",
"text-classification",
"en",
"dataset:imdb",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#keras #text-classification #en #dataset-imdb #has_space #region-us
|
## Keras Implementation of Bidirectional LSTMs for Sentiment Analysis on IMDB
This repo contains the model and the notebook on Bidirectional LSTMs for Sentiment Analysis on IMDB.
Full credits to: François Chollet
HF Contribution: Drishti Sharma
### Metrics after 10 epochs:
- train_loss: 0.2085
- train_acc: 0.9194
- val_loss: 0.3019
- val_acc: 0.8778
|
[
"## Keras Implementation of Bidirectional LSTMs for Sentiment Analysis on IMDB \n\n\nThis repo contains the model and the notebook on Bidirectional LSTMs for Sentiment Analysis on IMDB.\n\nFull credits to: François Chollet\n\nHF Contribution: Drishti Sharma",
"### Metrics after 10 epochs:\n- train_loss: 0.2085\n- train_acc: 0.9194\n- val_loss: 0.3019\n- val_acc: 0.8778"
] |
[
"TAGS\n#keras #text-classification #en #dataset-imdb #has_space #region-us \n",
"## Keras Implementation of Bidirectional LSTMs for Sentiment Analysis on IMDB \n\n\nThis repo contains the model and the notebook on Bidirectional LSTMs for Sentiment Analysis on IMDB.\n\nFull credits to: François Chollet\n\nHF Contribution: Drishti Sharma",
"### Metrics after 10 epochs:\n- train_loss: 0.2085\n- train_acc: 0.9194\n- val_loss: 0.3019\n- val_acc: 0.8778"
] |
translation
|
keras
|
## Keras Implementation of Character-level recurrent sequence-to-sequence model
This repo contains the model and the notebook [to this Keras example on Character-level recurrent sequence-to-sequence model](https://keras.io/examples/nlp/lstm_seq2seq/).
Full credits to: [fchollet](https://twitter.com/fchollet)
## Background Information
This example demonstrates how to implement a basic character-level recurrent sequence-to-sequence model. We apply it to translating short English sentences into short French sentences, character-by-character. Note that it is fairly unusual to do character-level machine translation, as word-level models are more common in this domain.
## Limitations
It works on text of length <= 15 characters
## Parameters needed for using the model
```python
latent_dim = 256
num_encoder_tokens = 71
max_encoder_seq_length = 15
num_decoder_tokens = 92
max_decoder_seq_length = 59
```
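Given those constants, the encoder-decoder training graph they describe can be sketched as follows (following the standard Keras lstm_seq2seq architecture; load the published weights into this structure before inference):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Uses latent_dim / num_encoder_tokens / num_decoder_tokens from the block above.
encoder_inputs = keras.Input(shape=(None, num_encoder_tokens))
_, state_h, state_c = layers.LSTM(latent_dim, return_state=True)(encoder_inputs)
encoder_states = [state_h, state_c]          # the "thought vector" handed to the decoder

decoder_inputs = keras.Input(shape=(None, num_decoder_tokens))
decoder_lstm = layers.LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_outputs = layers.Dense(num_decoder_tokens, activation="softmax")(decoder_outputs)

model = keras.Model([encoder_inputs, decoder_inputs], decoder_outputs)
```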
|
{"language": ["en", "fr"], "license": ["cc0-1.0"], "tags": ["seq2seq", "translation"]}
|
keras-io/char-lstm-seq2seq
| null |
[
"keras",
"seq2seq",
"translation",
"en",
"fr",
"license:cc0-1.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en",
"fr"
] |
TAGS
#keras #seq2seq #translation #en #fr #license-cc0-1.0 #has_space #region-us
|
## Keras Implementation of Character-level recurrent sequence-to-sequence model
This repo contains the model and the notebook to this Keras example on Character-level recurrent sequence-to-sequence model.
Full credits to: fchollet
## Background Information
This example demonstrates how to implement a basic character-level recurrent sequence-to-sequence model. We apply it to translating short English sentences into short French sentences, character-by-character. Note that it is fairly unusual to do character-level machine translation, as word-level models are more common in this domain.
## Limitations
It works on text of length <= 15 characters
## Parameters needed for using the model
|
[
"## Keras Implementation of Character-level recurrent sequence-to-sequence model\n\nThis repo contains the model and the notebook to this Keras example on Character-level recurrent sequence-to-sequence model.\n\nFull credits to: fchollet",
"## Background Information \nThis example demonstrates how to implement a basic character-level recurrent sequence-to-sequence model. We apply it to translating short English sentences into short French sentences, character-by-character. Note that it is fairly unusual to do character-level machine translation, as word-level models are more common in this domain.",
"## Limitations\nIt works on text of length <= 15 characters",
"## Parameters needed for using the model"
] |
[
"TAGS\n#keras #seq2seq #translation #en #fr #license-cc0-1.0 #has_space #region-us \n",
"## Keras Implementation of Character-level recurrent sequence-to-sequence model\n\nThis repo contains the model and the notebook to this Keras example on Character-level recurrent sequence-to-sequence model.\n\nFull credits to: fchollet",
"## Background Information \nThis example demonstrates how to implement a basic character-level recurrent sequence-to-sequence model. We apply it to translating short English sentences into short French sentences, character-by-character. Note that it is fairly unusual to do character-level machine translation, as word-level models are more common in this domain.",
"## Limitations\nIt works on text of length <= 15 characters",
"## Parameters needed for using the model"
] |
image-to-image
|
keras
|
# Conditional Generative Adversarial Network
This repo contains the model and the notebook to [this Keras example on Conditional GAN](https://keras.io/examples/generative/conditional_gan/).
Full credits to: [Sayak Paul](https://twitter.com/RisingSayak)
# Background Information
Training a GAN conditioned on class labels to generate handwritten digits.
Generative Adversarial Networks (GANs) let us generate novel image data, video data, or audio data from a random input. Typically, the random input is sampled from a normal distribution, before going through a series of transformations that turn it into something plausible (image, video, audio, etc.).
However, a simple DCGAN doesn't let us control the appearance (e.g. class) of the samples we're generating. For instance, with a GAN that generates MNIST handwritten digits, a simple DCGAN wouldn't let us choose the class of digits we're generating. To be able to control what we generate, we need to condition the GAN output on a semantic input, such as the class of an image.
In this example, we'll build a Conditional GAN that can generate MNIST handwritten digits conditioned on a given class. Such a model can have various useful applications:
let's say you are dealing with an imbalanced image dataset, and you'd like to gather more examples for the skewed class to balance the dataset. Data collection can be a costly process on its own. You could instead train a Conditional GAN and use it to generate novel images for the class that needs balancing.
Since the generator learns to associate the generated samples with the class labels, its representations can also be used for other downstream tasks.
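A minimal sketch of the conditioning trick described above: the one-hot class label is concatenated with the latent vector before it enters the generator (the layer sizes here are assumptions in the spirit of the MNIST example, not the exact published configuration):

```python
from tensorflow import keras
from tensorflow.keras import layers

latent_dim, num_classes = 128, 10

noise = keras.Input(shape=(latent_dim,))
label = keras.Input(shape=(num_classes,))            # one-hot class label
x = layers.Concatenate()([noise, label])             # condition the generator input
x = layers.Dense(7 * 7 * 128, activation="relu")(x)
x = layers.Reshape((7, 7, 128))(x)
x = layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(x)
fake_image = layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="sigmoid")(x)

generator = keras.Model([noise, label], fake_image)
generator.summary()                                  # outputs 28x28x1 "digit" images
```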
|
{"library_name": "keras", "tags": ["image-to-image"]}
|
keras-io/conditional-gan
| null |
[
"keras",
"tensorboard",
"image-to-image",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#keras #tensorboard #image-to-image #has_space #region-us
|
# Conditional Generative Adversarial Network
This repo contains the model and the notebook to this Keras example on Conditional GAN.
Full credits to: Sayak Paul
# Background Information
Training a GAN conditioned on class labels to generate handwritten digits.
Generative Adversarial Networks (GANs) let us generate novel image data, video data, or audio data from a random input. Typically, the random input is sampled from a normal distribution, before going through a series of transformations that turn it into something plausible (image, video, audio, etc.).
However, a simple DCGAN doesn't let us control the appearance (e.g. class) of the samples we're generating. For instance, with a GAN that generates MNIST handwritten digits, a simple DCGAN wouldn't let us choose the class of digits we're generating. To be able to control what we generate, we need to condition the GAN output on a semantic input, such as the class of an image.
In this example, we'll build a Conditional GAN that can generate MNIST handwritten digits conditioned on a given class. Such a model can have various useful applications:
let's say you are dealing with an imbalanced image dataset, and you'd like to gather more examples for the skewed class to balance the dataset. Data collection can be a costly process on its own. You could instead train a Conditional GAN and use it to generate novel images for the class that needs balancing.
Since the generator learns to associate the generated samples with the class labels, its representations can also be used for other downstream tasks.
|
[
"# Conditional Generative Adversarial Network\nThis repo contains the model and the notebook to this Keras example on Conditional GAN.\n\nFull credits to: Sayak Paul",
"# Background Information\n\nTraining a GAN conditioned on class labels to generate handwritten digits.\n\nGenerative Adversarial Networks (GANs) let us generate novel image data, video data, or audio data from a random input. Typically, the random input is sampled from a normal distribution, before going through a series of transformations that turn it into something plausible (image, video, audio, etc.).\n\nHowever, a simple DCGAN doesn't let us control the appearance (e.g. class) of the samples we're generating. For instance, with a GAN that generates MNIST handwritten digits, a simple DCGAN wouldn't let us choose the class of digits we're generating. To be able to control what we generate, we need to condition the GAN output on a semantic input, such as the class of an image.\n\nIn this example, we'll build a Conditional GAN that can generate MNIST handwritten digits conditioned on a given class. Such a model can have various useful applications:\n\nlet's say you are dealing with an imbalanced image dataset, and you'd like to gather more examples for the skewed class to balance the dataset. Data collection can be a costly process on its own. You could instead train a Conditional GAN and use it to generate novel images for the class that needs balancing.\nSince the generator learns to associate the generated samples with the class labels, its representations can also be used for other downstream tasks."
] |
[
"TAGS\n#keras #tensorboard #image-to-image #has_space #region-us \n",
"# Conditional Generative Adversarial Network\nThis repo contains the model and the notebook to this Keras example on Conditional GAN.\n\nFull credits to: Sayak Paul",
"# Background Information\n\nTraining a GAN conditioned on class labels to generate handwritten digits.\n\nGenerative Adversarial Networks (GANs) let us generate novel image data, video data, or audio data from a random input. Typically, the random input is sampled from a normal distribution, before going through a series of transformations that turn it into something plausible (image, video, audio, etc.).\n\nHowever, a simple DCGAN doesn't let us control the appearance (e.g. class) of the samples we're generating. For instance, with a GAN that generates MNIST handwritten digits, a simple DCGAN wouldn't let us choose the class of digits we're generating. To be able to control what we generate, we need to condition the GAN output on a semantic input, such as the class of an image.\n\nIn this example, we'll build a Conditional GAN that can generate MNIST handwritten digits conditioned on a given class. Such a model can have various useful applications:\n\nlet's say you are dealing with an imbalanced image dataset, and you'd like to gather more examples for the skewed class to balance the dataset. Data collection can be a costly process on its own. You could instead train a Conditional GAN and use it to generate novel images for the class that needs balancing.\nSince the generator learns to associate the generated samples with the class labels, its representations can also be used for other downstream tasks."
] |
null |
keras
|
## Tensorflow Keras Implementation of Next-Frame Video Prediction with Convolutional LSTMs 📽️
This repo contains the models and the notebook [on How to build and train a convolutional LSTM model for next-frame video prediction](https://keras.io/examples/vision/conv_lstm/).
Full credits to [Amogh Joshi](https://github.com/amogh7joshi)
## Background Information
The [Convolutional LSTM](https://papers.nips.cc/paper/2015/file/07563a3fe3bbe7e3ba84431ad9d055af-Paper.pdf) architectures bring together time series processing and computer vision by introducing a convolutional recurrent cell in a LSTM layer. This model uses the Convolutional LSTMs in an application to next-frame prediction, the process of predicting what video frames come next given a series of past frames.

## Training Dataset
This model was trained on the [Moving MNIST dataset](http://www.cs.toronto.edu/~nitish/unsupervised_video/).
For next-frame prediction, our model will be using a previous frame, which we'll call `f_n`, to predict a new frame, called `f_(n + 1)`. To allow the model to create these predictions, we'll need to process the data such that we have "shifted" inputs and outputs, where the input data is frame `x_n`, being used to predict frame `y_(n + 1)`.

|
{"license": "cc0-1.0", "tags": ["video-prediction", "moving-mnist", "video-to-video"]}
|
keras-io/conv-lstm
| null |
[
"keras",
"video-prediction",
"moving-mnist",
"video-to-video",
"license:cc0-1.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#keras #video-prediction #moving-mnist #video-to-video #license-cc0-1.0 #has_space #region-us
|
## Tensorflow Keras Implementation of Next-Frame Video Prediction with Convolutional LSTMs ️
This repo contains the models and the notebook on How to build and train a convolutional LSTM model for next-frame video prediction.
Full credits to Amogh Joshi
## Background Information
The Convolutional LSTM architectures bring together time series processing and computer vision by introducing a convolutional recurrent cell in a LSTM layer. This model uses the Convolutional LSTMs in an application to next-frame prediction, the process of predicting what video frames come next given a series of past frames.
!preview
## Training Dataset
This model was trained on the Moving MNIST dataset.
For next-frame prediction, our model will be using a previous frame, which we'll call 'f_n', to predict a new frame, called 'f_(n + 1)'. To allow the model to create these predictions, we'll need to process the data such that we have "shifted" inputs and outputs, where the input data is frame 'x_n', being used to predict frame 'y_(n + 1)'.
!result
|
[
"## Tensorflow Keras Implementation of Next-Frame Video Prediction with Convolutional LSTMs ️\n\nThis repo contains the models and the notebook on How to build and train a convolutional LSTM model for next-frame video prediction.\n\nFull credits to Amogh Joshi",
"## Background Information\nThe Convolutional LSTM architectures bring together time series processing and computer vision by introducing a convolutional recurrent cell in a LSTM layer. This model uses the Convolutional LSTMs in an application to next-frame prediction, the process of predicting what video frames come next given a series of past frames.\n\n!preview",
"## Training Dataset\nThis model was trained on the Moving MNIST dataset.\n\nFor next-frame prediction, our model will be using a previous frame, which we'll call 'f_n', to predict a new frame, called 'f_(n + 1)'. To allow the model to create these predictions, we'll need to process the data such that we have \"shifted\" inputs and outputs, where the input data is frame 'x_n', being used to predict frame 'y_(n + 1)'.\n\n!result"
] |
[
"TAGS\n#keras #video-prediction #moving-mnist #video-to-video #license-cc0-1.0 #has_space #region-us \n",
"## Tensorflow Keras Implementation of Next-Frame Video Prediction with Convolutional LSTMs ️\n\nThis repo contains the models and the notebook on How to build and train a convolutional LSTM model for next-frame video prediction.\n\nFull credits to Amogh Joshi",
"## Background Information\nThe Convolutional LSTM architectures bring together time series processing and computer vision by introducing a convolutional recurrent cell in a LSTM layer. This model uses the Convolutional LSTMs in an application to next-frame prediction, the process of predicting what video frames come next given a series of past frames.\n\n!preview",
"## Training Dataset\nThis model was trained on the Moving MNIST dataset.\n\nFor next-frame prediction, our model will be using a previous frame, which we'll call 'f_n', to predict a new frame, called 'f_(n + 1)'. To allow the model to create these predictions, we'll need to process the data such that we have \"shifted\" inputs and outputs, where the input data is frame 'x_n', being used to predict frame 'y_(n + 1)'.\n\n!result"
] |
null |
keras
|
# ConvMixer model
The ConvMixer model is trained on Cifar10 dataset and is based on [the paper](https://arxiv.org/abs/2201.09792v1), [github](https://github.com/locuslab/convmixer).
Disclaimer : This is a demo model for Sayak Paul's keras [example](https://keras.io/examples/vision/convmixer/). Please refrain from using this model for any other purpose.
## Description
The paper uses 'patches' (square groups of pixels) extracted from the image, as done in other Vision Transformers like [ViT](https://arxiv.org/abs/2010.11929v2). One notable drawback of such architectures is the quadratic runtime of their self-attention layers, which takes a lot of time and resources to train to a usable output. The ConvMixer model instead uses convolutions along with an MLP-Mixer-style design to obtain results similar to those of transformers at a fraction of the cost.
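As a rough illustration of the idea (not necessarily the exact configuration of this checkpoint), a single ConvMixer block pairs a residual depthwise convolution, which mixes spatial locations, with a pointwise convolution, which mixes channels:

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_mixer_block(x, filters: int, kernel_size: int):
    # Depthwise convolution with a residual connection mixes spatial locations.
    residual = x
    x = layers.DepthwiseConv2D(kernel_size=kernel_size, padding="same")(x)
    x = layers.Activation("gelu")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Add()([x, residual])
    # Pointwise (1x1) convolution mixes channels.
    x = layers.Conv2D(filters, kernel_size=1)(x)
    x = layers.Activation("gelu")(x)
    x = layers.BatchNormalization()(x)
    return x

# Example: apply one block to a batch of 32x32 feature maps with 256 channels.
inputs = tf.keras.Input(shape=(32, 32, 256))
outputs = conv_mixer_block(inputs, filters=256, kernel_size=5)
model = tf.keras.Model(inputs, outputs)
```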
### Intended Use
This model is intended to be used as a demo model for keras-io.
|
{"language": "en", "license": "apache-2.0", "tags": ["ConvMixer", "keras-io"], "datasets": ["cifar10"]}
|
keras-io/convmixer
| null |
[
"keras",
"ConvMixer",
"keras-io",
"en",
"dataset:cifar10",
"arxiv:2201.09792",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2201.09792",
"2010.11929"
] |
[
"en"
] |
TAGS
#keras #ConvMixer #keras-io #en #dataset-cifar10 #arxiv-2201.09792 #arxiv-2010.11929 #license-apache-2.0 #region-us
|
# ConvMixer model
The ConvMixer model is trained on Cifar10 dataset and is based on the paper, github.
Disclaimer : This is a demo model for Sayak Paul's keras example. Please refrain from using this model for any other purpose.
## Description
The paper uses 'patches' (square groups of pixels) extracted from the image, as done in other Vision Transformers like ViT. One notable drawback of such architectures is the quadratic runtime of their self-attention layers, which takes a lot of time and resources to train to a usable output. The ConvMixer model instead uses convolutions along with an MLP-Mixer-style design to obtain results similar to those of transformers at a fraction of the cost.
### Intended Use
This model is intended to be used as a demo model for keras-io.
|
[
"# ConvMixer model\n\nThe ConvMixer model is trained on Cifar10 dataset and is based on the paper, github. \n\nDisclaimer : This is a demo model for Sayak Paul's keras example. Please refrain from using this model for any other purpose.",
"## Description\n\nThe paper uses 'patches' (square group of pixels) extracted from the image, which has been done in other Vision Transformers like ViT. One notable dawback of such architectures is the quadratic runtime of self-attention layers which takes a lot of time and resources to train for usable output. The ConvMixer model, instead uses Convolutions along with the MLP-mixer to obtain similar results to that of transformers at a fraction of cost.",
"### Intended Use\n\nThis model is intended to be used as a demo model for keras-io."
] |
[
"TAGS\n#keras #ConvMixer #keras-io #en #dataset-cifar10 #arxiv-2201.09792 #arxiv-2010.11929 #license-apache-2.0 #region-us \n",
"# ConvMixer model\n\nThe ConvMixer model is trained on Cifar10 dataset and is based on the paper, github. \n\nDisclaimer : This is a demo model for Sayak Paul's keras example. Please refrain from using this model for any other purpose.",
"## Description\n\nThe paper uses 'patches' (square group of pixels) extracted from the image, which has been done in other Vision Transformers like ViT. One notable dawback of such architectures is the quadratic runtime of self-attention layers which takes a lot of time and resources to train for usable output. The ConvMixer model, instead uses Convolutions along with the MLP-mixer to obtain similar results to that of transformers at a fraction of cost.",
"### Intended Use\n\nThis model is intended to be used as a demo model for keras-io."
] |
null |
keras
|
## Automatic Speech Recognition using CTC model on the 🤗Hub!
Full credits go to [Mohamed Reda Bouadjenek]() and [Ngoc Dung Huynh]().
This repository contains the model from [this notebook on Automatic Speech Recognition using CTC](https://keras.io/examples/audio/ctc_asr/).
|
{"license": "cc0-1.0", "tags": ["speech recognition", "ctc"], "dataset": ["LJSpeech dataset"]}
|
keras-io/ctc_asr
| null |
[
"keras",
"speech recognition",
"ctc",
"license:cc0-1.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#keras #speech recognition #ctc #license-cc0-1.0 #has_space #region-us
|
## Automatic Speech Recognition using CTC model on the Hub!
Full credits go to [Mohamed Reda Bouadjenek]() and [Ngoc Dung Huynh]().
This repository contains the model from this notebook on Automatic Speech Recognition using CTC.
|
[
"## Automatic Speech Recognition using CTC model on the Hub! \nFull credits go to [Mohamed Reda Bouadjenek]() and [Ngoc Dung Huynh]().\n\nThis repository contains the model from this notebook on Automatic Speech Recognition using CTC."
] |
[
"TAGS\n#keras #speech recognition #ctc #license-cc0-1.0 #has_space #region-us \n",
"## Automatic Speech Recognition using CTC model on the Hub! \nFull credits go to [Mohamed Reda Bouadjenek]() and [Ngoc Dung Huynh]().\n\nThis repository contains the model from this notebook on Automatic Speech Recognition using CTC."
] |
null |
keras
|
## Keras Implementation of Deep Deterministic Policy Gradient ⏱🤖
This repo contains the model and the notebook [to this Keras example on Deep Deterministic Policy Gradient on pendulum](https://keras.io/examples/rl/ddpg_pendulum/).
Full credits to: [Hemant Singh](https://github.com/amifunny)

## Background Information
Deep Deterministic Policy Gradient (DDPG) is a model-free off-policy algorithm for learning continuous actions.
It combines ideas from DPG (Deterministic Policy Gradient) and DQN (Deep Q-Network). It uses Experience Replay and slow-learning target networks from DQN, and it is based on DPG, which can operate over continuous action spaces.
This tutorial closely follows the paper Continuous control with deep reinforcement learning.
We are trying to solve the classic Inverted Pendulum control problem. In this setting, we can take only two actions: swing left or swing right.
What makes this problem challenging for Q-Learning algorithms is that actions are continuous instead of discrete. That is, instead of using two discrete actions like -1 or +1, we have to select from infinitely many actions ranging from -2 to +2.
Just like the Actor-Critic method, we have two networks:
Actor - It proposes an action given a state.
Critic - It predicts if the action is good (positive value) or bad (negative value) given a state and an action.
DDPG uses two more techniques not present in the original DQN:
First, it uses two Target networks.
Why? Because it adds stability to training. In short, we are learning from estimated targets, and the target networks are updated slowly, hence keeping our estimated targets stable.
Conceptually, this is like saying, "I have an idea of how to play this well, I'm going to try it out for a bit until I find something better", as opposed to saying "I'm going to re-learn how to play this entire game after every move". See this StackOverflow answer.
Second, it uses Experience Replay.
We store a list of tuples (state, action, reward, next_state), and instead of learning only from recent experience, we learn by sampling from all of the experience accumulated so far.
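A minimal sketch of these two ingredients, the replay buffer and the slow ("soft") target update, is shown below; the buffer size, dimensions, and `tau` are illustrative rather than the exact values from the linked example:

```python
import numpy as np
import tensorflow as tf

class ReplayBuffer:
    """Stores (state, action, reward, next_state) tuples and samples random mini-batches."""

    def __init__(self, capacity, state_dim, action_dim):
        self.capacity, self.count = capacity, 0
        self.states = np.zeros((capacity, state_dim), dtype="float32")
        self.actions = np.zeros((capacity, action_dim), dtype="float32")
        self.rewards = np.zeros((capacity, 1), dtype="float32")
        self.next_states = np.zeros((capacity, state_dim), dtype="float32")

    def record(self, state, action, reward, next_state):
        i = self.count % self.capacity  # overwrite the oldest entries once full
        self.states[i], self.actions[i] = state, action
        self.rewards[i], self.next_states[i] = reward, next_state
        self.count += 1

    def sample(self, batch_size):
        idx = np.random.randint(0, min(self.count, self.capacity), size=batch_size)
        return self.states[idx], self.actions[idx], self.rewards[idx], self.next_states[idx]

def soft_update(target_variables, online_variables, tau=0.005):
    # Slowly track the online network: target <- tau * online + (1 - tau) * target.
    for t, o in zip(target_variables, online_variables):
        t.assign(tau * o + (1.0 - tau) * t)
```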
|
{"license": ["cc0-1.0"], "tags": ["reinforcement learning", "cartpole", "deep deterministic policy gradient"]}
|
keras-io/deep-deterministic-policy-gradient
| null |
[
"keras",
"reinforcement learning",
"cartpole",
"deep deterministic policy gradient",
"license:cc0-1.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#keras #reinforcement learning #cartpole #deep deterministic policy gradient #license-cc0-1.0 #region-us
|
## Keras Implementation of Deep Deterministic Policy Gradient ⏱
This repo contains the model and the notebook to this Keras example on Deep Deterministic Policy Gradient on pendulum.
Full credits to: Hemant Singh
!pendulum_gif
## Background Information
Deep Deterministic Policy Gradient (DDPG) is a model-free off-policy algorithm for learning continuous actions.
It combines ideas from DPG (Deterministic Policy Gradient) and DQN (Deep Q-Network). It uses Experience Replay and slow-learning target networks from DQN, and it is based on DPG, which can operate over continuous action spaces.
This tutorial closely follows the paper Continuous control with deep reinforcement learning.
We are trying to solve the classic Inverted Pendulum control problem. In this setting, we can take only two actions: swing left or swing right.
What makes this problem challenging for Q-Learning algorithms is that actions are continuous instead of discrete. That is, instead of using two discrete actions like -1 or +1, we have to select from infinitely many actions ranging from -2 to +2.
Just like the Actor-Critic method, we have two networks:
Actor - It proposes an action given a state.
Critic - It predicts if the action is good (positive value) or bad (negative value) given a state and an action.
DDPG uses two more techniques not present in the original DQN:
First, it uses two Target networks.
Why? Because it adds stability to training. In short, we are learning from estimated targets, and the target networks are updated slowly, hence keeping our estimated targets stable.
Conceptually, this is like saying, "I have an idea of how to play this well, I'm going to try it out for a bit until I find something better", as opposed to saying "I'm going to re-learn how to play this entire game after every move". See this StackOverflow answer.
Second, it uses Experience Replay.
We store a list of tuples (state, action, reward, next_state), and instead of learning only from recent experience, we learn by sampling from all of the experience accumulated so far.
|
[
"## Keras Implementation of Deep Deterministic Policy Gradient ⏱ \nThis repo contains the model and the notebook to this Keras example on Deep Deterministic Policy Gradient on pendulum.\n\nFull credits to: Hemant Singh\n\n!pendulum_gif",
"## Background Information \nDeep Deterministic Policy Gradient (DDPG) is a model-free off-policy algorithm for learning continous actions.\n\nIt combines ideas from DPG (Deterministic Policy Gradient) and DQN (Deep Q-Network). It uses Experience Replay and slow-learning target networks from DQN, and it is based on DPG, which can operate over continuous action spaces.\n\nThis tutorial closely follow this paper - Continuous control with deep reinforcement learning\n\nWe are trying to solve the classic Inverted Pendulum control problem. In this setting, we can take only two actions: swing left or swing right.\n\nWhat make this problem challenging for Q-Learning Algorithms is that actions are continuous instead of being discrete. That is, instead of using two discrete actions like -1 or +1, we have to select from infinite actions ranging from -2 to +2.\n\nJust like the Actor-Critic method, we have two networks:\n\nActor - It proposes an action given a state.\nCritic - It predicts if the action is good (positive value) or bad (negative value) given a state and an action.\nDDPG uses two more techniques not present in the original DQN:\n\nFirst, it uses two Target networks.\n\nWhy? Because it add stability to training. In short, we are learning from estimated targets and Target networks are updated slowly, hence keeping our estimated targets stable.\n\nConceptually, this is like saying, \"I have an idea of how to play this well, I'm going to try it out for a bit until I find something better\", as opposed to saying \"I'm going to re-learn how to play this entire game after every move\". See this StackOverflow answer.\n\nSecond, it uses Experience Replay.\n\nWe store list of tuples (state, action, reward, next_state), and instead of learning only from recent experience, we learn from sampling all of our experience accumulated so far."
] |
[
"TAGS\n#keras #reinforcement learning #cartpole #deep deterministic policy gradient #license-cc0-1.0 #region-us \n",
"## Keras Implementation of Deep Deterministic Policy Gradient ⏱ \nThis repo contains the model and the notebook to this Keras example on Deep Deterministic Policy Gradient on pendulum.\n\nFull credits to: Hemant Singh\n\n!pendulum_gif",
"## Background Information \nDeep Deterministic Policy Gradient (DDPG) is a model-free off-policy algorithm for learning continous actions.\n\nIt combines ideas from DPG (Deterministic Policy Gradient) and DQN (Deep Q-Network). It uses Experience Replay and slow-learning target networks from DQN, and it is based on DPG, which can operate over continuous action spaces.\n\nThis tutorial closely follow this paper - Continuous control with deep reinforcement learning\n\nWe are trying to solve the classic Inverted Pendulum control problem. In this setting, we can take only two actions: swing left or swing right.\n\nWhat make this problem challenging for Q-Learning Algorithms is that actions are continuous instead of being discrete. That is, instead of using two discrete actions like -1 or +1, we have to select from infinite actions ranging from -2 to +2.\n\nJust like the Actor-Critic method, we have two networks:\n\nActor - It proposes an action given a state.\nCritic - It predicts if the action is good (positive value) or bad (negative value) given a state and an action.\nDDPG uses two more techniques not present in the original DQN:\n\nFirst, it uses two Target networks.\n\nWhy? Because it add stability to training. In short, we are learning from estimated targets and Target networks are updated slowly, hence keeping our estimated targets stable.\n\nConceptually, this is like saying, \"I have an idea of how to play this well, I'm going to try it out for a bit until I find something better\", as opposed to saying \"I'm going to re-learn how to play this entire game after every move\". See this StackOverflow answer.\n\nSecond, it uses Experience Replay.\n\nWe store list of tuples (state, action, reward, next_state), and instead of learning only from recent experience, we learn from sampling all of our experience accumulated so far."
] |
null |
keras
|
## Keras Implementation of Deep Dream 🦚🌌
This repo contains the model and the notebook [for this Deep Dream implementation of Keras](https://keras.io/examples/generative/deep_dream/).
Full credits to: [François Chollet](https://twitter.com/fchollet)

## Background Information
"Deep dream" is an image-filtering technique which consists of taking an image classification model, and running gradient ascent over an input image to try to maximize the activations of specific layers (and sometimes, specific units in specific layers) for this input. It produces hallucination-like visuals.
It was first introduced by Alexander Mordvintsev from Google in July 2015.
Process:
- Load the original image.
- Define a number of processing scales ("octaves"), from smallest to largest.
- Resize the original image to the smallest scale.
- For every scale, starting with the smallest (i.e. the current one):
  - Run gradient ascent
  - Upscale the image to the next scale
  - Re-inject the detail that was lost at upscaling time
- Stop when we are back to the original size. To obtain the detail lost during upscaling, we simply take the original image, shrink it down, upscale it, and compare the result to the (resized) original image. The core gradient-ascent step is sketched after this list.
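A minimal sketch of that gradient-ascent step follows. It assumes `feature_extractor` is a `keras.Model` mapping the input image to a dict of layer activations (as in the linked example); the gradient normalization constant is illustrative:

```python
import tensorflow as tf

@tf.function
def gradient_ascent_step(image, feature_extractor, learning_rate):
    # Move the image in the direction that *increases* the chosen layer activations.
    with tf.GradientTape() as tape:
        tape.watch(image)
        features = feature_extractor(image)
        loss = tf.add_n([tf.reduce_mean(act) for act in features.values()])
    grads = tape.gradient(loss, image)
    grads /= tf.maximum(tf.reduce_mean(tf.abs(grads)), 1e-8)  # normalize for stable steps
    return loss, image + learning_rate * grads
```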
|
{"license": ["cc0-1.0"], "tags": ["gan", "generative adversarial networks", "deep dream"]}
|
keras-io/deep-dream
| null |
[
"keras",
"gan",
"generative adversarial networks",
"deep dream",
"license:cc0-1.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#keras #gan #generative adversarial networks #deep dream #license-cc0-1.0 #has_space #region-us
|
## Keras Implementation of Deep Dream
This repo contains the model and the notebook for this Deep Dream implementation of Keras.
Full credits to: François Chollet
!deepdream
## Background Information
"Deep dream" is an image-filtering technique which consists of taking an image classification model, and running gradient ascent over an input image to try to maximize the activations of specific layers (and sometimes, specific units in specific layers) for this input. It produces hallucination-like visuals.
It was first introduced by Alexander Mordvintsev from Google in July 2015.
Process:
- Load the original image.
- Define a number of processing scales ("octaves"), from smallest to largest.
- Resize the original image to the smallest scale.
- For every scale, starting with the smallest (i.e. current one):
  - Run gradient ascent
  - Upscale image to the next scale
  - Re-inject the detail that was lost at upscaling time
- Stop when we are back to the original size. To obtain the detail lost during upscaling, we simply take the original image, shrink it down, upscale it, and compare the result to the (resized) original image.
|
[
"## Keras Implementation of Deep Dream \n\nThis repo contains the model and the notebook for this Deep Dream implementation of Keras.\n\nFull credits to: François Chollet\n\n !deepdream",
"## Background Information\n\n \"Deep dream\" is an image-filtering technique which consists of taking an image classification model, and running gradient ascent over an input image to try to maximize the activations of specific layers (and sometimes, specific units in specific layers) for this input. It produces hallucination-like visuals.\n\nIt was first introduced by Alexander Mordvintsev from Google in July 2015.\n\nProcess: \n\n- Load the original image.\n- Define a number of processing scales (\"octaves\"), from smallest to largest.\n- Resize the original image to the smallest scale.\n- For every scale, starting with the smallest (i.e. current one): - Run gradient ascent - Upscale image to the next scale - Re-inject the detail that was lost at upscaling time\n- Stop when we are back to the original size. To obtain the detail lost during upscaling, we simply take the original image, shrink it down, upscale it, and compare the result to the (resized) original image."
] |
[
"TAGS\n#keras #gan #generative adversarial networks #deep dream #license-cc0-1.0 #has_space #region-us \n",
"## Keras Implementation of Deep Dream \n\nThis repo contains the model and the notebook for this Deep Dream implementation of Keras.\n\nFull credits to: François Chollet\n\n !deepdream",
"## Background Information\n\n \"Deep dream\" is an image-filtering technique which consists of taking an image classification model, and running gradient ascent over an input image to try to maximize the activations of specific layers (and sometimes, specific units in specific layers) for this input. It produces hallucination-like visuals.\n\nIt was first introduced by Alexander Mordvintsev from Google in July 2015.\n\nProcess: \n\n- Load the original image.\n- Define a number of processing scales (\"octaves\"), from smallest to largest.\n- Resize the original image to the smallest scale.\n- For every scale, starting with the smallest (i.e. current one): - Run gradient ascent - Upscale image to the next scale - Re-inject the detail that was lost at upscaling time\n- Stop when we are back to the original size. To obtain the detail lost during upscaling, we simply take the original image, shrink it down, upscale it, and compare the result to the (resized) original image."
] |
image-segmentation
|
keras
|
## Multiclass semantic segmentation using DeepLabV3+
This repo contains the model and the notebook [to this Keras example on Multiclass semantic segmentation using DeepLabV3+](https://keras.io/examples/vision/deeplabv3_plus/).
Full credits to: [Soumik Rakshit](http://github.com/soumik12345)
The model is trained for demonstrative purposes and does not guarantee the best results in production. For better results, follow & optimize the [Keras example](https://keras.io/examples/vision/deeplabv3_plus/) as per your need.
## Background Information
Semantic segmentation, with the goal to assign semantic labels to every pixel in an image, is an essential computer vision task. In this example, we implement the DeepLabV3+ model for multi-class semantic segmentation, a fully-convolutional architecture that performs well on semantic segmentation benchmarks.
## Training Data
The model is trained on a subset (10,000 images) of [Crowd Instance-level Human Parsing Dataset](https://arxiv.org/abs/1811.12596). The Crowd Instance-level Human Parsing (CIHP) dataset has 38,280 diverse human images. Each image in CIHP is labeled with pixel-wise annotations for 20 categories, as well as instance-level identification. This dataset can be used for the "human part segmentation" task.
## Model
The model uses ResNet50 pretrained on ImageNet as the backbone model.
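For intuition, here is a rough sketch of the atrous (dilated) spatial pyramid used in DeepLabV3+-style heads; the filter counts and dilation rates below follow a common configuration and are not necessarily the exact ones in this checkpoint:

```python
import tensorflow as tf
from tensorflow.keras import layers

def atrous_branch(x, filters=256, kernel_size=3, dilation_rate=1):
    # A dilated convolution enlarges the receptive field without losing resolution.
    x = layers.Conv2D(filters, kernel_size, padding="same",
                      dilation_rate=dilation_rate, use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def dilated_spatial_pyramid_pooling(x):
    # Parallel branches with different dilation rates, concatenated and fused by a 1x1 conv.
    branches = [atrous_branch(x, kernel_size=1)]
    branches += [atrous_branch(x, dilation_rate=rate) for rate in (6, 12, 18)]
    x = layers.Concatenate()(branches)
    return atrous_branch(x, kernel_size=1)

# Example: apply the pyramid to a 32x32 backbone feature map with 1024 channels.
inputs = tf.keras.Input(shape=(32, 32, 1024))
outputs = dilated_spatial_pyramid_pooling(inputs)
```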
References:
1. [Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation](https://arxiv.org/pdf/1802.02611.pdf)
2. [Rethinking Atrous Convolution for Semantic Image Segmentation](https://arxiv.org/abs/1706.05587)
3. [DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs](https://arxiv.org/abs/1606.00915)
|
{"license": ["cc0-1.0"], "library_name": "keras", "tags": ["computer-vision", "image-segmentation"]}
|
keras-io/deeplabv3p-resnet50
| null |
[
"keras",
"computer-vision",
"image-segmentation",
"arxiv:1811.12596",
"arxiv:1802.02611",
"arxiv:1706.05587",
"arxiv:1606.00915",
"license:cc0-1.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1811.12596",
"1802.02611",
"1706.05587",
"1606.00915"
] |
[] |
TAGS
#keras #computer-vision #image-segmentation #arxiv-1811.12596 #arxiv-1802.02611 #arxiv-1706.05587 #arxiv-1606.00915 #license-cc0-1.0 #has_space #region-us
|
## Multiclass semantic segmentation using DeepLabV3+
This repo contains the model and the notebook to this Keras example on Multiclass semantic segmentation using DeepLabV3+.
Full credits to: Soumik Rakshit
The model is trained for demonstrative purposes and does not guarantee the best results in production. For better results, follow & optimize the Keras example as per your need.
## Background Information
Semantic segmentation, with the goal to assign semantic labels to every pixel in an image, is an essential computer vision task. In this example, we implement the DeepLabV3+ model for multi-class semantic segmentation, a fully-convolutional architecture that performs well on semantic segmentation benchmarks.
## Training Data
The model is trained on a subset (10,000 images) of Crowd Instance-level Human Parsing Dataset. The Crowd Instance-level Human Parsing (CIHP) dataset has 38,280 diverse human images. Each image in CIHP is labeled with pixel-wise annotations for 20 categories, as well as instance-level identification. This dataset can be used for the "human part segmentation" task.
## Model
The model uses ResNet50 pretrained on ImageNet as the backbone model.
References:
1. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation
2. Rethinking Atrous Convolution for Semantic Image Segmentation
3. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs
|
[
"## Multiclass semantic segmentation using DeepLabV3+\nThis repo contains the model and the notebook to this Keras example on Multiclass semantic segmentation using DeepLabV3+.\n\nFull credits to: Soumik Rakshit\n\nThe model is trained for demonstrative purposes and does not guarantee the best results in production. For better results, follow & optimize the Keras example as per your need.",
"## Background Information \nSemantic segmentation, with the goal to assign semantic labels to every pixel in an image, is an essential computer vision task. In this example, we implement the DeepLabV3+ model for multi-class semantic segmentation, a fully-convolutional architecture that performs well on semantic segmentation benchmarks.",
"## Training Data\nThe model is trained on a subset (10,000 images) of Crowd Instance-level Human Parsing Dataset. The Crowd Instance-level Human Parsing (CIHP) dataset has 38,280 diverse human images. Each image in CIHP is labeled with pixel-wise annotations for 20 categories, as well as instance-level identification. This dataset can be used for the \"human part segmentation\" task.",
"## Model\nThe model uses ResNet50 pretrained on ImageNet as the backbone model.\n\nReferences: \n1. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation \n2. Rethinking Atrous Convolution for Semantic Image Segmentation \n3. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs"
] |
[
"TAGS\n#keras #computer-vision #image-segmentation #arxiv-1811.12596 #arxiv-1802.02611 #arxiv-1706.05587 #arxiv-1606.00915 #license-cc0-1.0 #has_space #region-us \n",
"## Multiclass semantic segmentation using DeepLabV3+\nThis repo contains the model and the notebook to this Keras example on Multiclass semantic segmentation using DeepLabV3+.\n\nFull credits to: Soumik Rakshit\n\nThe model is trained for demonstrative purposes and does not guarantee the best results in production. For better results, follow & optimize the Keras example as per your need.",
"## Background Information \nSemantic segmentation, with the goal to assign semantic labels to every pixel in an image, is an essential computer vision task. In this example, we implement the DeepLabV3+ model for multi-class semantic segmentation, a fully-convolutional architecture that performs well on semantic segmentation benchmarks.",
"## Training Data\nThe model is trained on a subset (10,000 images) of Crowd Instance-level Human Parsing Dataset. The Crowd Instance-level Human Parsing (CIHP) dataset has 38,280 diverse human images. Each image in CIHP is labeled with pixel-wise annotations for 20 categories, as well as instance-level identification. This dataset can be used for the \"human part segmentation\" task.",
"## Model\nThe model uses ResNet50 pretrained on ImageNet as the backbone model.\n\nReferences: \n1. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation \n2. Rethinking Atrous Convolution for Semantic Image Segmentation \n3. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs"
] |
null |
keras
|
## Keras Implementation of Graph Attention Networks for Node Classification 🕸
This repo contains the model and the notebook [to this Keras example on Graph Attention Networks for Node Classification](https://keras.io/examples/graph/gat_node_classification/).
Full credits to: [Alexander Kensert](https://github.com/akensert)
## Background Information
Graph neural networks are the preferred neural network architecture for processing data structured as graphs (for example, social networks or molecule structures), yielding better results than fully-connected networks or convolutional networks.
This tutorial implements a specific graph neural network known as a [Graph Attention Network (GAT)](https://arxiv.org/abs/1710.10903) to predict labels of scientific papers based on the papers they cite (using the [Cora dataset](https://linqs.soe.ucsc.edu/data)).
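For intuition, a single-head graph attention layer can be sketched roughly as below; the edge-list format and weight shapes are illustrative, and the linked example adds multi-head attention and further details:

```python
import tensorflow as tf

class GraphAttention(tf.keras.layers.Layer):
    """Scores each edge, softmax-normalizes over a node's neighbours,
    and aggregates the neighbours' projected features."""

    def __init__(self, units):
        super().__init__()
        self.units = units

    def build(self, input_shape):
        feature_dim = input_shape[0][-1]
        self.kernel = self.add_weight(shape=(feature_dim, self.units), name="kernel")
        self.attn_kernel = self.add_weight(shape=(2 * self.units, 1), name="attn_kernel")

    def call(self, inputs):
        node_states, edges = inputs  # edges: (num_edges, 2) int tensor of (target, source) pairs
        h = tf.matmul(node_states, self.kernel)
        pair_features = tf.concat(
            [tf.gather(h, edges[:, 0]), tf.gather(h, edges[:, 1])], axis=-1
        )
        scores = tf.nn.leaky_relu(tf.squeeze(tf.matmul(pair_features, self.attn_kernel), -1))
        scores = tf.exp(tf.clip_by_value(scores, -2.0, 2.0))
        num_nodes = tf.shape(node_states)[0]
        # Softmax over each target node's incoming edges.
        denom = tf.math.unsorted_segment_sum(scores, edges[:, 0], num_nodes)
        attention = scores / tf.gather(denom, edges[:, 0])
        messages = tf.gather(h, edges[:, 1]) * attention[:, None]
        return tf.math.unsorted_segment_sum(messages, edges[:, 0], num_nodes)

# Example: 5 nodes with 8-dim features and 4 directed edges.
nodes = tf.random.normal((5, 8))
edges = tf.constant([[0, 1], [1, 0], [2, 3], [3, 4]])
out = GraphAttention(units=16)([nodes, edges])
```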
References
For more information on GAT, see the original paper [Graph Attention Networks](https://arxiv.org/abs/1710.10903) as well as [DGL's Graph Attention Networks](https://docs.dgl.ai/en/0.4.x/tutorials/models/1_gnn/9_gat.html) documentation.
|
{"license": ["cc0-1.0"], "tags": ["graph neural networks"], "thumbnail": "url to a thumbnail used in social sharing"}
|
keras-io/graph-attention-nets
| null |
[
"keras",
"graph neural networks",
"arxiv:1710.10903",
"license:cc0-1.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1710.10903"
] |
[] |
TAGS
#keras #graph neural networks #arxiv-1710.10903 #license-cc0-1.0 #region-us
|
## Keras Implementation of Graph Attention Networks for Node Classification
This repo contains the model and the notebook to this Keras example on Graph Attention Networks for Node Classification.
Full credits to: Alexander Kensert
## Background Information
Graph neural networks are the preferred neural network architecture for processing data structured as graphs (for example, social networks or molecule structures), yielding better results than fully-connected networks or convolutional networks.
This tutorial implements a specific graph neural network known as a Graph Attention Network (GAT) to predict labels of scientific papers based on the papers they cite (using the Cora dataset).
References
For more information on GAT, see the original paper Graph Attention Networks as well as DGL's Graph Attention Networks documentation.
|
[
"## Keras Implementation of Graph Attention Networks for Node Classification \n\nThis repo contains the model and the notebook to this Keras example on Graph Attention Networks for Node Classification.\n\nFull credits to: Alexander Kensert",
"## Background Information \nGraph neural networks is the preferred neural network architecture for processing data structured as graphs (for example, social networks or molecule structures), yielding better results than fully-connected networks or convolutional networks.\n\nThis tutorial implements a specific graph neural network known as a Graph Attention Network (GAT) to predict labels of scientific papers based on the papers they cite (using the Cora dataset).\n\nReferences\nFor more information on GAT, see the original paper Graph Attention Networks as well as DGL's Graph Attention Networks documentation."
] |
[
"TAGS\n#keras #graph neural networks #arxiv-1710.10903 #license-cc0-1.0 #region-us \n",
"## Keras Implementation of Graph Attention Networks for Node Classification \n\nThis repo contains the model and the notebook to this Keras example on Graph Attention Networks for Node Classification.\n\nFull credits to: Alexander Kensert",
"## Background Information \nGraph neural networks is the preferred neural network architecture for processing data structured as graphs (for example, social networks or molecule structures), yielding better results than fully-connected networks or convolutional networks.\n\nThis tutorial implements a specific graph neural network known as a Graph Attention Network (GAT) to predict labels of scientific papers based on the papers they cite (using the Cora dataset).\n\nReferences\nFor more information on GAT, see the original paper Graph Attention Networks as well as DGL's Graph Attention Networks documentation."
] |
image-to-text
|
generic
|
## Tensorflow Keras Implementation of an Image Captioning Model with encoder-decoder network. 🌃🌅🎑
This repo contains the models and the notebook [on Image captioning with visual attention](https://www.tensorflow.org/tutorials/text/image_captioning?hl=en).
Full credits to TensorFlow Team
## Background Information
This notebook is a TensorFlow Keras implementation of image captioning with visual attention.
Given an image like the example below, your goal is to generate a caption such as "a surfer riding on a wave".

To accomplish this, you'll use an attention-based model, which enables us to see what parts of the image the model focuses on as it generates a caption.

The model architecture is similar to [Show, Attend and Tell: Neural Image Caption Generation with Visual Attention](https://arxiv.org/abs/1502.03044).
This notebook is an end-to-end example. When you run the notebook, it downloads the [MS-COCO](https://cocodataset.org/#home) dataset, preprocesses and caches a subset of images using Inception V3, trains an encoder-decoder model, and generates captions on new images using the trained model.
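The attention mechanism at the core of the decoder can be sketched as additive (Bahdanau-style) attention; the sizes below are illustrative, and the full details follow the linked tutorial:

```python
import tensorflow as tf

class BahdanauAttention(tf.keras.layers.Layer):
    """Scores each spatial image feature against the decoder state and
    returns a weighted context vector plus the attention weights."""

    def __init__(self, units):
        super().__init__()
        self.w_features = tf.keras.layers.Dense(units)
        self.w_hidden = tf.keras.layers.Dense(units)
        self.score = tf.keras.layers.Dense(1)

    def call(self, features, hidden):
        # features: (batch, num_locations, feature_dim); hidden: (batch, hidden_dim)
        hidden = tf.expand_dims(hidden, 1)
        scores = self.score(tf.nn.tanh(self.w_features(features) + self.w_hidden(hidden)))
        weights = tf.nn.softmax(scores, axis=1)  # where the model "looks" in the image
        context = tf.reduce_sum(weights * features, axis=1)
        return context, weights

# Example shapes: 64 image locations with 256-dim features, 512-dim decoder state.
attention = BahdanauAttention(units=512)
context, weights = attention(tf.random.normal((1, 64, 256)), tf.random.normal((1, 512)))
```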
|
{"license": "cc0-1.0", "library_name": "generic", "tags": ["image-to-text", "generic"], "pipeline_tag": "image-to-text", "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-1.jpg", "example_title": "Kedis"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-2.jpg", "example_title": "Cat in a Crate"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-3.jpg", "example_title": "Two Cats Chilling"}]}
|
keras-io/image-captioning
| null |
[
"generic",
"keras",
"image-to-text",
"arxiv:1502.03044",
"license:cc0-1.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1502.03044"
] |
[] |
TAGS
#generic #keras #image-to-text #arxiv-1502.03044 #license-cc0-1.0 #has_space #region-us
|
## Tensorflow Keras Implementation of an Image Captioning Model with encoder-decoder network.
This repo contains the models and the notebook on Image captioning with visual attention.
Full credits to TensorFlow Team
## Background Information
This notebook is a TensorFlow Keras implementation of image captioning with visual attention.
Given an image like the example below, your goal is to generate a caption such as "a surfer riding on a wave".
!image
To accomplish this, you'll use an attention-based model, which enables us to see what parts of the image the model focuses on as it generates a caption.
!attention
The model architecture is similar to Show, Attend and Tell: Neural Image Caption Generation with Visual Attention.
This notebook is an end-to-end example. When you run the notebook, it downloads the MS-COCO dataset, preprocesses and caches a subset of images using Inception V3, trains an encoder-decoder model, and generates captions on new images using the trained model.
|
[
"## Tensorflow Keras Implementation of an Image Captioning Model with encoder-decoder network. \n\nThis repo contains the models and the notebook on Image captioning with visual attention.\n\nFull credits to TensorFlow Team",
"## Background Information\nThis notebook implements TensorFlow Keras implementation on Image captioning with visual attention.\nGiven an image like the example below, your goal is to generate a caption such as \"a surfer riding on a wave\".\n!image\nTo accomplish this, you'll use an attention-based model, which enables us to see what parts of the image the model focuses on as it generates a caption.\n!attention\nThe model architecture is similar to Show, Attend and Tell: Neural Image Caption Generation with Visual Attention.\n\nThis notebook is an end-to-end example. When you run the notebook, it downloads the MS-COCO dataset, preprocesses and caches a subset of images using Inception V3, trains an encoder-decoder model, and generates captions on new images using the trained model."
] |
[
"TAGS\n#generic #keras #image-to-text #arxiv-1502.03044 #license-cc0-1.0 #has_space #region-us \n",
"## Tensorflow Keras Implementation of an Image Captioning Model with encoder-decoder network. \n\nThis repo contains the models and the notebook on Image captioning with visual attention.\n\nFull credits to TensorFlow Team",
"## Background Information\nThis notebook implements TensorFlow Keras implementation on Image captioning with visual attention.\nGiven an image like the example below, your goal is to generate a caption such as \"a surfer riding on a wave\".\n!image\nTo accomplish this, you'll use an attention-based model, which enables us to see what parts of the image the model focuses on as it generates a caption.\n!attention\nThe model architecture is similar to Show, Attend and Tell: Neural Image Caption Generation with Visual Attention.\n\nThis notebook is an end-to-end example. When you run the notebook, it downloads the MS-COCO dataset, preprocesses and caches a subset of images using Inception V3, trains an encoder-decoder model, and generates captions on new images using the trained model."
] |
null |
keras
|
[Paper](https://arxiv.org/abs/2103.06255) | [Keras Tutorial](https://keras.io/examples/vision/involution/)
Author: [Aritra Roy Gosthipaty](https://twitter.com/ariG23498)
## Convolution Kernel

## Involution Kernel

|
{"license": "mit", "datasets": ["CIFAR10"]}
|
keras-io/involution
| null |
[
"keras",
"dataset:CIFAR10",
"arxiv:2103.06255",
"license:mit",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2103.06255"
] |
[] |
TAGS
#keras #dataset-CIFAR10 #arxiv-2103.06255 #license-mit #has_space #region-us
|
Paper | Keras Tutorial
Author: Aritra Roy Gosthipaty
## Convolution Kernel
!conv
## Involution Kernel
!inv
|
[
"## Convolution Kernel\r\n!conv",
"## Involution Kernel\r\n!inv"
] |
[
"TAGS\n#keras #dataset-CIFAR10 #arxiv-2103.06255 #license-mit #has_space #region-us \n",
"## Convolution Kernel\r\n!conv",
"## Involution Kernel\r\n!inv"
] |
image-to-image
|
keras
|
## Zero-DCE for low-light image enhancement
**Original Author**: [Soumik Rakshit](https://github.com/soumik12345) <br>
**Date created**: 2021/09/18 <br>
**HF Contribution**: [Harveen Singh Chadha](https://github.com/harveenchadha)<br>
**Dataset**: [LOL Dataset](https://huggingface.co/Harveenchadha/low-light-image-enhancement/blob/main/lol_dataset.zip)
## [Spaces Demo](https://huggingface.co/spaces/Harveenchadha/low-light-image-enhancement)
## Description: Implementing Zero-Reference Deep Curve Estimation for low-light image enhancement.
Zero-Reference Deep Curve Estimation or Zero-DCE formulates low-light image enhancement as the task of estimating an image-specific tonal curve with a deep neural network. In this example, we train a lightweight deep network, DCE-Net, to estimate pixel-wise and high-order tonal curves for dynamic range adjustment of a given image.
Zero-DCE takes a low-light image as input and produces high-order tonal curves as its output. These curves are then used for pixel-wise adjustment on the dynamic range of the input to obtain an enhanced image. The curve estimation process is done in such a way that it maintains the range of the enhanced image and preserves the contrast of neighboring pixels. This curve estimation is inspired by curves adjustment used in photo editing software such as Adobe Photoshop where users can adjust points throughout an image’s tonal range.
Zero-DCE is appealing because of its relaxed assumptions with regard to reference images: it does not require any input/output image pairs during training. This is achieved through a set of carefully formulated non-reference loss functions, which implicitly measure the enhancement quality and guide the training of the network.
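The curve application itself is simple: the paper's quadratic curve LE(x) = x + α·x·(1 − x) is applied iteratively using the per-pixel curve maps predicted by DCE-Net. A minimal sketch follows; the number of iterations and the sign convention of the curve maps can differ between implementations:

```python
import tensorflow as tf

def apply_curves(image, curve_maps, iterations=8):
    """image:      float tensor in [0, 1], shape (batch, h, w, 3)
    curve_maps: DCE-Net output, shape (batch, h, w, 3 * iterations)"""
    x = image
    for i in range(iterations):
        alpha = curve_maps[..., 3 * i : 3 * (i + 1)]
        x = x + alpha * x * (1.0 - x)  # LE(x) = x + a * x * (1 - x), applied per pixel
    return x
```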
Sample Images:
<img src="https://keras.io/img/examples/vision/zero_dce/zero_dce_25_0.png" >
<img src="https://keras.io/img/examples/vision/zero_dce/zero_dce_25_1.png" >
<img src="https://keras.io/img/examples/vision/zero_dce/zero_dce_25_2.png" >
<img src="https://keras.io/img/examples/vision/zero_dce/zero_dce_25_3.png" >
<img src="https://keras.io/img/examples/vision/zero_dce/zero_dce_25_4.png" >
<img src="https://keras.io/img/examples/vision/zero_dce/zero_dce_25_5.png" >
<img src="https://keras.io/img/examples/vision/zero_dce/zero_dce_25_6.png" >
<img src="https://keras.io/img/examples/vision/zero_dce/zero_dce_25_7.png" >
|
{"license": "apache-2.0", "library_name": "keras", "tags": ["image-to-image"]}
|
keras-io/low-light-image-enhancement
| null |
[
"keras",
"image-to-image",
"license:apache-2.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#keras #image-to-image #license-apache-2.0 #has_space #region-us
|
## Zero-DCE for low-light image enhancement
Original Author: Soumik Rakshit <br>
Date created: 2021/09/18 <br>
HF Contribution: Harveen Singh Chadha<br>
Dataset: LOL Dataset
## Spaces Demo
## Description: Implementing Zero-Reference Deep Curve Estimation for low-light image enhancement.
Zero-Reference Deep Curve Estimation or Zero-DCE formulates low-light image enhancement as the task of estimating an image-specific tonal curve with a deep neural network. In this example, we train a lightweight deep network, DCE-Net, to estimate pixel-wise and high-order tonal curves for dynamic range adjustment of a given image.
Zero-DCE takes a low-light image as input and produces high-order tonal curves as its output. These curves are then used for pixel-wise adjustment on the dynamic range of the input to obtain an enhanced image. The curve estimation process is done in such a way that it maintains the range of the enhanced image and preserves the contrast of neighboring pixels. This curve estimation is inspired by curves adjustment used in photo editing software such as Adobe Photoshop where users can adjust points throughout an image’s tonal range.
Zero-DCE is appealing because of its relaxed assumptions with regard to reference images: it does not require any input/output image pairs during training. This is achieved through a set of carefully formulated non-reference loss functions, which implicitly measure the enhancement quality and guide the training of the network.
Sample Images:
<img src="URL >
<img src="URL >
<img src="URL >
<img src="URL >
<img src="URL >
<img src="URL >
<img src="URL >
<img src="URL >
|
[
"## Zero-DCE for low-light image enhancement\n\n\nOriginal Author: Soumik Rakshit <br>\nDate created: 2021/09/18 <br>\nHF Contribution: Harveen Singh Chadha<br>\nDataset: LOL Dataset",
"## Spaces Demo",
"## Description: Implementing Zero-Reference Deep Curve Estimation for low-light image enhancement.\n\n\nZero-Reference Deep Curve Estimation or Zero-DCE formulates low-light image enhancement as the task of estimating an image-specific tonal curve with a deep neural network. In this example, we train a lightweight deep network, DCE-Net, to estimate pixel-wise and high-order tonal curves for dynamic range adjustment of a given image.\n\nZero-DCE takes a low-light image as input and produces high-order tonal curves as its output. These curves are then used for pixel-wise adjustment on the dynamic range of the input to obtain an enhanced image. The curve estimation process is done in such a way that it maintains the range of the enhanced image and preserves the contrast of neighboring pixels. This curve estimation is inspired by curves adjustment used in photo editing software such as Adobe Photoshop where users can adjust points throughout an image’s tonal range.\n\nZero-DCE is appealing because of its relaxed assumptions with regard to reference images: it does not require any input/output image pairs during training. This is achieved through a set of carefully formulated non-reference loss functions, which implicitly measure the enhancement quality and guide the training of the network.\n\n\nSample Images:\n\n<img src=\"URL >\n<img src=\"URL >\n<img src=\"URL >\n<img src=\"URL >\n<img src=\"URL >\n<img src=\"URL >\n<img src=\"URL >\n<img src=\"URL >"
] |
[
"TAGS\n#keras #image-to-image #license-apache-2.0 #has_space #region-us \n",
"## Zero-DCE for low-light image enhancement\n\n\nOriginal Author: Soumik Rakshit <br>\nDate created: 2021/09/18 <br>\nHF Contribution: Harveen Singh Chadha<br>\nDataset: LOL Dataset",
"## Spaces Demo",
"## Description: Implementing Zero-Reference Deep Curve Estimation for low-light image enhancement.\n\n\nZero-Reference Deep Curve Estimation or Zero-DCE formulates low-light image enhancement as the task of estimating an image-specific tonal curve with a deep neural network. In this example, we train a lightweight deep network, DCE-Net, to estimate pixel-wise and high-order tonal curves for dynamic range adjustment of a given image.\n\nZero-DCE takes a low-light image as input and produces high-order tonal curves as its output. These curves are then used for pixel-wise adjustment on the dynamic range of the input to obtain an enhanced image. The curve estimation process is done in such a way that it maintains the range of the enhanced image and preserves the contrast of neighboring pixels. This curve estimation is inspired by curves adjustment used in photo editing software such as Adobe Photoshop where users can adjust points throughout an image’s tonal range.\n\nZero-DCE is appealing because of its relaxed assumptions with regard to reference images: it does not require any input/output image pairs during training. This is achieved through a set of carefully formulated non-reference loss functions, which implicitly measure the enhancement quality and guide the training of the network.\n\n\nSample Images:\n\n<img src=\"URL >\n<img src=\"URL >\n<img src=\"URL >\n<img src=\"URL >\n<img src=\"URL >\n<img src=\"URL >\n<img src=\"URL >\n<img src=\"URL >"
] |
image-to-image
|
keras
|
## Model description
This repo contains the model and the notebook [Low-light image enhancement using MIRNet](https://keras.io/examples/vision/mirnet/).
Full credits go to [Soumik Rakshit](https://github.com/soumik12345)
Reproduced by [Vu Minh Chien](https://www.linkedin.com/in/vumichien/) with a slight change on hyperparameters.
With the goal of recovering high-quality image content from its degraded version, image restoration enjoys numerous applications, such as photography, security, medical imaging, and remote sensing. The MIRNet model for low-light image enhancement is a fully-convolutional architecture that learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
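A minimal inference sketch, assuming the `huggingface_hub` Keras helper and an RGB image scaled to [0, 1]; the file names are placeholders, and the exact preprocessing (resizing, crop sizes) should follow the linked notebook:

```python
import numpy as np
from PIL import Image
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("keras-io/lowlight-enhance-mirnet")

# "low_light.png" is a placeholder path for a low-light RGB photo.
image = np.asarray(Image.open("low_light.png").convert("RGB"), dtype="float32") / 255.0
enhanced = model.predict(image[None, ...])[0]
Image.fromarray((np.clip(enhanced, 0.0, 1.0) * 255).astype("uint8")).save("enhanced.png")
```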
## Dataset
The [LoL Dataset](https://drive.google.com/uc?id=1DdGIJ4PZPlF2ikl8mNM9V-PdVxVLbQi6) has been created for low-light image enhancement. It provides 485 images for training and 15 for testing. Each image pair in the dataset consists of a low-light input image and its corresponding well-exposed reference image.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-04
- train_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: ReduceLROnPlateau
- num_epochs: 50
### Training results
- The results are shown in TensorBoard (Training metrics).
### View Model Demo

<details>
<summary> View Model Plot </summary>

</details>
|
{"library_name": "keras", "tags": ["image-to-image"]}
|
keras-io/lowlight-enhance-mirnet
| null |
[
"keras",
"tensorboard",
"image-to-image",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#keras #tensorboard #image-to-image #has_space #region-us
|
## Model description
This repo contains the model and the notebook Low-light image enhancement using MIRNet.
Full credits go to Soumik Rakshit
Reproduced by Vu Minh Chien with a slight change on hyperparameters.
With the goal of recovering high-quality image content from its degraded version, image restoration enjoys numerous applications, such as photography, security, medical imaging, and remote sensing. The MIRNet model for low-light image enhancement is a fully-convolutional architecture that learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
## Dataset
The LoL Dataset has been created for low-light image enhancement. It provides 485 images for training and 15 for testing. Each image pair in the dataset consists of a low-light input image and its corresponding well-exposed reference image.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-04
- train_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: ReduceLROnPlateau
- num_epochs: 50
### Training results
- The results are shown in TensorBoard (Training metrics).
### View Model Demo
!Model Demo
<details>
<summary> View Model Plot </summary>
!Model Image
</details>
|
[
"## Model description\nThis repo contains the model and the notebook Low-light image enhancement using MIRNet.\n\nFull credits go to Soumik Rakshit\n\nReproduced by Vu Minh Chien with a slight change on hyperparameters.\n\nWith the goal of recovering high-quality image content from its degraded version, image restoration enjoys numerous applications, such as photography, security, medical imaging, and remote sensing. The MIRNet model for low-light image enhancement is a fully-convolutional architecture that learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details",
"## Dataset\nThe LoL Dataset has been created for low-light image enhancement. It provides 485 images for training and 15 for testing. Each image pair in the dataset consists of a low-light input image and its corresponding well-exposed reference image.",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-04\n- train_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: ReduceLROnPlateau\n- num_epochs: 50",
"### Training results\n\n- The results are shown in TensorBoard (Training metrics).",
"### View Model Demo \n\n!Model Demo\n \n\n<details>\n\n<summary> View Model Plot </summary>\n\n !Model Image\n \n</details>"
] |
[
"TAGS\n#keras #tensorboard #image-to-image #has_space #region-us \n",
"## Model description\nThis repo contains the model and the notebook Low-light image enhancement using MIRNet.\n\nFull credits go to Soumik Rakshit\n\nReproduced by Vu Minh Chien with a slight change on hyperparameters.\n\nWith the goal of recovering high-quality image content from its degraded version, image restoration enjoys numerous applications, such as photography, security, medical imaging, and remote sensing. The MIRNet model for low-light image enhancement is a fully-convolutional architecture that learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details",
"## Dataset\nThe LoL Dataset has been created for low-light image enhancement. It provides 485 images for training and 15 for testing. Each image pair in the dataset consists of a low-light input image and its corresponding well-exposed reference image.",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-04\n- train_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: ReduceLROnPlateau\n- num_epochs: 50",
"### Training results\n\n- The results are shown in TensorBoard (Training metrics).",
"### View Model Demo \n\n!Model Demo\n \n\n<details>\n\n<summary> View Model Plot </summary>\n\n !Model Image\n \n</details>"
] |
image-classification
|
keras
|
## Image Classification using MobileViT
This repo contains the model and the notebook [to this Keras example on MobileViT](https://keras.io/examples/vision/mobilevit/).
Full credits to: [Sayak Paul](https://twitter.com/RisingSayak)
## Background Information
The MobileViT architecture (Mehta et al.) combines the benefits of Transformers (Vaswani et al.) and convolutions. With Transformers, we can capture long-range dependencies that result in global representations. With convolutions, we can capture spatial relationships that model locality.
Besides combining the properties of Transformers and convolutions, the authors introduce MobileViT as a general-purpose mobile-friendly backbone for different image recognition tasks. Their findings suggest that, performance-wise, MobileViT is better than other models with the same or higher complexity (MobileNetV3, for example), while being efficient on mobile devices.
## Training Data
The model is trained on a [tf_flowers dataset](https://www.tensorflow.org/datasets/catalog/tf_flowers)
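Roughly how the dataset can be loaded and prepared with TensorFlow Datasets; the image size, split, and batch size below are illustrative and should be checked against the linked example:

```python
import tensorflow as tf
import tensorflow_datasets as tfds

IMAGE_SIZE = 256  # illustrative; use the resolution from the linked example
BATCH_SIZE = 64

def preprocess(example):
    image = tf.image.resize(example["image"], (IMAGE_SIZE, IMAGE_SIZE)) / 255.0
    return image, example["label"]

train_ds = tfds.load("tf_flowers", split="train[:90%]")
train_ds = (
    train_ds.map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)
    .batch(BATCH_SIZE)
    .prefetch(tf.data.AUTOTUNE)
)
```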
|
{"license": ["cc0-1.0"], "library_name": "keras", "tags": ["computer-vision", "image-classification"]}
|
keras-io/mobile-vit-xxs
| null |
[
"keras",
"computer-vision",
"image-classification",
"license:cc0-1.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#keras #computer-vision #image-classification #license-cc0-1.0 #has_space #region-us
|
## Image Classification using MobileViT
This repo contains the model and the notebook to this Keras example on MobileViT.
Full credits to: Sayak Paul
## Background Information
The MobileViT architecture (Mehta et al.) combines the benefits of Transformers (Vaswani et al.) and convolutions. With Transformers, we can capture long-range dependencies that result in global representations. With convolutions, we can capture spatial relationships that model locality.
Besides combining the properties of Transformers and convolutions, the authors introduce MobileViT as a general-purpose mobile-friendly backbone for different image recognition tasks. Their findings suggest that, performance-wise, MobileViT is better than other models with the same or higher complexity (MobileNetV3, for example), while being efficient on mobile devices.
## Training Data
The model is trained on a tf_flowers dataset
|
[
"## Image Classification using MobileViT\nThis repo contains the model and the notebook to this Keras example on MobileViT.\n\nFull credits to: Sayak Paul",
"## Background Information \nMobileViT architecture (Mehta et al.), combines the benefits of Transformers (Vaswani et al.) and convolutions. With Transformers, we can capture long-range dependencies that result in global representations. With convolutions, we can capture spatial relationships that model locality.\n\nBesides combining the properties of Transformers and convolutions, the authors introduce MobileViT as a general-purpose mobile-friendly backbone for different image recognition tasks. Their findings suggest that, performance-wise, MobileViT is better than other models with the same or higher complexity (MobileNetV3, for example), while being efficient on mobile devices.",
"## Training Data\nThe model is trained on a tf_flowers dataset"
] |
[
"TAGS\n#keras #computer-vision #image-classification #license-cc0-1.0 #has_space #region-us \n",
"## Image Classification using MobileViT\nThis repo contains the model and the notebook to this Keras example on MobileViT.\n\nFull credits to: Sayak Paul",
"## Background Information \nMobileViT architecture (Mehta et al.), combines the benefits of Transformers (Vaswani et al.) and convolutions. With Transformers, we can capture long-range dependencies that result in global representations. With convolutions, we can capture spatial relationships that model locality.\n\nBesides combining the properties of Transformers and convolutions, the authors introduce MobileViT as a general-purpose mobile-friendly backbone for different image recognition tasks. Their findings suggest that, performance-wise, MobileViT is better than other models with the same or higher complexity (MobileNetV3, for example), while being efficient on mobile devices.",
"## Training Data\nThe model is trained on a tf_flowers dataset"
] |
image-segmentation
|
keras
|
## Model description
The original idea from Keras examples [Monocular depth estimation](https://keras.io/examples/vision/depth_estimation/) of author [Victor Basu](https://www.linkedin.com/in/victor-basu-520958147/)
Full credits go to [Vu Minh Chien](https://www.linkedin.com/in/vumichien/)
Depth estimation is a crucial step towards inferring scene geometry from 2D images. The goal in monocular depth estimation is to predict the depth value of each pixel or infer depth information, given only a single RGB image as input.
## Dataset
[NYU Depth Dataset V2](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html) is comprised of video sequences from a variety of indoor scenes as recorded by both the RGB and Depth cameras from the Microsoft Kinect.
## Training procedure
### Training hyperparameters
**Model architecture**:
- UNet with a pretrained DenseNet 201 backbone.
The following hyperparameters were used during training:
- learning_rate: 1e-04
- train_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: ReduceLROnPlateau
- num_epochs: 10
### Training results
| Epoch | Training loss | Validation Loss | Learning rate |
|:------:|:-------------:|:---------------:|:-------------:|
| 1 | 0.1333 | 0.1315 | 1e-04 |
| 2 | 0.0948 | 0.1232 | 1e-04 |
| 3 | 0.0834 | 0.1220 | 1e-04 |
| 4 | 0.0775 | 0.1213 | 1e-04 |
| 5 | 0.0736 | 0.1196 | 1e-04 |
| 6 | 0.0707 | 0.1205 | 1e-04 |
| 7 | 0.0687 | 0.1190 | 1e-04 |
| 8 | 0.0667 | 0.1177 | 1e-04 |
| 9 | 0.0654 | 0.1177 | 1e-04 |
| 10 | 0.0635 | 0.1182 | 9e-05 |
### View Model Demo

<details>
<summary> View Model Plot </summary>

</details>
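For orientation, a much-simplified sketch of the architecture described above (a pretrained DenseNet201 encoder followed by an upsampling decoder) could look as follows. The real notebook is U-Net-like, with skip connections and a custom depth loss; the 256x256 input size and the plain `mse` loss below are placeholders.
```python
# Simplified sketch, not the exact notebook code: DenseNet201 encoder plus a
# plain upsampling decoder that predicts one depth value per pixel.
import tensorflow as tf
from tensorflow.keras import layers

def build_depth_model(input_shape=(256, 256, 3)):
    encoder = tf.keras.applications.DenseNet201(
        include_top=False, weights="imagenet", input_shape=input_shape
    )
    x = encoder.output                                   # low-resolution features
    for filters in (256, 128, 64, 32, 16):               # upsample back to 256x256
        x = layers.UpSampling2D(2, interpolation="bilinear")(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    depth = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)
    return tf.keras.Model(encoder.input, depth, name="depth_estimator")

model = build_depth_model()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), loss="mse")
```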
|
{"library_name": "keras", "tags": ["image-segmentation"]}
|
keras-io/monocular-depth-estimation
| null |
[
"keras",
"tensorboard",
"image-segmentation",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#keras #tensorboard #image-segmentation #has_space #region-us
|
Model description
-----------------
The original idea from Keras examples Monocular depth estimation of author Victor Basu
Full credits go to Vu Minh Chien
Depth estimation is a crucial step towards inferring scene geometry from 2D images. The goal in monocular depth estimation is to predict the depth value of each pixel or infer depth information, given only a single RGB image as input.
Dataset
-------
NYU Depth Dataset V2 is comprised of video sequences from a variety of indoor scenes as recorded by both the RGB and Depth cameras from the Microsoft Kinect.
Training procedure
------------------
### Training hyperparameters
Model architecture:
* UNet with a pretrained DenseNet 201 backbone.
The following hyperparameters were used during training:
* learning\_rate: 1e-04
* train\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: ReduceLROnPlateau
* num\_epochs: 10
### Training results
### View Model Demo
!Model Demo
View Model Plot
!Model Image
|
[
"### Training hyperparameters\n\n\nModel architecture:\n\n\n* UNet with a pretrained DenseNet 201 backbone.\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-04\n* train\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: ReduceLROnPlateau\n* num\\_epochs: 10",
"### Training results",
"### View Model Demo\n\n\n!Model Demo\n\n\n\n View Model Plot \n!Model Image"
] |
[
"TAGS\n#keras #tensorboard #image-segmentation #has_space #region-us \n",
"### Training hyperparameters\n\n\nModel architecture:\n\n\n* UNet with a pretrained DenseNet 201 backbone.\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-04\n* train\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: ReduceLROnPlateau\n* num\\_epochs: 10",
"### Training results",
"### View Model Demo\n\n\n!Model Demo\n\n\n\n View Model Plot \n!Model Image"
] |
null |
keras
|
## Tensorflow Keras Implementation of Multimodal entailment.
This repo contains the models [Multimodal Entailment](https://keras.io/examples/nlp/multimodal_entailment/#dataset-visualization).
Credits: [Sayak Paul](https://twitter.com/RisingSayak) - Original Author
HF Contribution: [Rishav Chandra Varma](https://huggingface.co/reichenbach)
## Background Information
### Introduction
In this example, we will build and train a model for predicting multimodal entailment. We will be using the [multimodal entailment dataset](https://github.com/google-research-datasets/recognizing-multimodal-entailment) recently introduced by Google Research.
### What is multimodal entailment?
On social media platforms, to audit and moderate content we may want to find answers to the following questions in near real-time:
Does a given piece of information contradict the other?
Does a given piece of information imply the other?
In NLP, this task is called analyzing textual entailment. However, that's only when the information comes from text content. In practice, it's often the case that the information available comes not just from text content, but from a multimodal combination of text, images, audio, video, etc. Multimodal entailment is simply the extension of textual entailment to a variety of new input modalities.
|
{"tags": ["multimodal-entailment", "generic"]}
|
keras-io/multimodal-entailment
| null |
[
"keras",
"multimodal-entailment",
"generic",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#keras #multimodal-entailment #generic #has_space #region-us
|
## Tensorflow Keras Implementation of Multimodal entailment.
This repo contains the models Multimodal Entailment.
Credits: Sayak Paul - Original Author
HF Contribution: Rishav Chandra Varma
## Background Information
### Introduction
In this example, we will build and train a model for predicting multimodal entailment. We will be using the multimodal entailment dataset recently introduced by Google Research.
### What is multimodal entailment?
On social media platforms, to audit and moderate content we may want to find answers to the following questions in near real-time:
Does a given piece of information contradict the other?
Does a given piece of information imply the other?
In NLP, this task is called analyzing textual entailment. However, that's only when the information comes from text content. In practice, it's often the case that the information available comes not just from text content, but from a multimodal combination of text, images, audio, video, etc. Multimodal entailment is simply the extension of textual entailment to a variety of new input modalities.
|
[
"## Tensorflow Keras Implementation of Multimodal entailment.\n\nThis repo contains the models Multimodal Entailment.\n\nCredits: Sayak Paul - Original Author\n\nHF Contribution: Rishav Chandra Varma",
"## Background Information",
"### Introduction\n\nIn this example, we will build and train a model for predicting multimodal entailment. We will be using the multimodal entailment dataset recently introduced by Google Research.",
"### What is multimodal entailment?\n\nOn social media platforms, to audit and moderate content we may want to find answers to the following questions in near real-time:\n\n Does a given piece of information contradict the other?\n Does a given piece of information imply the other?\n\nIn NLP, this task is called analyzing textual entailment. However, that's only when the information comes from text content. In practice, it's often the case the information available comes not just from text content, but from a multimodal combination of text, images, audio, video, etc. Multimodal entailment is simply the extension of textual entailment to a variety of new input modalities."
] |
[
"TAGS\n#keras #multimodal-entailment #generic #has_space #region-us \n",
"## Tensorflow Keras Implementation of Multimodal entailment.\n\nThis repo contains the models Multimodal Entailment.\n\nCredits: Sayak Paul - Original Author\n\nHF Contribution: Rishav Chandra Varma",
"## Background Information",
"### Introduction\n\nIn this example, we will build and train a model for predicting multimodal entailment. We will be using the multimodal entailment dataset recently introduced by Google Research.",
"### What is multimodal entailment?\n\nOn social media platforms, to audit and moderate content we may want to find answers to the following questions in near real-time:\n\n Does a given piece of information contradict the other?\n Does a given piece of information imply the other?\n\nIn NLP, this task is called analyzing textual entailment. However, that's only when the information comes from text content. In practice, it's often the case the information available comes not just from text content, but from a multimodal combination of text, images, audio, video, etc. Multimodal entailment is simply the extension of textual entailment to a variety of new input modalities."
] |
null |
keras
|
## Tensorflow Keras Implementation of Named Entity Recognition using Transformers.
This repo contains code using the model. [Named Entity Recognition using Transformers](https://keras.io/examples/nlp/ner_transformers/).
Credits: [Varun Singh](https://www.linkedin.com/in/varunsingh2/) - Original Author
HF Contribution: [Rishav Chandra Varma](https://huggingface.co/reichenbach)
## Background Information
### Introduction
Named Entity Recognition (NER) is the process of identifying named entities in text. Examples of named entities are: "Person", "Location", "Organization", "Dates" etc. NER is essentially a token classification task where every token is classified into one or more predetermined categories.
We will train a simple Transformer-based model to perform NER. We will be using the data from the CoNLL 2003 shared task. For more information about the dataset, please visit the [dataset website](https://www.clips.uantwerpen.be/conll2003/ner/). However, since obtaining this data requires an additional step of getting a free license, we will be using HuggingFace's datasets library, which contains a processed version of this [dataset](https://huggingface.co/datasets/conll2003).
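As a small illustration, the processed data can be pulled straight from the Hub with the datasets library; only `load_dataset("conll2003")` is taken from this card, the rest of the snippet is a sketch.
```python
# Peek at one CoNLL-2003 example and map its integer labels back to tag names.
from datasets import load_dataset

conll = load_dataset("conll2003")
example = conll["train"][0]
print(example["tokens"])     # list of words
print(example["ner_tags"])   # integer NER labels aligned with the tokens

# Map the label ids back to tag names such as B-PER or I-ORG.
tag_names = conll["train"].features["ner_tags"].feature.names
print([tag_names[tag] for tag in example["ner_tags"]])
```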
|
{"tags": ["multimodal-entailment", "generic"]}
|
keras-io/ner-with-transformers
| null |
[
"keras",
"multimodal-entailment",
"generic",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#keras #multimodal-entailment #generic #has_space #region-us
|
## Tensorflow Keras Implementation of Named Entity Recognition using Transformers.
This repo contains code using the model. Named Entity Recognition using Transformers.
Credits: Varun Singh - Original Author
HF Contribution: Rishav Chandra Varma
## Background Information
### Introduction
Named Entity Recognition (NER) is the process of identifying named entities in text. Examples of named entities are: "Person", "Location", "Organization", "Dates" etc. NER is essentially a token classification task where every token is classified into one or more predetermined categories.
We will train a simple Transformer based model to perform NER. We will be using the data from CoNLL 2003 shared task. For more information about the dataset, please visit the dataset website. However, since obtaining this data requires an additional step of getting a free license, we will be using HuggingFace's datasets library which contains a processed version of this dataset.
|
[
"## Tensorflow Keras Implementation of Named Entity Recognition using Transformers.\n\nThis repo contains code using the model. Named Entity Recognition using Transformers.\n\nCredits: Varun Singh - Original Author\n\nHF Contribution: Rishav Chandra Varma",
"## Background Information",
"### Introduction\n\nNamed Entity Recognition (NER) is the process of identifying named entities in text. Example of named entities are: \"Person\", \"Location\", \"Organization\", \"Dates\" etc. NER is essentially a token classification task where every token is classified into one or more predetermined categories.\n\nWe will train a simple Transformer based model to perform NER. We will be using the data from CoNLL 2003 shared task. For more information about the dataset, please visit the dataset website. However, since obtaining this data requires an additional step of getting a free license, we will be using HuggingFace's datasets library which contains a processed version of this dataset."
] |
[
"TAGS\n#keras #multimodal-entailment #generic #has_space #region-us \n",
"## Tensorflow Keras Implementation of Named Entity Recognition using Transformers.\n\nThis repo contains code using the model. Named Entity Recognition using Transformers.\n\nCredits: Varun Singh - Original Author\n\nHF Contribution: Rishav Chandra Varma",
"## Background Information",
"### Introduction\n\nNamed Entity Recognition (NER) is the process of identifying named entities in text. Example of named entities are: \"Person\", \"Location\", \"Organization\", \"Dates\" etc. NER is essentially a token classification task where every token is classified into one or more predetermined categories.\n\nWe will train a simple Transformer based model to perform NER. We will be using the data from CoNLL 2003 shared task. For more information about the dataset, please visit the dataset website. However, since obtaining this data requires an additional step of getting a free license, we will be using HuggingFace's datasets library which contains a processed version of this dataset."
] |
image-to-text
|
keras
|
## Keras Implementation of OCR model for reading captcha 🤖🦹🏻
This repo contains the model and the notebook [to this Keras example on OCR model for reading captcha](https://keras.io/examples/vision/captcha_ocr/).
Full credits to: [Aakash Kumar Nain](https://twitter.com/A_K_Nain)
## Background Information
This example demonstrates a simple OCR model built with the Functional API. Apart from combining CNN and RNN, it also illustrates how you can instantiate a new layer and use it as an "Endpoint layer" for implementing CTC loss.
This model uses subclassing; learn more about subclassing from [this guide](https://keras.io/guides/making_new_layers_and_models_via_subclassing/).

|
{"license": ["cc0-1.0"], "tags": ["ocr", "computer vision", "object detection", "image-to-text"]}
|
keras-io/ocr-for-captcha
| null |
[
"keras",
"ocr",
"computer vision",
"object detection",
"image-to-text",
"license:cc0-1.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#keras #ocr #computer vision #object detection #image-to-text #license-cc0-1.0 #has_space #region-us
|
## Keras Implementation of OCR model for reading captcha
This repo contains the model and the notebook to this Keras example on OCR model for reading captcha.
Full credits to: Aakash Kumar Nain
## Background Information
This example demonstrates a simple OCR model built with the Functional API. Apart from combining CNN and RNN, it also illustrates how you can instantiate a new layer and use it as an "Endpoint layer" for implementing CTC loss.
This model uses subclassing, learn more about subclassing from this guide.
!ocr
|
[
"## Keras Implementation of OCR model for reading captcha \n\nThis repo contains the model and the notebook to this Keras example on OCR model for reading captcha.\n\nFull credits to: Aakash Kumar Nain",
"## Background Information \nThis example demonstrates a simple OCR model built with the Functional API. Apart from combining CNN and RNN, it also illustrates how you can instantiate a new layer and use it as an \"Endpoint layer\" for implementing CTC loss. \nThis model uses subclassing, learn more about subclassing from this guide.\n!ocr"
] |
[
"TAGS\n#keras #ocr #computer vision #object detection #image-to-text #license-cc0-1.0 #has_space #region-us \n",
"## Keras Implementation of OCR model for reading captcha \n\nThis repo contains the model and the notebook to this Keras example on OCR model for reading captcha.\n\nFull credits to: Aakash Kumar Nain",
"## Background Information \nThis example demonstrates a simple OCR model built with the Functional API. Apart from combining CNN and RNN, it also illustrates how you can instantiate a new layer and use it as an \"Endpoint layer\" for implementing CTC loss. \nThis model uses subclassing, learn more about subclassing from this guide.\n!ocr"
] |
null |
keras
|
## Keras Implementation of PixelCNN on MNIST 🔢
This repo contains the model [PixelCNN](https://keras.io/examples/generative/pixelcnn/).
Sample images generated:
<img src="https://i.ibb.co/RDWbJBM/image.png" width="120" height='120'> <img src="https://i.ibb.co/kGPTDDb/104c083f-68e4-4d10-8b37-a242a7f10dd6.png" width="120" height='120'> <img src="https://i.ibb.co/9Wqqhyc/indir-5.png" width="120" height='120'> <img src="https://i.ibb.co/x7yh5py/indir-6.png" width="120" height='120'>
Full credits to author: [ADMoreau](https://github.com/ADMoreau)
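The key ingredient of PixelCNN is a masked convolution, which keeps the model autoregressive over pixels. A simplified sketch of such a layer is shown below; the class name and constructor are illustrative, not the exact code from the example.
```python
# Masked convolution sketch: the kernel is multiplied by a binary mask so each
# output pixel only sees pixels above it and to its left. Mask "A" (first layer)
# also hides the centre pixel; mask "B" keeps it.
import numpy as np
import tensorflow as tf
from tensorflow import keras

class MaskedConv2D(keras.layers.Layer):
    def __init__(self, mask_type, **conv_kwargs):
        super().__init__()
        self.mask_type = mask_type
        self.conv = keras.layers.Conv2D(**conv_kwargs)

    def build(self, input_shape):
        self.conv.build(input_shape)
        kh, kw, _, _ = self.conv.kernel.shape
        mask = np.zeros(self.conv.kernel.shape, dtype=np.float32)
        mask[: kh // 2, ...] = 1.0              # rows strictly above the centre
        mask[kh // 2, : kw // 2, ...] = 1.0     # same row, strictly left of centre
        if self.mask_type == "B":
            mask[kh // 2, kw // 2, ...] = 1.0   # type B may look at the centre pixel
        self.mask = tf.constant(mask)

    def call(self, inputs):
        self.conv.kernel.assign(self.conv.kernel * self.mask)
        return self.conv(inputs)

# e.g. MaskedConv2D("B", filters=128, kernel_size=3, padding="same")
```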
|
{"license": ["cc0-1.0"], "tags": ["convnet", "mnist", "generative"]}
|
keras-io/pixel-cnn-mnist
| null |
[
"keras",
"convnet",
"mnist",
"generative",
"license:cc0-1.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#keras #convnet #mnist #generative #license-cc0-1.0 #has_space #region-us
|
## Keras Implementation of PixelCNN on MNIST
This repo contains the model PixelCNN.
Sample images generated:
<img src="https://i.URL width="120" height='120'> <img src="https://i.URL width="120" height='120'> <img src="https://i.URL width="120" height='120'> <img src="https://i.URL width="120" height='120'>
Full credits to author: ADMoreau
|
[
"## Keras Implementation of PixelCNN on MNIST \n\nThis repo contains the model PixelCNN.\n\nSample images generated:\n\n<img src=\"https://i.URL width=\"120\" height='120'> <img src=\"https://i.URL width=\"120\" height='120'> <img src=\"https://i.URL width=\"120\" height='120'> <img src=\"https://i.URL width=\"120\" height='120'>\n\n\nFull credits to author: ADMoreau"
] |
[
"TAGS\n#keras #convnet #mnist #generative #license-cc0-1.0 #has_space #region-us \n",
"## Keras Implementation of PixelCNN on MNIST \n\nThis repo contains the model PixelCNN.\n\nSample images generated:\n\n<img src=\"https://i.URL width=\"120\" height='120'> <img src=\"https://i.URL width=\"120\" height='120'> <img src=\"https://i.URL width=\"120\" height='120'> <img src=\"https://i.URL width=\"120\" height='120'>\n\n\nFull credits to author: ADMoreau"
] |
null |
keras
|
## Point cloud segmentation with PointNet
This repo contains [an Implementation of a PointNet-based model for segmenting point clouds.](https://keras.io/examples/vision/pointnet_segmentation/).
Full credits to [Soumik Rakshit](https://github.com/soumik12345), [Sayak Paul](https://github.com/sayakpaul)
## Background Information
A "point cloud" is an important type of data structure for storing geometric shape data. Due to its irregular format, it's often transformed into regular 3D voxel grids or collections of images before being used in deep learning applications, a step which makes the data unnecessarily large. The PointNet family of models solves this problem by directly consuming point clouds, respecting the permutation-invariance property of the point data. The PointNet family of models provides a simple, unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing.
In this example, we demonstrate the implementation of the PointNet architecture for shape segmentation.
**References**
* [PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation](https://arxiv.org/abs/1612.00593)
* [Point cloud classification with PointNet](https://keras.io/examples/vision/pointnet/)
* [Spatial Transformer Networks](https://arxiv.org/abs/1506.02025)


## Training Dataset
This model was trained on the [ShapeNet dataset](https://shapenet.org/).
The ShapeNet dataset is an ongoing effort to establish a richly-annotated, large-scale dataset of 3D shapes. ShapeNetCore is a subset of the full ShapeNet dataset with clean single 3D models and manually verified category and alignment annotations. It covers 55 common object categories, with about 51,300 unique 3D models.
**Prediction example**

|
{"license": "cc0-1.0", "tags": ["pointnet", "segmentation", "3d", "image"]}
|
keras-io/pointnet_segmentation
| null |
[
"keras",
"pointnet",
"segmentation",
"3d",
"image",
"arxiv:1612.00593",
"arxiv:1506.02025",
"license:cc0-1.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1612.00593",
"1506.02025"
] |
[] |
TAGS
#keras #pointnet #segmentation #3d #image #arxiv-1612.00593 #arxiv-1506.02025 #license-cc0-1.0 #has_space #region-us
|
## Point cloud segmentation with PointNet
This repo contains an Implementation of a PointNet-based model for segmenting point clouds..
Full credits to Soumik Rakshit, Sayak Paul
## Background Information
A "point cloud" is an important type of data structure for storing geometric shape data. Due to its irregular format, it's often transformed into regular 3D voxel grids or collections of images before being used in deep learning applications, a step which makes the data unnecessarily large. The PointNet family of models solves this problem by directly consuming point clouds, respecting the permutation-invariance property of the point data. The PointNet family of models provides a simple, unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing.
In this example, we demonstrate the implementation of the PointNet architecture for shape segmentation.
References
* PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
* Point cloud classification with PointNet
* Spatial Transformer Networks
!preview
!preview
## Training Dataset
This model was trained on the ShapeNet dataset.
The ShapeNet dataset is an ongoing effort to establish a richly-annotated, large-scale dataset of 3D shapes. ShapeNetCore is a subset of the full ShapeNet dataset with clean single 3D models and manually verified category and alignment annotations. It covers 55 common object categories, with about 51,300 unique 3D models.
Prediction example
!result
|
[
"## Point cloud segmentation with PointNet \n\nThis repo contains an Implementation of a PointNet-based model for segmenting point clouds..\n\nFull credits to Soumik Rakshit, Sayak Paul",
"## Background Information\nA \"point cloud\" is an important type of data structure for storing geometric shape data. Due to its irregular format, it's often transformed into regular 3D voxel grids or collections of images before being used in deep learning applications, a step which makes the data unnecessarily large. The PointNet family of models solves this problem by directly consuming point clouds, respecting the permutation-invariance property of the point data. The PointNet family of models provides a simple, unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing.\n\nIn this example, we demonstrate the implementation of the PointNet architecture for shape segmentation.\n\nReferences\n* PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation\n* Point cloud classification with PointNet\n* Spatial Transformer Networks\n!preview\n!preview",
"## Training Dataset\nThis model was trained on the ShapeNet dataset.\n\nThe ShapeNet dataset is an ongoing effort to establish a richly-annotated, large-scale dataset of 3D shapes. ShapeNetCore is a subset of the full ShapeNet dataset with clean single 3D models and manually verified category and alignment annotations. It covers 55 common object categories, with about 51,300 unique 3D models.\n\nPrediction example\n!result"
] |
[
"TAGS\n#keras #pointnet #segmentation #3d #image #arxiv-1612.00593 #arxiv-1506.02025 #license-cc0-1.0 #has_space #region-us \n",
"## Point cloud segmentation with PointNet \n\nThis repo contains an Implementation of a PointNet-based model for segmenting point clouds..\n\nFull credits to Soumik Rakshit, Sayak Paul",
"## Background Information\nA \"point cloud\" is an important type of data structure for storing geometric shape data. Due to its irregular format, it's often transformed into regular 3D voxel grids or collections of images before being used in deep learning applications, a step which makes the data unnecessarily large. The PointNet family of models solves this problem by directly consuming point clouds, respecting the permutation-invariance property of the point data. The PointNet family of models provides a simple, unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing.\n\nIn this example, we demonstrate the implementation of the PointNet architecture for shape segmentation.\n\nReferences\n* PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation\n* Point cloud classification with PointNet\n* Spatial Transformer Networks\n!preview\n!preview",
"## Training Dataset\nThis model was trained on the ShapeNet dataset.\n\nThe ShapeNet dataset is an ongoing effort to establish a richly-annotated, large-scale dataset of 3D shapes. ShapeNetCore is a subset of the full ShapeNet dataset with clean single 3D models and manually verified category and alignment annotations. It covers 55 common object categories, with about 51,300 unique 3D models.\n\nPrediction example\n!result"
] |
null |
keras
|
## Keras Implementation of Proximal Policy Optimization on Cartpole Environment 🔨🤖
This repo contains the model and the notebook [to this Keras example on PPO for Cartpole](https://keras.io/examples/rl/ppo_cartpole/).
Full credits to: Ilias Chrysovergis

## Background Information
### CartPole-v0
A pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The system is controlled by applying a force of +1 or -1 to the cart. The pendulum starts upright, and the goal is to prevent it from falling over. A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center. After 200 steps the episode ends. Thus, the highest return we can get is equal to 200.
### Proximal Policy Optimization
PPO is a policy gradient method and can be used for environments with either discrete or continuous action spaces. It trains a stochastic policy in an on-policy way. Also, it utilizes the actor critic method. The actor maps the observation to an action and the critic gives an expectation of the rewards of the agent for the observation given. Firstly, it collects a set of trajectories for each epoch by sampling from the latest version of the stochastic policy. Then, the rewards-to-go and the advantage estimates are computed in order to update the policy and fit the value function. The policy is updated via a stochastic gradient ascent optimizer, while the value function is fitted via some gradient descent algorithm. This procedure is applied for many epochs until the environment is solved.
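The policy update described above relies on PPO's clipped surrogate objective. A minimal sketch of that loss is shown below; the `clip_ratio=0.2` default is an assumption, not a value taken from this card.
```python
# Clipped surrogate objective: the probability ratio compares the new and the
# data-collecting policy, and clipping keeps the update close to the old policy.
import tensorflow as tf

def ppo_policy_loss(new_log_probs, old_log_probs, advantages, clip_ratio=0.2):
    ratio = tf.exp(new_log_probs - old_log_probs)        # pi_new(a|s) / pi_old(a|s)
    clipped = tf.clip_by_value(ratio, 1.0 - clip_ratio, 1.0 + clip_ratio)
    # Maximise the surrogate objective, i.e. minimise its negation.
    return -tf.reduce_mean(tf.minimum(ratio * advantages, clipped * advantages))
```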
|
{"license": ["cc0-1.0"], "tags": ["reinforcement learning", "proximal policy optimization"]}
|
keras-io/ppo-cartpole
| null |
[
"keras",
"reinforcement learning",
"proximal policy optimization",
"license:cc0-1.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#keras #reinforcement learning #proximal policy optimization #license-cc0-1.0 #region-us
|
## Keras Implementation of Proximal Policy Optimization on Cartpole Environment
This repo contains the model and the notebook to this Keras example on PPO for Cartpole.
Full credits to: Ilias Chrysovergis
!cartpole_gif
## Background Information
### CartPole-v0
A pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The system is controlled by applying a force of +1 or -1 to the cart. The pendulum starts upright, and the goal is to prevent it from falling over. A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center. After 200 steps the episode ends. Thus, the highest return we can get is equal to 200.
### Proximal Policy Optimization
PPO is a policy gradient method and can be used for environments with either discrete or continuous action spaces. It trains a stochastic policy in an on-policy way. Also, it utilizes the actor critic method. The actor maps the observation to an action and the critic gives an expectation of the rewards of the agent for the observation given. Firstly, it collects a set of trajectories for each epoch by sampling from the latest version of the stochastic policy. Then, the rewards-to-go and the advantage estimates are computed in order to update the policy and fit the value function. The policy is updated via a stochastic gradient ascent optimizer, while the value function is fitted via some gradient descent algorithm. This procedure is applied for many epochs until the environment is solved.
|
[
"## Keras Implementation of Proximal Policy Optimization on Cartpole Environment \n\nThis repo contains the model and the notebook to this Keras example on PPO for Cartpole.\n\nFull credits to: Ilias Chrysovergis \n\n!cartpole_gif",
"## Background Information",
"### CartPole-v0\nA pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The system is controlled by applying a force of +1 or -1 to the cart. The pendulum starts upright, and the goal is to prevent it from falling over. A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center. After 200 steps the episode ends. Thus, the highest return we can get is equal to 200.",
"### Proximal Policy Optimization\nPPO is a policy gradient method and can be used for environments with either discrete or continuous action spaces. It trains a stochastic policy in an on-policy way. Also, it utilizes the actor critic method. The actor maps the observation to an action and the critic gives an expectation of the rewards of the agent for the observation given. Firstly, it collects a set of trajectories for each epoch by sampling from the latest version of the stochastic policy. Then, the rewards-to-go and the advantage estimates are computed in order to update the policy and fit the value function. The policy is updated via a stochastic gradient ascent optimizer, while the value function is fitted via some gradient descent algorithm. This procedure is applied for many epochs until the environment is solved."
] |
[
"TAGS\n#keras #reinforcement learning #proximal policy optimization #license-cc0-1.0 #region-us \n",
"## Keras Implementation of Proximal Policy Optimization on Cartpole Environment \n\nThis repo contains the model and the notebook to this Keras example on PPO for Cartpole.\n\nFull credits to: Ilias Chrysovergis \n\n!cartpole_gif",
"## Background Information",
"### CartPole-v0\nA pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The system is controlled by applying a force of +1 or -1 to the cart. The pendulum starts upright, and the goal is to prevent it from falling over. A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center. After 200 steps the episode ends. Thus, the highest return we can get is equal to 200.",
"### Proximal Policy Optimization\nPPO is a policy gradient method and can be used for environments with either discrete or continuous action spaces. It trains a stochastic policy in an on-policy way. Also, it utilizes the actor critic method. The actor maps the observation to an action and the critic gives an expectation of the rewards of the agent for the observation given. Firstly, it collects a set of trajectories for each epoch by sampling from the latest version of the stochastic policy. Then, the rewards-to-go and the advantage estimates are computed in order to update the policy and fit the value function. The policy is updated via a stochastic gradient ascent optimizer, while the value function is fitted via some gradient descent algorithm. This procedure is applied for many epochs until the environment is solved."
] |
null |
keras
|
## RandAugment for Image Classification for Improved Robustness on the 🤗Hub!
[Paper](https://arxiv.org/abs/1909.13719) | [Keras Tutorial](https://keras.io/examples/vision/randaugment/)
Keras Tutorial Credit goes to : [Sayak Paul](https://twitter.com/RisingSayak)
**Excerpt from the Tutorial:**
Data augmentation is a very useful technique that can help to improve the translational invariance of convolutional neural networks (CNN). RandAugment is a stochastic vision data augmentation routine composed of strong augmentation transforms like color jitters, Gaussian blurs, saturations, etc. along with more traditional augmentation transforms such as random crops.
Recently, it has been a key component of works like [Noisy Student Training](https://arxiv.org/abs/1911.04252) and [Unsupervised Data Augmentation for Consistency Training](https://arxiv.org/abs/1904.12848). It has been also central to the success of EfficientNets.
## About The dataset
The model was trained on [**CIFAR-10**](https://huggingface.co/datasets/cifar10), consisting of 60000 32x32 color images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.
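To make the (N, M) sampling idea concrete, here is a deliberately simplified, tf.image-only stand-in. The real RandAugment policy draws from a much larger pool of transforms at a tunable magnitude; this sketch only illustrates sampling a few random ops per image.
```python
# Simplified RandAugment-style pipeline (assumes float images in [0, 1]).
import tensorflow as tf

AUGMENTATIONS = [
    lambda img: tf.image.random_brightness(img, max_delta=0.3),
    lambda img: tf.image.random_contrast(img, 0.7, 1.3),
    lambda img: tf.image.random_saturation(img, 0.7, 1.3),
    lambda img: tf.image.random_flip_left_right(img),
]

def rand_augment_lite(image, num_ops=2):
    for _ in range(num_ops):
        idx = tf.random.uniform([], 0, len(AUGMENTATIONS), dtype=tf.int32)
        # tf.switch_case dispatches to one randomly chosen augmentation.
        image = tf.switch_case(idx, [lambda f=f: f(image) for f in AUGMENTATIONS])
    return tf.clip_by_value(image, 0.0, 1.0)

# dataset = dataset.map(lambda x, y: (rand_augment_lite(x), y))
```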
|
{"license": "apache-2.0", "tags": ["RandAugment", "Image Classification"], "datasets": ["cifar10"], "metrics": ["Accuracy"]}
|
keras-io/randaugment
| null |
[
"keras",
"RandAugment",
"Image Classification",
"dataset:cifar10",
"arxiv:1909.13719",
"arxiv:1911.04252",
"arxiv:1904.12848",
"license:apache-2.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1909.13719",
"1911.04252",
"1904.12848"
] |
[] |
TAGS
#keras #RandAugment #Image Classification #dataset-cifar10 #arxiv-1909.13719 #arxiv-1911.04252 #arxiv-1904.12848 #license-apache-2.0 #has_space #region-us
|
## RandAugment for Image Classification for Improved Robustness on the Hub!
Paper | Keras Tutorial
Keras Tutorial Credit goes to : Sayak Paul
Excerpt from the Tutorial:
Data augmentation is a very useful technique that can help to improve the translational invariance of convolutional neural networks (CNN). RandAugment is a stochastic vision data augmentation routine composed of strong augmentation transforms like color jitters, Gaussian blurs, saturations, etc. along with more traditional augmentation transforms such as random crops.
Recently, it has been a key component of works like Noisy Student Training and Unsupervised Data Augmentation for Consistency Training. It has been also central to the success of EfficientNets.
## About The dataset
The model was trained on CIFAR-10, consisting of 60000 32x32 color images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.
|
[
"## RandAugment for Image Classification for Improved Robustness on the Hub!\n\nPaper | Keras Tutorial\n\nKeras Tutorial Credit goes to : Sayak Paul\n\nExcerpt from the Tutorial:\n\nData augmentation is a very useful technique that can help to improve the translational invariance of convolutional neural networks (CNN). RandAugment is a stochastic vision data augmentation routine composed of strong augmentation transforms like color jitters, Gaussian blurs, saturations, etc. along with more traditional augmentation transforms such as random crops.\n\nRecently, it has been a key component of works like Noisy Student Training and Unsupervised Data Augmentation for Consistency Training. It has been also central to the success of EfficientNets.",
"## About The dataset\n\nThe model was trained on CIFAR-10, consisting of 60000 32x32 color images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images."
] |
[
"TAGS\n#keras #RandAugment #Image Classification #dataset-cifar10 #arxiv-1909.13719 #arxiv-1911.04252 #arxiv-1904.12848 #license-apache-2.0 #has_space #region-us \n",
"## RandAugment for Image Classification for Improved Robustness on the Hub!\n\nPaper | Keras Tutorial\n\nKeras Tutorial Credit goes to : Sayak Paul\n\nExcerpt from the Tutorial:\n\nData augmentation is a very useful technique that can help to improve the translational invariance of convolutional neural networks (CNN). RandAugment is a stochastic vision data augmentation routine composed of strong augmentation transforms like color jitters, Gaussian blurs, saturations, etc. along with more traditional augmentation transforms such as random crops.\n\nRecently, it has been a key component of works like Noisy Student Training and Unsupervised Data Augmentation for Consistency Training. It has been also central to the success of EfficientNets.",
"## About The dataset\n\nThe model was trained on CIFAR-10, consisting of 60000 32x32 color images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images."
] |
image-segmentation
|
generic
|
## Keras semantic segmentation models on the 🤗Hub! 🐶 🐕 🐩
Full credits go to [François Chollet](https://twitter.com/fchollet).
This repository contains the model from [this notebook on segmenting pets using U-net-like architecture](https://keras.io/examples/vision/oxford_pets_image_segmentation/). We've changed the inference part to enable the segmentation widget on the Hub (see ```pipeline.py```).
## Background Information
The image classification task tells us about the class assigned to an image, and the object detection task draws a bounding box around an object in an image. But what if we want to know about the shapes of the objects in an image? Segmentation models help us segment images and reveal those shapes. Segmentation has many variants, including panoptic segmentation, instance segmentation and semantic segmentation. This post is about hosting your Keras semantic segmentation models on the Hub.
Semantic segmentation models classify pixels, meaning they assign a class (for example, cat or dog) to each pixel. The output of such a model looks like the following.

We need to get the best prediction for every pixel.

This is still not readable. We have to split this into a binary mask per class and turn each mask into a readable format by encoding it as base64. We return a list of dicts; each dictionary holds the label itself, the base64 code and a score (semantic segmentation models don't return a score, so we return 1.0 in this case). You can find the full implementation in ```pipeline.py```.

Now that you know the output expected from the model, you can host your Keras segmentation models (and other semantic segmentation models) in a similar fashion. Try it yourself and host your segmentation models!

|
{"license": "cc0-1.0", "library_name": "generic", "tags": ["image-segmentation", "generic"], "dataset": ["oxfort-iit pets"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-1.jpg", "example_title": "Kedis"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-2.jpg", "example_title": "Cat in a Crate"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-3.jpg", "example_title": "Two Cats Chilling"}]}
|
keras-io/semantic-segmentation
| null |
[
"generic",
"tf",
"image-segmentation",
"license:cc0-1.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#generic #tf #image-segmentation #license-cc0-1.0 #has_space #region-us
|
## Keras semantic segmentation models on the Hub!
Full credits go to François Chollet.
This repository contains the model from this notebook on segmenting pets using U-net-like architecture. We've changed the inference part to enable segmentation widget on the Hub. (see )
## Background Information
The image classification task tells us about the class assigned to an image, and the object detection task draws a bounding box around an object in an image. But what if we want to know about the shapes of the objects in an image? Segmentation models help us segment images and reveal those shapes. Segmentation has many variants, including panoptic segmentation, instance segmentation and semantic segmentation. This post is about hosting your Keras semantic segmentation models on the Hub.
Semantic segmentation models classify pixels, meaning they assign a class (for example, cat or dog) to each pixel. The output of such a model looks like the following.
!Raw Output
We need to get the best prediction for every pixel.
!Mask
This is still not readable. We have to split this into a binary mask per class and turn each mask into a readable format by encoding it as base64. We return a list of dicts; each dictionary holds the label itself, the base64 code and a score (semantic segmentation models don't return a score, so we return 1.0 in this case). You can find the full implementation in .
!Binary Mask
Now that you know the output expected from the model, you can host your Keras segmentation models (and other semantic segmentation models) in a similar fashion. Try it yourself and host your segmentation models!
!Segmented Cat
|
[
"## Keras semantic segmentation models on the Hub! \nFull credits go to François Chollet.\n\nThis repository contains the model from this notebook on segmenting pets using U-net-like architecture. We've changed the inference part to enable segmentation widget on the Hub. (see )",
"## Background Information \n\nImage classification task tells us about a class assigned to an image, and object detection task creates a boundary box on an object in an image. But what if we want to know about the shape of the image? Segmentation models helps us segment images and reveal their shapes. It has many variants, including, panoptic segmentation, instance segmentation and semantic segmentation.This post is on hosting your Keras semantic segmentation models on Hub.\nSemantic segmentation models classify pixels, meaning, they assign a class (can be cat or dog) to each pixel. The output of a model looks like following.\n!Raw Output\nWe need to get the best prediction for every pixel.\n!Mask\nThis is still not readable. We have to convert this into different binary masks for each class and convert to a readable format by converting each mask into base64. We will return a list of dicts, and for each dictionary, we have the label itself, the base64 code and a score (semantic segmentation models don't return a score, so we have to return 1.0 for this case). You can find the full implementation in .\n!Binary Mask\nNow that you know the expected output by the model, you can host your Keras segmentation models (and other semantic segmentation models) in the similar fashion. Try it yourself and host your segmentation models!\n!Segmented Cat"
] |
[
"TAGS\n#generic #tf #image-segmentation #license-cc0-1.0 #has_space #region-us \n",
"## Keras semantic segmentation models on the Hub! \nFull credits go to François Chollet.\n\nThis repository contains the model from this notebook on segmenting pets using U-net-like architecture. We've changed the inference part to enable segmentation widget on the Hub. (see )",
"## Background Information \n\nImage classification task tells us about a class assigned to an image, and object detection task creates a boundary box on an object in an image. But what if we want to know about the shape of the image? Segmentation models helps us segment images and reveal their shapes. It has many variants, including, panoptic segmentation, instance segmentation and semantic segmentation.This post is on hosting your Keras semantic segmentation models on Hub.\nSemantic segmentation models classify pixels, meaning, they assign a class (can be cat or dog) to each pixel. The output of a model looks like following.\n!Raw Output\nWe need to get the best prediction for every pixel.\n!Mask\nThis is still not readable. We have to convert this into different binary masks for each class and convert to a readable format by converting each mask into base64. We will return a list of dicts, and for each dictionary, we have the label itself, the base64 code and a score (semantic segmentation models don't return a score, so we have to return 1.0 for this case). You can find the full implementation in .\n!Binary Mask\nNow that you know the expected output by the model, you can host your Keras segmentation models (and other semantic segmentation models) in the similar fashion. Try it yourself and host your segmentation models!\n!Segmented Cat"
] |
image-classification
|
keras
|
# Semi-supervised image classification using contrastive pretraining with SimCLR
## Description
This is a simple image classification model trained with **Semi-supervised image classification using contrastive pretraining with SimCLR**
The training procedure was done as seen in the example on <a href='https://keras.io/examples/vision/semisupervised_simclr/' target='_blank'>**keras.io**</a> by András Béres.
The model was **trained on STL-10**, which includes ten classes: airplane, bird, car, cat, deer, dog, horse, monkey, ship, truck.
## Metrics
There is a public W&B dashboard available <a href='https://wandb.ai/johko-cel/semi-supervised-contrastive-learning-simclr'>here</a> which compares metrics such as accuracy across a baseline supervised model, a purely unsupervised (pretrained) model, and the supervised model fine-tuned on top of the unsupervised one.
## Background
(by András Béres on <a href='https://keras.io/examples/vision/semisupervised_simclr/' target='_blank'>**keras.io**</a> )
Semi-supervised learning is a machine learning paradigm that deals with partially labeled datasets. When applying deep learning in the real world, one usually has to gather a large dataset to make it work well. However, while the cost of labeling scales linearly with the dataset size (labeling each example takes a constant time), model performance only scales sublinearly with it. This means that labeling more and more samples becomes less and less cost-efficient, while gathering unlabeled data is generally cheap, as it is usually readily available in large quantities.
Semi-supervised learning addresses this problem by requiring only a partially labeled dataset, and by being label-efficient: it utilizes the unlabeled examples for learning as well.
In this example, I pretrained an encoder with contrastive learning on the STL-10 semi-supervised dataset using no labels at all, and then fine-tuned it using only its labeled subset.
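The contrastive pretraining objective can be sketched as an InfoNCE-style loss over two augmented views of the same images; the temperature value below is an assumption, and the example's actual implementation differs in detail.
```python
# Two views of the same image should have similar projections; every other image
# in the batch serves as a negative.
import tensorflow as tf

def contrastive_loss(proj_1, proj_2, temperature=0.1):
    # proj_1, proj_2: (batch, dim) projections of two augmented views.
    proj_1 = tf.math.l2_normalize(proj_1, axis=1)
    proj_2 = tf.math.l2_normalize(proj_2, axis=1)
    similarities = tf.matmul(proj_1, proj_2, transpose_b=True) / temperature
    labels = tf.range(tf.shape(proj_1)[0])               # positives on the diagonal
    loss_1_2 = tf.keras.losses.sparse_categorical_crossentropy(
        labels, similarities, from_logits=True)
    loss_2_1 = tf.keras.losses.sparse_categorical_crossentropy(
        labels, tf.transpose(similarities), from_logits=True)
    return tf.reduce_mean(loss_1_2 + loss_2_1) / 2.0
```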
|
{"license": "apache-2.0", "library_name": "keras", "tags": ["image-classification"], "datasets": ["STL-10"]}
|
keras-io/semi-supervised-classification-simclr
| null |
[
"keras",
"image-classification",
"dataset:STL-10",
"license:apache-2.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#keras #image-classification #dataset-STL-10 #license-apache-2.0 #has_space #region-us
|
# Semi-supervised image classification using contrastive pretraining with SimCLR
## Description
This is a simple image classification model trained with Semi-supervised image classification using contrastive pretraining with SimCLR
The training procedure was done as seen in the example on <a href='URL target='_blank'>URL</a> by András Béres.
The model was trained on STL-10, which includes ten classes: airplane, bird, car, cat, deer, dog, horse, monkey, ship, truck.
## Metrics
There is a public W&B dashboard available <a href='URL which illustrates the difference in different metrics such as accuracy of a baseline supervised trained model, a purely unsupervised model (pretrain) and the supervised finetuned model based on the unsupervised.
## Background
(by András Béres on <a href='URL target='_blank'>URL</a> )
Semi-supervised learning is a machine learning paradigm that deals with partially labeled datasets. When applying deep learning in the real world, one usually has to gather a large dataset to make it work well. However, while the cost of labeling scales linearly with the dataset size (labeling each example takes a constant time), model performance only scales sublinearly with it. This means that labeling more and more samples becomes less and less cost-efficient, while gathering unlabeled data is generally cheap, as it is usually readily available in large quantities.
Semi-supervised learning offers to solve this problem by only requiring a partially labeled dataset, and by being label-efficient by utilizing the unlabeled examples for learning as well.
In this example, I pretrained an encoder with contrastive learning on the STL-10 semi-supervised dataset using no labels at all, and then fine-tuned it using only its labeled subset.
|
[
"# Semi-supervised image classification using contrastive pretraining with SimCLR",
"## Description\n\nThis is a simple image classification model trained with Semi-supervised image classification using contrastive pretraining with SimCLR\nThe training procedure was done as seen in the example on <a href='URL target='_blank'>URL</a> by András Béres.\n\nThe model was trained on STL-10, which includes ten classes: airplane, bird, car, cat, deer, dog, horse, monkey, ship, truck.",
"## Metrics\nThere is a public W&B dashboard available <a href='URL which illustrates the difference in different metrics such as accuracy of a baseline supervised trained model, a purely unsupervised model (pretrain) and the supervised finetuned model based on the unsupervised.",
"## Background\n(by András Béres on <a href='URL target='_blank'>URL</a> )\nSemi-supervised learning is a machine learning paradigm that deals with partially labeled datasets. When applying deep learning in the real world, one usually has to gather a large dataset to make it work well. However, while the cost of labeling scales linearly with the dataset size (labeling each example takes a constant time), model performance only scales sublinearly with it. This means that labeling more and more samples becomes less and less cost-efficient, while gathering unlabeled data is generally cheap, as it is usually readily available in large quantities.\n\nSemi-supervised learning offers to solve this problem by only requiring a partially labeled dataset, and by being label-efficient by utilizing the unlabeled examples for learning as well.\n\nIn this example, I pretrained an encoder with contrastive learning on the STL-10 semi-supervised dataset using no labels at all, and then fine-tuned it using only its labeled subset."
] |
[
"TAGS\n#keras #image-classification #dataset-STL-10 #license-apache-2.0 #has_space #region-us \n",
"# Semi-supervised image classification using contrastive pretraining with SimCLR",
"## Description\n\nThis is a simple image classification model trained with Semi-supervised image classification using contrastive pretraining with SimCLR\nThe training procedure was done as seen in the example on <a href='URL target='_blank'>URL</a> by András Béres.\n\nThe model was trained on STL-10, which includes ten classes: airplane, bird, car, cat, deer, dog, horse, monkey, ship, truck.",
"## Metrics\nThere is a public W&B dashboard available <a href='URL which illustrates the difference in different metrics such as accuracy of a baseline supervised trained model, a purely unsupervised model (pretrain) and the supervised finetuned model based on the unsupervised.",
"## Background\n(by András Béres on <a href='URL target='_blank'>URL</a> )\nSemi-supervised learning is a machine learning paradigm that deals with partially labeled datasets. When applying deep learning in the real world, one usually has to gather a large dataset to make it work well. However, while the cost of labeling scales linearly with the dataset size (labeling each example takes a constant time), model performance only scales sublinearly with it. This means that labeling more and more samples becomes less and less cost-efficient, while gathering unlabeled data is generally cheap, as it is usually readily available in large quantities.\n\nSemi-supervised learning offers to solve this problem by only requiring a partially labeled dataset, and by being label-efficient by utilizing the unlabeled examples for learning as well.\n\nIn this example, I pretrained an encoder with contrastive learning on the STL-10 semi-supervised dataset using no labels at all, and then fine-tuned it using only its labeled subset."
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# keras-io/sentiment-analysis
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6865
- Validation Loss: 0.7002
- Train Accuracy: 0.4908
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6865 | 0.6975 | 0.4908 | 0 |
| 0.6865 | 0.6973 | 0.4908 | 1 |
| 0.6865 | 0.6976 | 0.4908 | 2 |
| 0.6865 | 0.6975 | 0.4908 | 3 |
| 0.6865 | 0.7002 | 0.4908 | 4 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.7.0
- Datasets 1.18.2
- Tokenizers 0.11.0
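A possible way to run this checkpoint for inference with the transformers library is sketched below; the tokenizer choice simply mirrors the distilbert-base-uncased base model named above.
```python
# Inference sketch for this fine-tuned checkpoint (TensorFlow weights).
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = TFAutoModelForSequenceClassification.from_pretrained("keras-io/sentiment-analysis")

inputs = tokenizer("This movie was surprisingly good!", return_tensors="tf")
logits = model(**inputs).logits
print(logits.numpy())   # raw class scores; apply softmax for probabilities
```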
|
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "keras-io/sentiment-analysis", "results": []}]}
|
keras-io/sentiment-analysis
| null |
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #tf #tensorboard #distilbert #text-classification #generated_from_keras_callback #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
keras-io/sentiment-analysis
===========================
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.6865
* Validation Loss: 0.7002
* Train Accuracy: 0.4908
* Epoch: 4
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'learning\_rate': 1e-04, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.16.2
* TensorFlow 2.7.0
* Datasets 1.18.2
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': 1e-04, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.7.0\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #tf #tensorboard #distilbert #text-classification #generated_from_keras_callback #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': 1e-04, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.7.0\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] |
null |
keras
|
## Keras Implementation of Convolutional Neural Networks for MNIST 1️⃣2️⃣3️⃣
This repo contains the model and the notebook [on Simple MNIST convnet](https://keras.io/examples/vision/mnist_convnet/).
Full credits to: [François Chollet](https://github.com/fchollet)
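A hypothetical usage sketch (the `from_pretrained_keras` helper comes from the `huggingface_hub` library; preprocessing follows the usual MNIST convention of scaling pixels to [0, 1] and adding a channel axis):

```python
import numpy as np
from huggingface_hub import from_pretrained_keras
from tensorflow import keras

# Load the exported convnet from the Hub and classify a few MNIST test digits.
model = from_pretrained_keras("keras-io/simple-mnist-convnet")

(_, _), (x_test, y_test) = keras.datasets.mnist.load_data()
x_test = np.expand_dims(x_test.astype("float32") / 255.0, -1)  # (10000, 28, 28, 1)

predictions = model.predict(x_test[:5]).argmax(axis=-1)
print(predictions, y_test[:5])
```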
|
{"license": ["cc0-1.0"], "tags": ["lstm"]}
|
keras-io/simple-mnist-convnet
| null |
[
"keras",
"lstm",
"license:cc0-1.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#keras #lstm #license-cc0-1.0 #region-us
|
## Keras Implementation of Convolutional Neural Networks for MNIST 1️⃣2️⃣3️⃣
This repo contains the model and the notebook on Simple MNIST convnet.
Full credits to: François Chollet
|
[
"## Keras Implementation of Convolutional Neural Networks for MNIST 1️⃣2️⃣3️⃣\nThis repo contains the model and the notebook on Simple MNIST convnet.\n\nFull credits to: François Chollet"
] |
[
"TAGS\n#keras #lstm #license-cc0-1.0 #region-us \n",
"## Keras Implementation of Convolutional Neural Networks for MNIST 1️⃣2️⃣3️⃣\nThis repo contains the model and the notebook on Simple MNIST convnet.\n\nFull credits to: François Chollet"
] |
image-to-image
|
keras
|
## Notes
* This model is a trained version of the Keras Tutorial [Image Super Resolution](https://keras.io/examples/vision/super_resolution_sub_pixel/)
* The model has been trained on inputs of dimension 100x100 and outputs images of 300x300.
[Link to a pyimagesearch](https://www.pyimagesearch.com/2021/09/27/pixel-shuffle-super-resolution-with-tensorflow-keras-and-deep-learning/) tutorial I worked on, where we used residual blocks along with the efficient sub-pixel net.
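A hypothetical inference sketch (assumes the export follows the keras.io tutorial and operates on a single luminance channel; the input below is a random stand-in for a real image):

```python
import tensorflow as tf
from huggingface_hub import from_pretrained_keras

# Load the exported super-resolution model from the Hub.
model = from_pretrained_keras("keras-io/super-resolution")

# The network expects 100x100 inputs and upscales them 3x to 300x300.
low_res = tf.random.uniform((1, 100, 100, 1))  # stand-in for a preprocessed luminance image
high_res = model.predict(low_res)
print(high_res.shape)  # expected: (1, 300, 300, 1)
```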
|
{"license": "mit", "tags": ["image-to-image"]}
|
keras-io/super-resolution
| null |
[
"keras",
"image-to-image",
"license:mit",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#keras #image-to-image #license-mit #has_space #region-us
|
## Notes
* This model is a trained version of the Keras Tutorial Image Super Resolution
* The model has been trained on inputs of dimension 100x100 and outputs images of 300x300.
Link to a pyimagesearch tutorial I worked on, where we have used Residual blocks along with the Efficient sub pixel net.
|
[
"## Notes\n* This model is a trained version of the Keras Tutorial Image Super Resolution \n* The model has been trained on inputs of dimension 100x100 and outputs images of 300x300.\n\n\nLink to a pyimagesearch tutorial I worked on, where we have used Residual blocks along with the Efficient sub pixel net."
] |
[
"TAGS\n#keras #image-to-image #license-mit #has_space #region-us \n",
"## Notes\n* This model is a trained version of the Keras Tutorial Image Super Resolution \n* The model has been trained on inputs of dimension 100x100 and outputs images of 300x300.\n\n\nLink to a pyimagesearch tutorial I worked on, where we have used Residual blocks along with the Efficient sub pixel net."
] |
image-classification
|
keras
|
A classification model trained with <a href='https://arxiv.org/abs/2004.11362' target='_blank'>**Supervised Contrastive Learning**</a> (Prannay Khosla et al.).
The training procedure was done as seen in the example on <a href='https://keras.io/examples/vision/supervised-contrastive-learning/' target='_blank'>**keras.io**</a> by Khalid Salama.
The model was **trained on cifar10**, which includes ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.
The **test accuracy after 50 epochs** of the model with contrastive learning was **81.06%** (compared to 79.88% without contrastive learning).
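A hypothetical usage sketch (assumes the export accepts raw 32x32x3 CIFAR-10 images; any normalization applied at training time would need to be reproduced):

```python
from huggingface_hub import from_pretrained_keras
from tensorflow import keras

# Load the classifier and predict on a few CIFAR-10 test images (sketch only).
model = from_pretrained_keras("keras-io/supervised-contrastive-learning-cifar10")

(_, _), (x_test, y_test) = keras.datasets.cifar10.load_data()
probs = model.predict(x_test[:8].astype("float32"))
print(probs.argmax(axis=-1), y_test[:8].flatten())
```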
|
{"license": "apache-2.0", "library_name": "keras", "tags": ["image-classification"], "datasets": ["cifar10"]}
|
keras-io/supervised-contrastive-learning-cifar10
| null |
[
"keras",
"image-classification",
"dataset:cifar10",
"arxiv:2004.11362",
"license:apache-2.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2004.11362"
] |
[] |
TAGS
#keras #image-classification #dataset-cifar10 #arxiv-2004.11362 #license-apache-2.0 #has_space #region-us
|
A classification model trained with <a href='URL target='_blank'>Supervised Contrastive Learning</a> (Prannay Khosla et al.).
The training procedure was done as seen in the example on <a href='URL target='_blank'>URL</a> by Khalid Salama.
The model was trained on cifar10, which includes ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.
The test accuracy after 50 epochs of the model with contrastive learning was 81.06% (compared to 79.88% without contrastive learning).
|
[] |
[
"TAGS\n#keras #image-classification #dataset-cifar10 #arxiv-2004.11362 #license-apache-2.0 #has_space #region-us \n"
] |
image-classification
|
keras
|
## Image classification with Swin Transformers on the 🤗Hub!
Author: [Kelvin Idanwekhai](https://twitter.com/KelvinIdan).
[Paper](https://arxiv.org/abs/2103.14030) | [Keras Tutorial](https://keras.io/examples/vision/swin_transformers/)
Excerpt from the Tutorial:
Swin Transformer (Shifted Window Transformer) can serve as a general-purpose backbone for computer vision. Swin Transformer is a hierarchical Transformer whose representations are computed with shifted windows. The shifted window scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connections. This architecture has the flexibility to model information at various scales and has a linear computational complexity with respect to image size.
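To make the shifted-window idea more concrete, here is a minimal sketch of window partitioning, the operation that restricts self-attention to non-overlapping local windows (shapes are illustrative, not the model's actual configuration):

```python
import tensorflow as tf

def window_partition(x, window_size):
    """Split a feature map (batch, H, W, C) into non-overlapping windows of
    shape (num_windows * batch, window_size, window_size, C)."""
    b, h, w, c = x.shape
    x = tf.reshape(x, (b, h // window_size, window_size, w // window_size, window_size, c))
    x = tf.transpose(x, (0, 1, 3, 2, 4, 5))
    return tf.reshape(x, (-1, window_size, window_size, c))

windows = window_partition(tf.random.uniform((1, 8, 8, 96)), window_size=4)
print(windows.shape)  # (4, 4, 4, 96): four 4x4 windows; attention runs within each window
```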
## About The dataset
The dataset we are using here is [CIFAR-100](https://www.cs.toronto.edu/~kriz/cifar.html). It consists of 60000 32x32 color images in 100 classes, with 600 images per class: 500 training images and 100 testing images per class, giving 50000 training images and 10000 test images in total. The 100 classes are additionally grouped into 20 superclasses.
|
{"license": "cc0-1.0", "library_name": "keras", "tags": ["swin-transformers", "Keras", "image-classification"], "dataset": ["CIFAR-100"]}
|
keras-io/swin-transformers
| null |
[
"keras",
"swin-transformers",
"Keras",
"image-classification",
"arxiv:2103.14030",
"license:cc0-1.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2103.14030"
] |
[] |
TAGS
#keras #swin-transformers #Keras #image-classification #arxiv-2103.14030 #license-cc0-1.0 #has_space #region-us
|
## Image classification with Swin Transformers on the Hub!
Author: Kelvin Idanwekhai.
Paper | Keras Tutorial
Excerpt from the Tutorial:
Swin Transformer (Shifted Window Transformer) can serve as a general-purpose backbone for computer vision. Swin Transformer is a hierarchical Transformer whose representations are computed with shifted windows. The shifted window scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connections. This architecture has the flexibility to model information at various scales and has a linear computational complexity with respect to image size.
## About The dataset
The dataset we are using here is CIFAR-100. It consists of 60000 32x32 color images in 100 classes, with 600 images per class: 500 training images and 100 testing images per class, giving 50000 training images and 10000 test images in total. The 100 classes are additionally grouped into 20 superclasses.
|
[
"## Image classification with Swin Transformers on the Hub! \n\nAuthor: Kelvin Idanwekhai.\n\nPaper | Keras Tutorial\n\nExcerpt from the Tutorial:\n\nSwin Transformer (Shifted Window Transformer) can serve as a general-purpose backbone for computer vision. Swin Transformer is a hierarchical Transformer whose representations are computed with shifted windows. The shifted window scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connections. This architecture has the flexibility to model information at various scales and has a linear computational complexity with respect to image size.",
"## About The dataset\n\nThe dataset we are using here is called CIFAR-100. The CIFAR-10 dataset consists of 60000 32x32 color images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.\n\nThe dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class."
] |
[
"TAGS\n#keras #swin-transformers #Keras #image-classification #arxiv-2103.14030 #license-cc0-1.0 #has_space #region-us \n",
"## Image classification with Swin Transformers on the Hub! \n\nAuthor: Kelvin Idanwekhai.\n\nPaper | Keras Tutorial\n\nExcerpt from the Tutorial:\n\nSwin Transformer (Shifted Window Transformer) can serve as a general-purpose backbone for computer vision. Swin Transformer is a hierarchical Transformer whose representations are computed with shifted windows. The shifted window scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connections. This architecture has the flexibility to model information at various scales and has a linear computational complexity with respect to image size.",
"## About The dataset\n\nThe dataset we are using here is called CIFAR-100. The CIFAR-10 dataset consists of 60000 32x32 color images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.\n\nThe dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class."
] |
text-generation
|
keras
|
## Keras Implementation of Text generation with a miniature GPT
This repo contains the model and the notebook [to this Keras example on Text generation with a miniature GPT](https://keras.io/examples/generative/text_generation_with_miniature_gpt/).
Full credits to: [fchollet](https://twitter.com/fchollet)
## Background Information
This example demonstrates how to implement text generation with a miniature GPT model. The model consists of a single Transformer block with causal masking in its attention layer.
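To illustrate the causal masking mentioned above, the sketch below builds the lower-triangular attention mask that lets each position attend only to itself and earlier tokens (a minimal TensorFlow sketch, not the full attention layer):

```python
import tensorflow as tf

def causal_attention_mask(batch_size, n_dest, n_src, dtype):
    """1 where destination position i may attend to source position j (j <= i), else 0."""
    i = tf.range(n_dest)[:, None]
    j = tf.range(n_src)
    mask = tf.cast(i >= j - n_src + n_dest, dtype)
    return tf.tile(tf.reshape(mask, (1, n_dest, n_src)), (batch_size, 1, 1))

print(causal_attention_mask(1, 4, 4, tf.int32)[0].numpy())
# [[1 0 0 0]
#  [1 1 0 0]
#  [1 1 1 0]
#  [1 1 1 1]]
```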
## Datasets
The IMDB sentiment classification dataset is used for training. The model generates new movie reviews for a given prompt.
|
{"language": "en", "license": "gpl", "tags": ["gpt", "text-generation"], "widget": [{"text": "Once upon a time, "}]}
|
keras-io/text-generation-miniature-gpt
| null |
[
"keras",
"gpt2",
"gpt",
"text-generation",
"en",
"license:gpl",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#keras #gpt2 #gpt #text-generation #en #license-gpl #has_space #region-us
|
## Keras Implementation of Text generation with a miniature GPT
This repo contains the model and the notebook to this Keras example on Text generation with a miniature GPT.
Full credits to: fchollet
## Background Information
This example demonstrates how to implement text generation with a miniature GPT model. The model consists of a single Transformer block with causal masking in its attention layer.
## Datasets
IMDB sentiment classification dataset for training. The model generates new movie reviews for a given prompt.
|
[
"## Keras Implementation of Text generation with a miniature GPT\n\nThis repo contains the model and the notebook to this Keras example on Text generation with a miniature GPT.\n\nFull credits to: fchollet",
"## Background Information \nThis example demonstrates how to implement text generation with a miniature GPT model. The model consists of a single Transformer block with causal masking in its attention layer.",
"## Datasets\nIMDB sentiment classification dataset for training. The model generates new movie reviews for a given prompt."
] |
[
"TAGS\n#keras #gpt2 #gpt #text-generation #en #license-gpl #has_space #region-us \n",
"## Keras Implementation of Text generation with a miniature GPT\n\nThis repo contains the model and the notebook to this Keras example on Text generation with a miniature GPT.\n\nFull credits to: fchollet",
"## Background Information \nThis example demonstrates how to implement text generation with a miniature GPT model. The model consists of a single Transformer block with causal masking in its attention layer.",
"## Datasets\nIMDB sentiment classification dataset for training. The model generates new movie reviews for a given prompt."
] |
null |
keras
|
## Keras Implementation of time series anomaly detection using an Autoencoder ⌛
This repo contains the model and the notebook [for this time series anomaly detection implementation of Keras](https://keras.io/examples/timeseries/timeseries_anomaly_detection/).
Full credits to: [Pavithra Vijay](https://github.com/pavithrasv)
## Background Information
This notebook demonstrates how you can use a reconstruction convolutional autoencoder model to detect anomalies in timeseries data.
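A hypothetical sketch of the detection step: the autoencoder reconstructs each input window, and windows whose reconstruction error exceeds a threshold derived from the training data are flagged as anomalous (shapes and threshold choice are assumptions):

```python
import numpy as np
from huggingface_hub import from_pretrained_keras

# Load the exported autoencoder from the Hub.
model = from_pretrained_keras("keras-io/time-series-anomaly-detection-autoencoder")

def detect_anomalies(windows, threshold):
    """windows: float array of shape (num_windows, timesteps, 1)."""
    reconstructed = model.predict(windows)
    mae = np.mean(np.abs(reconstructed - windows), axis=(1, 2))
    return mae > threshold  # boolean mask of anomalous windows

# The threshold is typically the maximum (or a high percentile) of the
# reconstruction error observed on the training windows.
```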
|
{"license": ["cc0-1.0"], "tags": ["autoencoder", "time series", "anomaly detection"]}
|
keras-io/time-series-anomaly-detection-autoencoder
| null |
[
"keras",
"autoencoder",
"time series",
"anomaly detection",
"license:cc0-1.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#keras #autoencoder #time series #anomaly detection #license-cc0-1.0 #region-us
|
## Keras Implementation of time series anomaly detection using an Autoencoder ⌛
This repo contains the model and the notebook for this time series anomaly detection implementation of Keras.
Full credits to: Pavithra Vijay
## Background Information
This notebook demonstrates how you can use a reconstruction convolutional autoencoder model to detect anomalies in timeseries data.
|
[
"## Keras Implementation of time series anomaly detection using an Autoencoder ⌛\n\nThis repo contains the model and the notebook for this time series anomaly detection implementation of Keras.\n\nFull credits to: Pavithra Vijay",
"## Background Information\nThis notebook demonstrates how you can use a reconstruction convolutional autoencoder model to detect anomalies in timeseries data."
] |
[
"TAGS\n#keras #autoencoder #time series #anomaly detection #license-cc0-1.0 #region-us \n",
"## Keras Implementation of time series anomaly detection using an Autoencoder ⌛\n\nThis repo contains the model and the notebook for this time series anomaly detection implementation of Keras.\n\nFull credits to: Pavithra Vijay",
"## Background Information\nThis notebook demonstrates how you can use a reconstruction convolutional autoencoder model to detect anomalies in timeseries data."
] |
null |
keras
|
## Timeseries classification with a Transformer model on the 🤗Hub!
Full credits go to [Theodoros Ntakouris](https://github.com/ntakouris).
This repository contains the model from [this notebook on time-series classification using the attention mechanism](https://keras.io/examples/timeseries/timeseries_classification_transformer/).
The dataset we are using here is called [FordA](http://www.j-wichard.de/publications/FordPaper.pdf). The data comes from the UCR archive. The dataset contains 3601 training instances and another 1320 testing instances. Each timeseries corresponds to a measurement of engine noise captured by a motor sensor. For this task, the goal is to automatically detect the presence of a specific issue with the engine. The problem is a balanced binary classification task.
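A hypothetical inference sketch (FordA series are univariate with 500 timesteps, so inputs are assumed to have shape `(batch, 500, 1)`; random data stands in for real sensor readings):

```python
import numpy as np
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("keras-io/timeseries_transformer_classification")

# Stand-in batch of four univariate series with 500 timesteps each.
x = np.random.randn(4, 500, 1).astype("float32")
print(model.predict(x))  # class scores for the binary task
```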
|
{"license": "cc0-1.0", "library_name": "keras", "tags": ["time-series"], "dataset": ["FordA"]}
|
keras-io/timeseries_transformer_classification
| null |
[
"keras",
"time-series",
"license:cc0-1.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#keras #time-series #license-cc0-1.0 #has_space #region-us
|
## Timeseries classification with a Transformer model on the Hub!
Full credits go to Theodoros Ntakouris.
This repository contains the model from this notebook on time-series classification using the attention mechanism.
The dataset we are using here is called FordA. The data comes from the UCR archive. The dataset contains 3601 training instances and another 1320 testing instances. Each timeseries corresponds to a measurement of engine noise captured by a motor sensor. For this task, the goal is to automatically detect the presence of a specific issue with the engine. The problem is a balanced binary classification task.
|
[
"## Timeseries classification with a Transformer model on the Hub! \nFull credits go to Theodoros Ntakouris.\n\nThis repository contains the model from this notebook on time-series classification using the attention mechanism. \n\nThe dataset we are using here is called FordA. The data comes from the UCR archive. The dataset contains 3601 training instances and another 1320 testing instances. Each timeseries corresponds to a measurement of engine noise captured by a motor sensor. For this task, the goal is to automatically detect the presence of a specific issue with the engine. The problem is a balanced binary classification task."
] |
[
"TAGS\n#keras #time-series #license-cc0-1.0 #has_space #region-us \n",
"## Timeseries classification with a Transformer model on the Hub! \nFull credits go to Theodoros Ntakouris.\n\nThis repository contains the model from this notebook on time-series classification using the attention mechanism. \n\nThe dataset we are using here is called FordA. The data comes from the UCR archive. The dataset contains 3601 training instances and another 1320 testing instances. Each timeseries corresponds to a measurement of engine noise captured by a motor sensor. For this task, the goal is to automatically detect the presence of a specific issue with the engine. The problem is a balanced binary classification task."
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Question Answering with Hugging Face Transformers and Keras 🤗❤️
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on SQuAD dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9300
- Validation Loss: 1.1437
- Epoch: 1
## Model description
Question answering model based on distilbert-base-cased, trained with 🤗Transformers + ❤️Keras.
## Intended uses & limitations
This model was trained for the Question Answering tutorial on Keras.io.
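A minimal usage sketch with the 🤗 `pipeline` API (the context string comes from this card's widget example; `framework="tf"` is passed because the published checkpoint is a TensorFlow one):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="keras-io/transformers-qa", framework="tf")

result = qa(
    question="Who is Keras designed for?",
    context=(
        "Keras is an API designed for human beings, not machines. "
        "Keras follows best practices for reducing cognitive load."
    ),
)
print(result["answer"], result["score"])
```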
## Training and evaluation data
It is trained on the [SQuAD](https://huggingface.co/datasets/squad) question answering dataset. ⁉️
## Training procedure
Find the notebook in Keras Examples [here](https://keras.io/examples/nlp/question_answering/). ❤️
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.5145 | 1.1500 | 0 |
| 0.9300 | 1.1437 | 1 |
### Framework versions
- Transformers 4.16.0.dev0
- TensorFlow 2.6.0
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "datasets": ["squad"], "metrics": ["f1"], "widget": [{"context": "Keras is an API designed for human beings, not machines. Keras follows best practices for reducing cognitive load: it offers consistent & simple APIs, it minimizes the number of user actions required for common use cases, and it provides clear and actionable feedback upon user error."}], "base_model": "distilbert-base-cased", "model-index": [{"name": "transformers-qa", "results": []}]}
|
keras-io/transformers-qa
| null |
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"dataset:squad",
"base_model:distilbert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #tf #distilbert #question-answering #generated_from_keras_callback #dataset-squad #base_model-distilbert-base-cased #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
Question Answering with Hugging Face Transformers and Keras ️
=============================================================
This model is a fine-tuned version of distilbert-base-cased on SQuAD dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.9300
* Validation Loss: 1.1437
* Epoch: 1
Model description
-----------------
Question answering model based on distilbert-base-cased, trained with Transformers + Keras.
Intended uses & limitations
---------------------------
This model is trained for Question Answering tutorial for URL.
Training and evaluation data
----------------------------
It is trained on SQuAD question answering dataset. ⁉️
Training procedure
------------------
Find the notebook in Keras Examples here.
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'learning\_rate': 5e-05, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
* training\_precision: mixed\_float16
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* TensorFlow 2.6.0
* Datasets 1.16.2.dev0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': 5e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n* training\\_precision: mixed\\_float16",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* TensorFlow 2.6.0\n* Datasets 1.16.2.dev0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #tf #distilbert #question-answering #generated_from_keras_callback #dataset-squad #base_model-distilbert-base-cased #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': 5e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n* training\\_precision: mixed\\_float16",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* TensorFlow 2.6.0\n* Datasets 1.16.2.dev0\n* Tokenizers 0.10.3"
] |
video-classification
|
keras
|
# 🎬 Video Classification with a CNN-RNN Architecture
**Author:** Sayak Paul
**Date created:** 2021/05/28
**Last modified:** 2021/06/05
**Description:** Training a video classifier with transfer learning and a recurrent model on the UCF101 dataset.
**Keras documentation [link](https://keras.io/examples/vision/video_classification/)**
This example demonstrates video classification, an important use-case with applications in recommendations, security, and so on. We will be using the UCF101 dataset to build our video classifier. The dataset consists of videos categorized into different actions, like cricket shot, punching, biking, etc. This dataset is commonly used to build action recognizers, which are an application of video classification.
A video consists of an ordered sequence of frames. Each frame contains spatial information, and the sequence of those frames contains temporal information. To model both of these aspects, we use a hybrid architecture that consists of convolutions (for spatial processing) as well as recurrent layers (for temporal processing). Specifically, we'll use a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) consisting of GRU layers. This kind of hybrid architecture is popularly known as a CNN-RNN.
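An illustrative sketch of that hybrid structure (not the exact trained network; the backbone, layer sizes, and frame count here are assumptions): a frozen CNN extracts per-frame features, and GRU layers model the frame sequence.

```python
from tensorflow import keras

NUM_FRAMES, IMG_SIZE, NUM_CLASSES = 20, 224, 101  # assumed values

# Frozen CNN backbone used as a per-frame feature extractor (transfer learning).
cnn = keras.applications.InceptionV3(
    include_top=False, pooling="avg", input_shape=(IMG_SIZE, IMG_SIZE, 3)
)
cnn.trainable = False

frames = keras.Input((NUM_FRAMES, IMG_SIZE, IMG_SIZE, 3))
features = keras.layers.TimeDistributed(cnn)(frames)       # (batch, frames, feature_dim)
x = keras.layers.GRU(16, return_sequences=True)(features)  # temporal modelling
x = keras.layers.GRU(8)(x)
outputs = keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = keras.Model(frames, outputs)
model.summary()
```

Sample predictions from the trained classifier on a test video are shown below: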
```bash
Test video path: v_Punch_g03_c02.avi
Punch: 56.50%
TennisSwing: 29.97%
PlayingCello: 6.47%
ShavingBeard: 3.69%
CricketShot: 3.38%
```

|
{"library_name": "keras", "tags": ["computer-vision", "video-classification"]}
|
keras-io/video-classification-cnn-rnn
| null |
[
"keras",
"tensorboard",
"computer-vision",
"video-classification",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#keras #tensorboard #computer-vision #video-classification #has_space #region-us
|
# Video Classification with a CNN-RNN Architecture
Author: Sayak Paul
Date created: 2021/05/28
Last modified: 2021/06/05
Description: Training a video classifier with transfer learning and a recurrent model on the UCF101 dataset.
Keras documentation link
This example demonstrates video classification, an important use-case with applications in recommendations, security, and so on. We will be using the UCF101 dataset to build our video classifier. The dataset consists of videos categorized into different actions, like cricket shot, punching, biking, etc. This dataset is commonly used to build action recognizers, which are an application of video classification.
A video consists of an ordered sequence of frames. Each frame contains spatial information, and the sequence of those frames contains temporal information. To model both of these aspects, we use a hybrid architecture that consists of convolutions (for spatial processing) as well as recurrent layers (for temporal processing). Specifically, we'll use a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) consisting of GRU layers. This kind of hybrid architecture is popularly known as a CNN-RNN.
!Example video from dataset
|
[
"# Video Classification with a CNN-RNN Architecture\n\nAuthor: Sayak Paul \nDate created: 2021/05/28 \nLast modified: 2021/06/05 \nDescription: Training a video classifier with transfer learning and a recurrent model on the UCF101 dataset. \nKeras documentation link \n\nThis example demonstrates video classification, an important use-case with applications in recommendations, security, and so on. We will be using the UCF101 dataset to build our video classifier. The dataset consists of videos categorized into different actions, like cricket shot, punching, biking, etc. This dataset is commonly used to build action recognizers, which are an application of video classification.\n\nA video consists of an ordered sequence of frames. Each frame contains spatial information, and the sequence of those frames contains temporal information. To model both of these aspects, we use a hybrid architecture that consists of convolutions (for spatial processing) as well as recurrent layers (for temporal processing). Specifically, we'll use a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) consisting of GRU layers. This kind of hybrid architecture is popularly known as a CNN-RNN.\n\n \n!Example video from dataset"
] |
[
"TAGS\n#keras #tensorboard #computer-vision #video-classification #has_space #region-us \n",
"# Video Classification with a CNN-RNN Architecture\n\nAuthor: Sayak Paul \nDate created: 2021/05/28 \nLast modified: 2021/06/05 \nDescription: Training a video classifier with transfer learning and a recurrent model on the UCF101 dataset. \nKeras documentation link \n\nThis example demonstrates video classification, an important use-case with applications in recommendations, security, and so on. We will be using the UCF101 dataset to build our video classifier. The dataset consists of videos categorized into different actions, like cricket shot, punching, biking, etc. This dataset is commonly used to build action recognizers, which are an application of video classification.\n\nA video consists of an ordered sequence of frames. Each frame contains spatial information, and the sequence of those frames contains temporal information. To model both of these aspects, we use a hybrid architecture that consists of convolutions (for spatial processing) as well as recurrent layers (for temporal processing). Specifically, we'll use a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) consisting of GRU layers. This kind of hybrid architecture is popularly known as a CNN-RNN.\n\n \n!Example video from dataset"
] |
null |
keras
|
## Keras Implementation of Video Vision Transformer on medmnist
This repo contains the model [to this Keras example on Video Vision Transformer](https://keras.io/examples/vision/vivit/).
## Background Information
This example implements [ViViT: A Video Vision Transformer](https://arxiv.org/abs/2103.15691) by Arnab et al., a pure Transformer-based model for video classification. The authors propose a novel embedding scheme and a number of Transformer variants to model video clips.
## Datasets
We use the [MedMNIST v2: A Large-Scale Lightweight Benchmark for 2D and 3D Biomedical Image Classification](https://medmnist.com/) dataset.
## Training Parameters
```
# DATA
DATASET_NAME = "organmnist3d"
BATCH_SIZE = 32
AUTO = tf.data.AUTOTUNE
INPUT_SHAPE = (28, 28, 28, 1)
NUM_CLASSES = 11
# OPTIMIZER
LEARNING_RATE = 1e-4
WEIGHT_DECAY = 1e-5
# TRAINING
EPOCHS = 80
# TUBELET EMBEDDING
PATCH_SIZE = (8, 8, 8)
NUM_PATCHES = (INPUT_SHAPE[0] // PATCH_SIZE[0]) ** 2
# ViViT ARCHITECTURE
LAYER_NORM_EPS = 1e-6
PROJECTION_DIM = 128
NUM_HEADS = 8
NUM_LAYERS = 8
```
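A hypothetical loading sketch (the input shape follows `INPUT_SHAPE` above; the dummy volume stands in for a real medmnist scan):

```python
import numpy as np
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("keras-io/video-vision-transformer")

# One dummy 28x28x28 single-channel volume, matching INPUT_SHAPE above.
volume = np.random.rand(1, 28, 28, 28, 1).astype("float32")
print(model.predict(volume).shape)  # expected: (1, NUM_CLASSES)
```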
|
{"license": "apache-2.0", "library_name": "keras", "title": "Video Vision Transformer on medmnist", "emoji": "\ud83e\uddd1\u200d\u2695\ufe0f", "colorFrom": "red", "colorTo": "green", "sdk": "gradio", "app_file": "app.py", "pinned": false}
|
keras-io/video-vision-transformer
| null |
[
"keras",
"arxiv:2103.15691",
"license:apache-2.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2103.15691"
] |
[] |
TAGS
#keras #arxiv-2103.15691 #license-apache-2.0 #has_space #region-us
|
## Keras Implementation of Video Vision Transformer on medmnist
This repo contains the model to this Keras example on Video Vision Transformer.
## Background Information
This example implements ViViT: A Video Vision Transformer by Arnab et al., a pure Transformer-based model for video classification. The authors propose a novel embedding scheme and a number of Transformer variants to model video clips.
## Datasets
We use the MedMNIST v2: A Large-Scale Lightweight Benchmark for 2D and 3D Biomedical Image Classification dataset.
## Training Parameters
|
[
"## Keras Implementation of Video Vision Transformer on medmnist\n\nThis repo contains the model to this Keras example on Video Vision Transformer.",
"## Background Information \nThis example implements ViViT: A Video Vision Transformer by Arnab et al., a pure Transformer-based model for video classification. The authors propose a novel embedding scheme and a number of Transformer variants to model video clips.",
"## Datasets\nWe use the MedMNIST v2: A Large-Scale Lightweight Benchmark for 2D and 3D Biomedical Image Classification dataset.",
"## Training Parameters"
] |
[
"TAGS\n#keras #arxiv-2103.15691 #license-apache-2.0 #has_space #region-us \n",
"## Keras Implementation of Video Vision Transformer on medmnist\n\nThis repo contains the model to this Keras example on Video Vision Transformer.",
"## Background Information \nThis example implements ViViT: A Video Vision Transformer by Arnab et al., a pure Transformer-based model for video classification. The authors propose a novel embedding scheme and a number of Transformer variants to model video clips.",
"## Datasets\nWe use the MedMNIST v2: A Large-Scale Lightweight Benchmark for 2D and 3D Biomedical Image Classification dataset.",
"## Training Parameters"
] |
image-classification
|
keras
|
# Train a Vision Transformer on small datasets
Author: [Aritra Roy Gosthipaty](https://twitter.com/ariG23498)
[Keras Blog](https://keras.io/examples/vision/vit_small_ds/) | [Colab Notebook](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/vision/ipynb/vit_small_ds.ipynb)
In the academic paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929), the authors mention that Vision Transformers (ViT) are data-hungry. Therefore, pretraining a ViT on a large-sized dataset like JFT300M and fine-tuning it on medium-sized datasets (like ImageNet) is the only way to beat state-of-the-art Convolutional Neural Network models.
The self-attention layer of ViT lacks locality inductive bias (the notion that image pixels are locally correlated and that their correlation maps are translation-invariant). This is the reason why ViTs need more data. On the other hand, CNNs look at images through spatial sliding windows, which helps them get better results with smaller datasets.
In the academic paper [Vision Transformer for Small-Size Datasets](https://arxiv.org/abs/2112.13492v1), the authors set out to tackle the problem of locality inductive bias in ViTs.
The main ideas are:
- Shifted Patch Tokenization
- Locality Self Attention
# Use the pre-trained model
The model is pre-trained on the CIFAR100 dataset with the following hyperparameters:
```python
# DATA
NUM_CLASSES = 100
INPUT_SHAPE = (32, 32, 3)
BUFFER_SIZE = 512
BATCH_SIZE = 256
# AUGMENTATION
IMAGE_SIZE = 72
PATCH_SIZE = 6
NUM_PATCHES = (IMAGE_SIZE // PATCH_SIZE) ** 2
# OPTIMIZER
LEARNING_RATE = 0.001
WEIGHT_DECAY = 0.0001
# TRAINING
EPOCHS = 50
# ARCHITECTURE
LAYER_NORM_EPS = 1e-6
TRANSFORMER_LAYERS = 8
PROJECTION_DIM = 64
NUM_HEADS = 4
TRANSFORMER_UNITS = [
PROJECTION_DIM * 2,
PROJECTION_DIM,
]
MLP_HEAD_UNITS = [
2048,
1024
]
```
I have used the `AdamW` optimizer with a cosine decay learning rate schedule. You can find the entire implementation in the Keras blog post.
To use the pretrained model:
```python
loaded_model = from_pretrained_keras("keras-io/vit-small-ds")
_, accuracy, top_5_accuracy = loaded_model.evaluate(test_ds)
print(f"Test accuracy: {round(accuracy * 100, 2)}%")
print(f"Test top 5 accuracy: {round(top_5_accuracy * 100, 2)}%")
```
For an in-depth understanding of the model uploading and downloading process, one can refer to this [colab notebook](https://colab.research.google.com/drive/1nCMhefqySzG2p8wyXhmeAX5urddQXt49?usp=sharing).
Important: The data augmentation pipeline is excluded from the model. TensorFlow `2.7` has a serialization issue with the augmentation pipeline. You can follow [this GitHub issue](https://github.com/huggingface/huggingface_hub/issues/593) for more updates. To send images through the model, one needs to use the `tf.data` and `map` API to apply the augmentation.
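A hypothetical sketch of that `tf.data` + `map` pattern (the preprocessing layers below are stand-ins and must be replaced with the exact augmentation/normalization used at training time; assumes TensorFlow 2.7+ for the non-experimental preprocessing layers):

```python
import tensorflow as tf
from tensorflow import keras
from huggingface_hub import from_pretrained_keras

loaded_model = from_pretrained_keras("keras-io/vit-small-ds")

# Stand-in preprocessing: resize to IMAGE_SIZE and rescale to [0, 1].
preprocessing = keras.Sequential([
    keras.layers.Resizing(72, 72),
    keras.layers.Rescaling(1.0 / 255),
])

(_, _), (x_test, y_test) = keras.datasets.cifar100.load_data()
test_ds = (
    tf.data.Dataset.from_tensor_slices((x_test, y_test))
    .batch(256)
    .map(lambda x, y: (preprocessing(x), y), num_parallel_calls=tf.data.AUTOTUNE)
)
_, accuracy, top_5_accuracy = loaded_model.evaluate(test_ds)
```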
|
{"license": "apache-2.0", "tags": ["image-classification", "keras"]}
|
keras-io/vit-small-ds
| null |
[
"keras",
"image-classification",
"arxiv:2010.11929",
"arxiv:2112.13492",
"license:apache-2.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.11929",
"2112.13492"
] |
[] |
TAGS
#keras #image-classification #arxiv-2010.11929 #arxiv-2112.13492 #license-apache-2.0 #region-us
|
# Train a Vision Transformer on small datasets
Author: Aritra Roy Gosthipaty
Keras Blog | Colab Notebook
In the academic paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, the authors mention that Vision Transformers (ViT) are data-hungry. Therefore, pretraining a ViT on a large-sized dataset like JFT300M and fine-tuning it on medium-sized datasets (like ImageNet) is the only way to beat state-of-the-art Convolutional Neural Network models.
The self-attention layer of ViT lacks locality inductive bias (the notion that image pixels are locally correlated and that their correlation maps are translation-invariant). This is the reason why ViTs need more data. On the other hand, CNNs look at images through spatial sliding windows, which helps them get better results with smaller datasets.
In the academic paper Vision Transformer for Small-Size Datasets, the authors set out to tackle the problem of locality inductive bias in ViTs.
The main ideas are:
- Shifted Patch Tokenization
- Locality Self Attention
# Use the pre-trained model
The model is pre-trained on the CIFAR100 dataset with the following hyperparameters:
I have used the 'AdamW' optimizer with cosine decay learning schedule. You can find the entire implementation in the keras blog post.
To use the pretrained model:
For an in-depth understanding of the model uploading and downloading process, one can refer to this colab notebook.
Important: The data augmentation pipeline is excluded from the model. TensorFlow '2.7' has a serialization issue with the augmentation pipeline. You can follow this GitHub issue for more updates. To send images through the model, one needs to use the 'tf.data' and 'map' API to apply the augmentation.
|
[
"# Train a Vision Transformer on small datasets\n\nAuthor: Aritra Roy Gosthipaty\n\nKeras Blog | Colab Notebook\n\nIn the academic paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, the authors mention that Vision Transformers (ViT) are data-hungry. Therefore, pretraining a ViT on a large-sized dataset like JFT300M and fine-tuning it on medium-sized datasets (like ImageNet) is the only way to beat state-of-the-art Convolutional Neural Network models.\n\nThe self-attention layer of ViT lacks locality inductive bias (the notion that image pixels are locally correlated and that their correlation maps are translation-invariant). This is the reason why ViTs need more data. On the other hand, CNNs look at images through spatial sliding windows, which helps them get better results with smaller datasets.\n\nIn the academic paper Vision Transformer for Small-Size Datasets, the authors set out to tackle the problem of locality inductive bias in ViTs.\n\nThe main ideas are:\n\n- Shifted Patch Tokenization\n- Locality Self Attention",
"# Use the pre-trained model\n\nThe model is pre-trained on the CIFAR100 dataset with the following hyperparameters:\n\nI have used the 'AdamW' optimizer with cosine decay learning schedule. You can find the entire implementation in the keras blog post.\n\nTo use the pretrained model:\n\n\nFor an indepth understanding of the model uploading and downloading process one can refer to this colab notebook.\n\nImportant: The data augmentation pipeline is excluded from the model. TensorFlow '2.7' has a weird issue of serializaiton with augmentation pipeline. You can follow this GitHub issue for more updates. To send images through the model, one needs to make use of the 'URL' and 'map' API to map the augmentation."
] |
[
"TAGS\n#keras #image-classification #arxiv-2010.11929 #arxiv-2112.13492 #license-apache-2.0 #region-us \n",
"# Train a Vision Transformer on small datasets\n\nAuthor: Aritra Roy Gosthipaty\n\nKeras Blog | Colab Notebook\n\nIn the academic paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, the authors mention that Vision Transformers (ViT) are data-hungry. Therefore, pretraining a ViT on a large-sized dataset like JFT300M and fine-tuning it on medium-sized datasets (like ImageNet) is the only way to beat state-of-the-art Convolutional Neural Network models.\n\nThe self-attention layer of ViT lacks locality inductive bias (the notion that image pixels are locally correlated and that their correlation maps are translation-invariant). This is the reason why ViTs need more data. On the other hand, CNNs look at images through spatial sliding windows, which helps them get better results with smaller datasets.\n\nIn the academic paper Vision Transformer for Small-Size Datasets, the authors set out to tackle the problem of locality inductive bias in ViTs.\n\nThe main ideas are:\n\n- Shifted Patch Tokenization\n- Locality Self Attention",
"# Use the pre-trained model\n\nThe model is pre-trained on the CIFAR100 dataset with the following hyperparameters:\n\nI have used the 'AdamW' optimizer with cosine decay learning schedule. You can find the entire implementation in the keras blog post.\n\nTo use the pretrained model:\n\n\nFor an indepth understanding of the model uploading and downloading process one can refer to this colab notebook.\n\nImportant: The data augmentation pipeline is excluded from the model. TensorFlow '2.7' has a weird issue of serializaiton with augmentation pipeline. You can follow this GitHub issue for more updates. To send images through the model, one needs to make use of the 'URL' and 'map' API to map the augmentation."
] |
image-classification
|
keras
|
# Train a Vision Transformer on small datasets
Author: [Jónathan Heras](https://twitter.com/_Jonathan_Heras)
[Keras Blog](https://keras.io/examples/vision/vit_small_ds/) | [Colab Notebook](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/vision/ipynb/vit_small_ds.ipynb)
In the academic paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929), the authors mention that Vision Transformers (ViT) are data-hungry. Therefore, pretraining a ViT on a large-sized dataset like JFT300M and fine-tuning it on medium-sized datasets (like ImageNet) is the only way to beat state-of-the-art Convolutional Neural Network models.
The self-attention layer of ViT lacks locality inductive bias (the notion that image pixels are locally correlated and that their correlation maps are translation-invariant). This is the reason why ViTs need more data. On the other hand, CNNs look at images through spatial sliding windows, which helps them get better results with smaller datasets.
In the academic paper [Vision Transformer for Small-Size Datasets](https://arxiv.org/abs/2112.13492v1), the authors set out to tackle the problem of locality inductive bias in ViTs.
The main ideas are:
- Shifted Patch Tokenization
- Locality Self Attention
# Use the pre-trained model
The model is pre-trained on the CIFAR100 dataset with the following hyperparameters:
```python
# DATA
NUM_CLASSES = 100
INPUT_SHAPE = (32, 32, 3)
BUFFER_SIZE = 512
BATCH_SIZE = 256
# AUGMENTATION
IMAGE_SIZE = 72
PATCH_SIZE = 6
NUM_PATCHES = (IMAGE_SIZE // PATCH_SIZE) ** 2
# OPTIMIZER
LEARNING_RATE = 0.001
WEIGHT_DECAY = 0.0001
# TRAINING
EPOCHS = 50
# ARCHITECTURE
LAYER_NORM_EPS = 1e-6
TRANSFORMER_LAYERS = 8
PROJECTION_DIM = 64
NUM_HEADS = 4
TRANSFORMER_UNITS = [
PROJECTION_DIM * 2,
PROJECTION_DIM,
]
MLP_HEAD_UNITS = [
2048,
1024
]
```
I have used the `AdamW` optimizer with a cosine decay learning rate schedule. You can find the entire implementation in the Keras blog post.
To use the pretrained model:
```python
loaded_model = from_pretrained_keras("keras-io/vit_small_ds_v2")
```
|
{"license": "apache-2.0", "tags": ["image-classification", "keras"]}
|
keras-io/vit_small_ds_v2
| null |
[
"keras",
"image-classification",
"arxiv:2010.11929",
"arxiv:2112.13492",
"license:apache-2.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.11929",
"2112.13492"
] |
[] |
TAGS
#keras #image-classification #arxiv-2010.11929 #arxiv-2112.13492 #license-apache-2.0 #has_space #region-us
|
# Train a Vision Transformer on small datasets
Author: Jónathan Heras
Keras Blog | Colab Notebook
In the academic paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, the authors mention that Vision Transformers (ViT) are data-hungry. Therefore, pretraining a ViT on a large-sized dataset like JFT300M and fine-tuning it on medium-sized datasets (like ImageNet) is the only way to beat state-of-the-art Convolutional Neural Network models.
The self-attention layer of ViT lacks locality inductive bias (the notion that image pixels are locally correlated and that their correlation maps are translation-invariant). This is the reason why ViTs need more data. On the other hand, CNNs look at images through spatial sliding windows, which helps them get better results with smaller datasets.
In the academic paper Vision Transformer for Small-Size Datasets, the authors set out to tackle the problem of locality inductive bias in ViTs.
The main ideas are:
- Shifted Patch Tokenization
- Locality Self Attention
# Use the pre-trained model
The model is pre-trained on the CIFAR100 dataset with the following hyperparameters:
I have used the 'AdamW' optimizer with cosine decay learning schedule. You can find the entire implementation in the keras blog post.
To use the pretrained model:
|
[
"# Train a Vision Transformer on small datasets\n\nAuthor: Jónathan Heras\n\nKeras Blog | Colab Notebook\n\nIn the academic paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, the authors mention that Vision Transformers (ViT) are data-hungry. Therefore, pretraining a ViT on a large-sized dataset like JFT300M and fine-tuning it on medium-sized datasets (like ImageNet) is the only way to beat state-of-the-art Convolutional Neural Network models.\n\nThe self-attention layer of ViT lacks locality inductive bias (the notion that image pixels are locally correlated and that their correlation maps are translation-invariant). This is the reason why ViTs need more data. On the other hand, CNNs look at images through spatial sliding windows, which helps them get better results with smaller datasets.\n\nIn the academic paper Vision Transformer for Small-Size Datasets, the authors set out to tackle the problem of locality inductive bias in ViTs.\n\nThe main ideas are:\n\n- Shifted Patch Tokenization\n- Locality Self Attention",
"# Use the pre-trained model\n\nThe model is pre-trained on the CIFAR100 dataset with the following hyperparameters:\n\nI have used the 'AdamW' optimizer with cosine decay learning schedule. You can find the entire implementation in the keras blog post.\n\nTo use the pretrained model:"
] |
[
"TAGS\n#keras #image-classification #arxiv-2010.11929 #arxiv-2112.13492 #license-apache-2.0 #has_space #region-us \n",
"# Train a Vision Transformer on small datasets\n\nAuthor: Jónathan Heras\n\nKeras Blog | Colab Notebook\n\nIn the academic paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, the authors mention that Vision Transformers (ViT) are data-hungry. Therefore, pretraining a ViT on a large-sized dataset like JFT300M and fine-tuning it on medium-sized datasets (like ImageNet) is the only way to beat state-of-the-art Convolutional Neural Network models.\n\nThe self-attention layer of ViT lacks locality inductive bias (the notion that image pixels are locally correlated and that their correlation maps are translation-invariant). This is the reason why ViTs need more data. On the other hand, CNNs look at images through spatial sliding windows, which helps them get better results with smaller datasets.\n\nIn the academic paper Vision Transformer for Small-Size Datasets, the authors set out to tackle the problem of locality inductive bias in ViTs.\n\nThe main ideas are:\n\n- Shifted Patch Tokenization\n- Locality Self Attention",
"# Use the pre-trained model\n\nThe model is pre-trained on the CIFAR100 dataset with the following hyperparameters:\n\nI have used the 'AdamW' optimizer with cosine decay learning schedule. You can find the entire implementation in the keras blog post.\n\nTo use the pretrained model:"
] |
fill-mask
|
transformers
|
### Overview
This is a slightly smaller model trained on the [OSCAR](https://oscar-corpus.com/) Sinhala dedup dataset. As Sinhala is a low-resource language, only a handful of models have been trained for it, so this model is a good starting point for further downstream tasks.
## Model Specification
The model chosen for training is [Roberta](https://arxiv.org/abs/1907.11692) with the following specifications:
1. vocab_size=52000
2. max_position_embeddings=514
3. num_attention_heads=12
4. num_hidden_layers=6
5. type_vocab_size=1
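For reference, the specification above corresponds to the following `RobertaConfig` (a sketch; all remaining fields keep their library defaults):

```python
from transformers import RobertaConfig, RobertaForMaskedLM

config = RobertaConfig(
    vocab_size=52000,
    max_position_embeddings=514,
    num_attention_heads=12,
    num_hidden_layers=6,
    type_vocab_size=1,
)
model = RobertaForMaskedLM(config)  # randomly initialised model with this architecture
```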
## How to Use
You can use this model directly with a pipeline for masked language modeling:
```py
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
model = AutoModelWithLMHead.from_pretrained("keshan/SinhalaBERTo")
tokenizer = AutoTokenizer.from_pretrained("keshan/SinhalaBERTo")
fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)
fill_mask("මම ගෙදර <mask>.")
```
|
{"language": "si", "tags": ["SinhalaBERTo", "Sinhala", "roberta"], "datasets": ["oscar"]}
|
keshan/SinhalaBERTo
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"roberta",
"fill-mask",
"SinhalaBERTo",
"Sinhala",
"si",
"dataset:oscar",
"arxiv:1907.11692",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.11692"
] |
[
"si"
] |
TAGS
#transformers #pytorch #tf #jax #safetensors #roberta #fill-mask #SinhalaBERTo #Sinhala #si #dataset-oscar #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
### Overview
This is a slightly smaller model trained on the OSCAR Sinhala dedup dataset. As Sinhala is a low-resource language, only a handful of models have been trained for it, so this model is a good starting point for further downstream tasks.
## Model Specification
The model chosen for training is Roberta with the following specifications:
1. vocab_size=52000
2. max_position_embeddings=514
3. num_attention_heads=12
4. num_hidden_layers=6
5. type_vocab_size=1
## How to Use
You can use this model directly with a pipeline for masked language modeling:
|
[
"### Overview\n\nThis is a slightly smaller model trained on OSCAR Sinhala dedup dataset. As Sinhala is one of those low resource languages, there are only a handful of models been trained. So, this would be a great place to start training for more downstream tasks.",
"## Model Specification\n\n\nThe model chosen for training is Roberta with the following specifications:\n 1. vocab_size=52000\n 2. max_position_embeddings=514\n 3. num_attention_heads=12\n 4. num_hidden_layers=6\n 5. type_vocab_size=1",
"## How to Use\nYou can use this model directly with a pipeline for masked language modeling:"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #safetensors #roberta #fill-mask #SinhalaBERTo #Sinhala #si #dataset-oscar #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Overview\n\nThis is a slightly smaller model trained on OSCAR Sinhala dedup dataset. As Sinhala is one of those low resource languages, there are only a handful of models been trained. So, this would be a great place to start training for more downstream tasks.",
"## Model Specification\n\n\nThe model chosen for training is Roberta with the following specifications:\n 1. vocab_size=52000\n 2. max_position_embeddings=514\n 3. num_attention_heads=12\n 4. num_hidden_layers=6\n 5. type_vocab_size=1",
"## How to Use\nYou can use this model directly with a pipeline for masked language modeling:"
] |
text-generation
|
transformers
|
This is a finetuned version of keshan/sinhala-gpt2, trained on newswire articles (~12MB of data).
- Num examples = 8395
- Batch size = 8
It achieved a perplexity of 3.15.
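A minimal usage sketch with the 🤗 `pipeline` API (mirrors the base sinhala-gpt2 example; generation quality is not guaranteed):

```python
from transformers import pipeline

generator = pipeline('text-generation', model='keshan/sinhala-gpt2-newswire')
generator("මම", max_length=50, num_return_sequences=3)
```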
|
{"language": "si", "tags": ["sinhala", "gpt2"], "pipeline_tag": "text-generation", "widget": [{"text": "\u0db8\u0db8"}]}
|
keshan/sinhala-gpt2-newswire
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"sinhala",
"si",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"si"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #sinhala #si #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
This is a finetuned version of keshan/sinhala-gpt2, trained on newswire articles (~12MB of data).
- Num examples = 8395
- Batch size = 8
It achieved a perplexity of 3.15.
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #sinhala #si #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
### Overview
This is a smaller GPT2 model trained on the [MC4](https://github.com/allenai/allennlp/discussions/5056) Sinhala dataset. As Sinhala is a low-resource language, only a handful of models have been trained for it, so this model is a good starting point for further downstream tasks.
## Model Specification
The model chosen for training is GPT2 with the following specifications:
1. vocab_size=50257
2. n_embd=768
3. n_head=12
4. n_layer=12
5. n_positions=1024
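For reference, the specification above corresponds to the following `GPT2Config` (a sketch; all remaining fields keep their library defaults):

```python
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config(
    vocab_size=50257,
    n_embd=768,
    n_head=12,
    n_layer=12,
    n_positions=1024,
)
model = GPT2LMHeadModel(config)  # randomly initialised model with this architecture
```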
## How to Use
You can use this model directly with a pipeline for causal language modeling:
```py
from transformers import pipeline
generator = pipeline('text-generation', model='keshan/sinhala-gpt2')
generator("මම", max_length=50, num_return_sequences=5)
```
|
{"language": "si", "tags": ["Sinhala", "text-generation", "gpt2"], "datasets": ["mc4"]}
|
keshan/sinhala-gpt2
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"tensorboard",
"gpt2",
"feature-extraction",
"Sinhala",
"text-generation",
"si",
"dataset:mc4",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"si"
] |
TAGS
#transformers #pytorch #tf #jax #tensorboard #gpt2 #feature-extraction #Sinhala #text-generation #si #dataset-mc4 #endpoints_compatible #text-generation-inference #region-us
|
### Overview
This is a smaller GPT2 model trained on the MC4 Sinhala dataset. As Sinhala is a low-resource language, only a handful of models have been trained for it, so this model is a good starting point for further downstream tasks.
## Model Specification
The model chosen for training is GPT2 with the following specifications:
1. vocab_size=50257
2. n_embd=768
3. n_head=12
4. n_layer=12
5. n_positions=1024
## How to Use
You can use this model directly with a pipeline for causal language modeling:
|
[
"### Overview\n\nThis is a smaller GPT2 model trained on MC4 Sinhala dataset. As Sinhala is one of those low resource languages, there are only a handful of models been trained. So, this would be a great place to start training for more downstream tasks.",
"## Model Specification\n\n\nThe model chosen for training is GPT2 with the following specifications:\n 1. vocab_size=50257\n 2. n_embd=768\n 3. n_head=12\n 4. n_layer=12\n 5. n_positions=1024",
"## How to Use\nYou can use this model directly with a pipeline for casual language modeling:"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #tensorboard #gpt2 #feature-extraction #Sinhala #text-generation #si #dataset-mc4 #endpoints_compatible #text-generation-inference #region-us \n",
"### Overview\n\nThis is a smaller GPT2 model trained on MC4 Sinhala dataset. As Sinhala is one of those low resource languages, there are only a handful of models been trained. So, this would be a great place to start training for more downstream tasks.",
"## Model Specification\n\n\nThe model chosen for training is GPT2 with the following specifications:\n 1. vocab_size=50257\n 2. n_embd=768\n 3. n_head=12\n 4. n_layer=12\n 5. n_positions=1024",
"## How to Use\nYou can use this model directly with a pipeline for casual language modeling:"
] |
fill-mask
|
transformers
|
# Sinhala roberta on mc4 dataset
|
{"language": "si", "license": "cc-by-4.0", "tags": ["sinhala", "roberta"], "pipeline_tag": "fill-mask", "widget": [{"text": "\u0db8\u0db8 \u0dc3\u0dd2\u0d82\u0dc4\u0dbd \u0db7\u0dcf\u0dc2\u0dcf\u0dc0 <mask>"}]}
|
keshan/sinhala-roberta-mc4
| null |
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"sinhala",
"si",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"si"
] |
TAGS
#transformers #pytorch #jax #tensorboard #roberta #fill-mask #sinhala #si #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Sinhala roberta on mc4 dataset
|
[
"# Sinhala roberta on mc4 dataset"
] |
[
"TAGS\n#transformers #pytorch #jax #tensorboard #roberta #fill-mask #sinhala #si #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Sinhala roberta on mc4 dataset"
] |
fill-mask
|
transformers
|
### Overview
This is a slightly smaller model trained on the [OSCAR](https://oscar-corpus.com/) Sinhala dedup dataset. As Sinhala is a low-resource language, only a handful of models have been trained for it, so this model is a good starting point for further downstream tasks.
## Model Specification
The model chosen for training is [Roberta](https://arxiv.org/abs/1907.11692) with the following specifications:
1. vocab_size=50265
2. max_position_embeddings=514
3. num_attention_heads=12
4. num_hidden_layers=12
5. type_vocab_size=1
## How to Use
You can use this model directly with a pipeline for masked language modeling:
```py
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
model = AutoModelWithLMHead.from_pretrained("keshan/sinhala-roberta-oscar")
tokenizer = AutoTokenizer.from_pretrained("keshan/sinhala-roberta-oscar")
fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)
fill_mask("මම ගෙදර <mask>.")
```
|
{"language": "si", "tags": ["oscar", "Sinhala", "roberta", "fill-mask"], "datasets": ["oscar"], "widget": [{"text": "\u0db8\u0db8 \u0dc3\u0dd2\u0d82\u0dc4\u0dbd \u0db7\u0dcf\u0dc2\u0dcf\u0dc0 <mask>"}]}
|
keshan/sinhala-roberta-oscar
| null |
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"oscar",
"Sinhala",
"si",
"dataset:oscar",
"arxiv:1907.11692",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.11692"
] |
[
"si"
] |
TAGS
#transformers #pytorch #jax #tensorboard #roberta #fill-mask #oscar #Sinhala #si #dataset-oscar #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #region-us
|
### Overview
This is a slightly smaller model trained on the OSCAR Sinhala dedup dataset. As Sinhala is a low-resource language, only a handful of models have been trained for it, so this would be a great starting point for further training on downstream tasks.
## Model Specification
The model chosen for training is Roberta with the following specifications:
1. vocab_size=50265
2. max_position_embeddings=514
3. num_attention_heads=12
4. num_hidden_layers=12
5. type_vocab_size=1
## How to Use
You can use this model directly with a pipeline for masked language modeling:
|
[
"### Overview\n\nThis is a slightly smaller model trained on OSCAR Sinhala dedup dataset. As Sinhala is one of those low resource languages, there are only a handful of models been trained. So, this would be a great place to start training for more downstream tasks.",
"## Model Specification\n\n\nThe model chosen for training is Roberta with the following specifications:\n 1. vocab_size=50265\n 2. max_position_embeddings=514\n 3. num_attention_heads=12\n 4. num_hidden_layers=12\n 5. type_vocab_size=1",
"## How to Use\nYou can use this model directly with a pipeline for masked language modeling:"
] |
[
"TAGS\n#transformers #pytorch #jax #tensorboard #roberta #fill-mask #oscar #Sinhala #si #dataset-oscar #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Overview\n\nThis is a slightly smaller model trained on OSCAR Sinhala dedup dataset. As Sinhala is one of those low resource languages, there are only a handful of models been trained. So, this would be a great place to start training for more downstream tasks.",
"## Model Specification\n\n\nThe model chosen for training is Roberta with the following specifications:\n 1. vocab_size=50265\n 2. max_position_embeddings=514\n 3. num_attention_heads=12\n 4. num_hidden_layers=12\n 5. type_vocab_size=1",
"## How to Use\nYou can use this model directly with a pipeline for masked language modeling:"
] |
null |
transformers
|
# kevinrobinson/perturbations_table_quickstart model card
This is just for UI smoke testing, and shouldn't be used for anything else.
It's built from https://github.com/PAIR-code/lit/blob/main/lit_nlp/examples/quickstart_sst_demo.py.
|
{}
|
kevinrobinson/perturbations_table_quickstart_sst
| null |
[
"transformers",
"tf",
"bert",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #tf #bert #endpoints_compatible #region-us
|
# kevinrobinson/perturbations_table_quickstart model card
This is just for UI smoke testing, and shouldn't be used for anything else.
It's built from URL
|
[
"# kevinrobinson/perturbations_table_quickstart model card\n\nThis is just for UI smoke testing, and shouldn't be used for anything else.\n\nIt's built from URL"
] |
[
"TAGS\n#transformers #tf #bert #endpoints_compatible #region-us \n",
"# kevinrobinson/perturbations_table_quickstart model card\n\nThis is just for UI smoke testing, and shouldn't be used for anything else.\n\nIt's built from URL"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chinese-bert-wwm-ext-finetuned-cola
This model is a fine-tuned version of [hfl/chinese-bert-wwm-ext](https://huggingface.co/hfl/chinese-bert-wwm-ext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5747
- Matthews Correlation: 0.4085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
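For reference, the settings above correspond roughly to a `TrainingArguments` configuration like the sketch below (values not listed above, such as `output_dir`, are placeholders rather than values from the original run):
```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; unspecified arguments are placeholders.
training_args = TrainingArguments(
    output_dir="chinese-bert-wwm-ext-finetuned-cola",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```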
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:-----:|:---------------:|:--------------------:|
| 0.5824 | 1.0 | 66375 | 0.5746 | 0.4083 |
| 0.5824 | 2.0 | 66376 | 0.5747 | 0.4085 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.7.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["matthews_correlation"], "model-index": [{"name": "chinese-bert-wwm-ext-finetuned-cola", "results": []}]}
|
kevinzyz/chinese-bert-wwm-ext-finetuned-cola
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
chinese-bert-wwm-ext-finetuned-cola
===================================
This model is a fine-tuned version of hfl/chinese-bert-wwm-ext on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5747
* Matthews Correlation: 0.4085
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.7.1
* Datasets 1.15.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.7.1\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.7.1\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
image-to-text
|
transformers
|
# Manga OCR
Optical character recognition for Japanese text, with the main focus being Japanese manga.
It uses [Vision Encoder Decoder](https://huggingface.co/docs/transformers/model_doc/vision-encoder-decoder) framework.
Manga OCR can be used as a general purpose printed Japanese OCR, but its main goal was to provide a high quality
text recognition, robust against various scenarios specific to manga:
- both vertical and horizontal text
- text with furigana
- text overlaid on images
- wide variety of fonts and font styles
- low quality images
Code is available [here](https://github.com/kha-white/manga_ocr).
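As a minimal usage sketch (assuming the `manga-ocr` package from the linked repository is installed; the image path is a placeholder):
```python
from manga_ocr import MangaOcr

# Minimal sketch using the manga-ocr package, which loads this checkpoint under the hood.
mocr = MangaOcr()
text = mocr("panel.png")  # placeholder path to an image containing Japanese text
print(text)
```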
|
{"language": "ja", "license": "apache-2.0", "tags": ["image-to-text"], "datasets": ["manga109s"]}
|
kha-white/manga-ocr-base
| null |
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-to-text",
"ja",
"dataset:manga109s",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ja"
] |
TAGS
#transformers #pytorch #vision-encoder-decoder #image-to-text #ja #dataset-manga109s #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
# Manga OCR
Optical character recognition for Japanese text, with the main focus being Japanese manga.
It uses Vision Encoder Decoder framework.
Manga OCR can be used as a general purpose printed Japanese OCR, but its main goal was to provide a high quality
text recognition, robust against various scenarios specific to manga:
- both vertical and horizontal text
- text with furigana
- text overlaid on images
- wide variety of fonts and font styles
- low quality images
Code is available here.
|
[
"# Manga OCR\n\nOptical character recognition for Japanese text, with the main focus being Japanese manga.\n\nIt uses Vision Encoder Decoder framework.\n\nManga OCR can be used as a general purpose printed Japanese OCR, but its main goal was to provide a high quality\ntext recognition, robust against various scenarios specific to manga:\n- both vertical and horizontal text\n- text with furigana\n- text overlaid on images\n- wide variety of fonts and font styles\n- low quality images\n\nCode is available here."
] |
[
"TAGS\n#transformers #pytorch #vision-encoder-decoder #image-to-text #ja #dataset-manga109s #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"# Manga OCR\n\nOptical character recognition for Japanese text, with the main focus being Japanese manga.\n\nIt uses Vision Encoder Decoder framework.\n\nManga OCR can be used as a general purpose printed Japanese OCR, but its main goal was to provide a high quality\ntext recognition, robust against various scenarios specific to manga:\n- both vertical and horizontal text\n- text with furigana\n- text overlaid on images\n- wide variety of fonts and font styles\n- low quality images\n\nCode is available here."
] |
text-classification
|
transformers
|
# DeBERTa-v3-large-mnli
## Model description
This model was trained on the Multi-Genre Natural Language Inference (MultiNLI) dataset, which consists of 433k sentence pairs annotated with textual entailment information.
The model used is [DeBERTa-v3-large from Microsoft](https://huggingface.co/microsoft/deberta-large). The v3 DeBERTa outperforms BERT and RoBERTa on the majority of NLU benchmarks by using disentangled attention and an enhanced mask decoder. More information about the original model is available in the [official repository](https://github.com/microsoft/DeBERTa) and the [paper](https://arxiv.org/abs/2006.03654).
## Intended uses & limitations
#### How to use the model
```python
premise = "The Movie have been criticized for the story. However, I think it is a great movie."
hypothesis = "I liked the movie."
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device)) # device = "cuda:0" or "cpu"
prediction = torch.softmax(output["logits"][0], -1)
label_names = ["entailment", "neutral", "contradiction"]
print(label_names[prediction.argmax(0).tolist()])
```
### Training data
This model was trained on the MultiNLI dataset, which consists of 392K sentence pairs annotated with textual entailment information.
### Training procedure
DeBERTa-v3-large-mnli was trained using the Hugging Face trainer with the following hyperparameters.
```
train_args = TrainingArguments(
learning_rate=2e-5,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
num_train_epochs=3,
warmup_ratio=0.06,
weight_decay=0.1,
fp16=True,
seed=42,
)
```
### BibTeX entry and citation info
Please cite the [DeBERTa paper](https://arxiv.org/abs/2006.03654) and [MultiNLI Dataset](https://cims.nyu.edu/~sbowman/multinli/paper.pdf) if you use this model and include this Huggingface hub.
|
{"language": ["en"], "tags": ["text-classification", "zero-shot-classification"], "metrics": ["accuracy"], "widget": [{"text": "The Movie have been criticized for the story. However, I think it is a great movie. [SEP] I liked the movie."}]}
|
khalidalt/DeBERTa-v3-large-mnli
| null |
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"zero-shot-classification",
"en",
"arxiv:2006.03654",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2006.03654"
] |
[
"en"
] |
TAGS
#transformers #pytorch #deberta-v2 #text-classification #zero-shot-classification #en #arxiv-2006.03654 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# DeBERTa-v3-large-mnli
## Model description
This model was trained on the Multi-Genre Natural Language Inference (MultiNLI) dataset, which consists of 433k sentence pairs annotated with textual entailment information.
The model used is DeBERTa-v3-large from Microsoft. The v3 DeBERTa outperforms BERT and RoBERTa on the majority of NLU benchmarks by using disentangled attention and an enhanced mask decoder. More information about the original model is available in the official repository and the paper
## Intended uses & limitations
#### How to use the model
### Training data
This model was trained on the MultiNLI dataset, which consists of 392K sentence pairs annotated with textual entailment information.
### Training procedure
DeBERTa-v3-large-mnli was trained using the Hugging Face trainer with the following hyperparameters.
### BibTeX entry and citation info
Please cite the DeBERTa paper and MultiNLI Dataset if you use this model and include this Huggingface hub.
|
[
"# DeBERTa-v3-large-mnli",
"## Model description\n\nThis model was trained on the Multi-Genre Natural Language Inference ( MultiNLI ) dataset, which consists of 433k sentence pairs textual entailment information. \n\nThe model used is DeBERTa-v3-large from Microsoft. The v3 DeBERTa outperforms the result of Bert and RoBERTa in majority of NLU benchmarks by using disentangled attention and enhanced mask decoder. More information about the orginal model is on official repository and the paper",
"## Intended uses & limitations",
"#### How to use the model",
"### Training data\n\nThis model was trained on the MultiNLI dataset, which consists of 392K sentence textual entitlement.",
"### Training procedure\n\nDeBERTa-v3-large-mnli was trained using the Hugging Face trainer with the following hyperparameters.",
"### BibTeX entry and citation info\n\nPlease cite the DeBERTa paper and MultiNLI Dataset if you use this model and include this Huggingface hub."
] |
[
"TAGS\n#transformers #pytorch #deberta-v2 #text-classification #zero-shot-classification #en #arxiv-2006.03654 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# DeBERTa-v3-large-mnli",
"## Model description\n\nThis model was trained on the Multi-Genre Natural Language Inference ( MultiNLI ) dataset, which consists of 433k sentence pairs textual entailment information. \n\nThe model used is DeBERTa-v3-large from Microsoft. The v3 DeBERTa outperforms the result of Bert and RoBERTa in majority of NLU benchmarks by using disentangled attention and enhanced mask decoder. More information about the orginal model is on official repository and the paper",
"## Intended uses & limitations",
"#### How to use the model",
"### Training data\n\nThis model was trained on the MultiNLI dataset, which consists of 392K sentence textual entitlement.",
"### Training procedure\n\nDeBERTa-v3-large-mnli was trained using the Hugging Face trainer with the following hyperparameters.",
"### BibTeX entry and citation info\n\nPlease cite the DeBERTa paper and MultiNLI Dataset if you use this model and include this Huggingface hub."
] |
text-generation
|
transformers
|
<!--
---
tags:
- generated_from_trainer
datasets:
- null
model_index:
- name: bengali-lyricist-gpt2
results:
- task:
name: Causal Language Modeling
type: text-generation
---
-->
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bengali-lyricist-gpt2
This model is a fine-tuned version of [flax-community/gpt2-bengali](https://huggingface.co/flax-community/gpt2-bengali) on the [Bengali Song Lyrics](https://www.kaggle.com/shakirulhasan/bangla-song-lyrics) dataset from Kaggle.
It achieves the following results on the evaluation set:
- Loss: 2.1199
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 284 | 2.0302 |
| 1.9991 | 2.0 | 568 | 2.0079 |
| 1.9991 | 3.0 | 852 | 1.9956 |
| 1.9135 | 4.0 | 1136 | 1.9885 |
| 1.9135 | 5.0 | 1420 | 1.9840 |
| 1.8561 | 6.0 | 1704 | 1.9831 |
| 1.8561 | 7.0 | 1988 | 1.9828 |
| 1.8094 | 8.0 | 2272 | 1.9827 |
| 1.7663 | 9.0 | 2556 | 1.9868 |
| 1.7663 | 10.0 | 2840 | 1.9902 |
| 1.7279 | 11.0 | 3124 | 1.9961 |
| 1.7279 | 12.0 | 3408 | 2.0023 |
| 1.6887 | 13.0 | 3692 | 2.0092 |
| 1.6887 | 14.0 | 3976 | 2.0162 |
| 1.6546 | 15.0 | 4260 | 2.0225 |
| 1.6217 | 16.0 | 4544 | 2.0315 |
| 1.6217 | 17.0 | 4828 | 2.0410 |
| 1.5953 | 18.0 | 5112 | 2.0474 |
| 1.5953 | 19.0 | 5396 | 2.0587 |
| 1.5648 | 20.0 | 5680 | 2.0679 |
| 1.5648 | 21.0 | 5964 | 2.0745 |
| 1.5413 | 22.0 | 6248 | 2.0836 |
| 1.5238 | 23.0 | 6532 | 2.0890 |
| 1.5238 | 24.0 | 6816 | 2.0969 |
| 1.5043 | 25.0 | 7100 | 2.1035 |
| 1.5043 | 26.0 | 7384 | 2.1091 |
| 1.4936 | 27.0 | 7668 | 2.1135 |
| 1.4936 | 28.0 | 7952 | 2.1172 |
| 1.4822 | 29.0 | 8236 | 2.1186 |
| 1.4783 | 30.0 | 8520 | 2.1199 |
### Framework versions
- Transformers 4.9.0.dev0
- Pytorch 1.9.0+cu102
- Datasets 1.9.1.dev0
- Tokenizers 0.10.3
|
{"language": "bn", "tags": ["text generation", "bengali", "gpt2", "bangla", "causal-lm"], "widget": [{"text": "\u099c\u09c0\u09ac\u09a8\u09c7\u09b0 \u09ae\u09be\u09a8\u09c7 "}], "pipeline_tag": "text-generation"}
|
khalidsaifullaah/bengali-lyricist-gpt2
| null |
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"text generation",
"bengali",
"bangla",
"causal-lm",
"bn",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"bn"
] |
TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #text generation #bengali #bangla #causal-lm #bn #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
bengali-lyricist-gpt2
=====================
This model is a fine-tuned version of flax-community/gpt2-bengali on the Bengali Song Lyrics dataset from Kaggle.
It achieves the following results on the evaluation set:
* Loss: 2.1199
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 30
### Training results
### Framework versions
* Transformers 4.9.0.dev0
* Pytorch 1.9.0+cu102
* Datasets 1.9.1.dev0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.0.dev0\n* Pytorch 1.9.0+cu102\n* Datasets 1.9.1.dev0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #text generation #bengali #bangla #causal-lm #bn #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.0.dev0\n* Pytorch 1.9.0+cu102\n* Datasets 1.9.1.dev0\n* Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
# keytotext

Idea is to build a model which will take keywords as inputs and generate sentences as outputs.
### Keytotext is powered by Huggingface 🤗
[](https://pypi.org/project/keytotext/)
[](https://pepy.tech/project/keytotext)
[](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb)
[](https://share.streamlit.io/gagan3012/keytotext/UI/app.py)
## Model:
Keytotext is based on the Amazing T5 Model:
- `k2t`: [Model](https://huggingface.co/gagan3012/k2t)
- `k2t-tiny`: [Model](https://huggingface.co/gagan3012/k2t-tiny)
- `k2t-base`: [Model](https://huggingface.co/gagan3012/k2t-base)
Training Notebooks can be found in the [`Training Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Training%20Notebooks) Folder
## Usage:
Example usage: [](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb)
Example Notebooks can be found in the [`Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Examples) Folder
```
pip install keytotext
```
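A minimal usage sketch of the `keytotext` API (the model name, keywords, and call signature below are assumptions based on the library's README, not taken from this card):
```python
from keytotext import pipeline

# Minimal sketch (assumed API): load a k2t checkpoint and turn a list of keywords into a sentence.
nlp = pipeline("k2t-base")
print(nlp(["India", "wedding", "food"]))
```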

## UI:
UI: [](https://share.streamlit.io/gagan3012/keytotext/UI/app.py)
```
pip install streamlit-tags
```
This uses a custom streamlit component built by me: [GitHub](https://github.com/gagan3012/streamlit-tags)

|
{"language": "en", "license": "mit", "tags": ["keytotext", "k2t", "Keywords to Sentences"], "datasets": ["WebNLG", "Dart"], "metrics": ["NLG"], "thumbnail": "Keywords to Sentences"}
|
khanglam7012/t5-small
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"keytotext",
"k2t",
"Keywords to Sentences",
"en",
"dataset:WebNLG",
"dataset:Dart",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #keytotext #k2t #Keywords to Sentences #en #dataset-WebNLG #dataset-Dart #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# keytotext
!keytotext (1)
Idea is to build a model which will take keywords as inputs and generate sentences as outputs.
### Keytotext is powered by Huggingface

## UI:
UI: \nIdea is to build a model which will take keywords as inputs and generate sentences as outputs.",
"### Keytotext is powered by Huggingface \n",
"## UI:\nUI: \nIdea is to build a model which will take keywords as inputs and generate sentences as outputs.",
"### Keytotext is powered by Huggingface \n",
"## UI:\nUI:  is a state-of-the-art language model for Indonesian based on the BART model. The pretrained model is trained using the BART training objective.
## All Pre-trained Models
| Model | #params | Training data |
|--------------------------------|--------------------------------|-----------------------------------|
| `indobenchmark/indobart-v2` | 132M | Indo4B-Plus (26 GB of text) |
## Authors
<b>IndoBART</b> was trained and evaluated by Samuel Cahyawijaya*, Genta Indra Winata*, Bryan Wilie*, Karissa Vincentio*, Xiaohong Li*, Adhiguna Kuncoro*, Sebastian Ruder, Zhi Yuan Lim, Syafri Bahar, Masayu Leylia Khodra, Ayu Purwarianti, Pascale Fung
## Citation
If you use our work, please cite:
```bibtex
@article{cahyawijaya2021indonlg,
title={IndoNLG: Benchmark and Resources for Evaluating Indonesian Natural Language Generation},
author={Cahyawijaya, Samuel and Winata, Genta Indra and Wilie, Bryan and Vincentio, Karissa and Li, Xiaohong and Kuncoro, Adhiguna and Ruder, Sebastian and Lim, Zhi Yuan and Bahar, Syafri and Khodra, Masayu Leylia and others},
journal={arXiv preprint arXiv:2104.08200},
year={2021}
}
```
|
{"language": "id", "license": "mit", "tags": ["indogpt", "indobenchmark", "indonlg"], "datasets": ["Indo4B+"], "inference": false}
|
khavitidala/finetuned-indobartv2-id-su
| null |
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"indogpt",
"indobenchmark",
"indonlg",
"id",
"arxiv:2104.08200",
"license:mit",
"autotrain_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.08200"
] |
[
"id"
] |
TAGS
#transformers #pytorch #mbart #text2text-generation #indogpt #indobenchmark #indonlg #id #arxiv-2104.08200 #license-mit #autotrain_compatible #region-us
|
IndoBART-v2 Model fine-tuned version
====================================
Fine-tuned version of IndoBART-v2 for machine translation (id->su) using the default hyperparameters from the IndoBART paper.
by Ryan Abdurohman
IndoBART-v2 Model
=================
IndoBART-v2 is a state-of-the-art language model for Indonesian based on the BART model. The pretrained model is trained using the BART training objective.
All Pre-trained Models
----------------------
Model: 'indobenchmark/indobart-v2', #params: 132M, Training data: Indo4B-Plus (26 GB of text)
Authors
-------
**IndoBART** was trained and evaluated by Samuel Cahyawijaya\*, Genta Indra Winata\*, Bryan Wilie\*, Karissa Vincentio\*, Xiaohong Li\*, Adhiguna Kuncoro\*, Sebastian Ruder, Zhi Yuan Lim, Syafri Bahar, Masayu Leylia Khodra, Ayu Purwarianti, Pascale Fung
If you use our work, please cite:
|
[] |
[
"TAGS\n#transformers #pytorch #mbart #text2text-generation #indogpt #indobenchmark #indonlg #id #arxiv-2104.08200 #license-mit #autotrain_compatible #region-us \n"
] |
text-classification
|
transformers
|
# Unreliable News Classifier (English)
Trained, validated, and tested using a subset of the NELA-GT-2018 dataset. The dataset is split such that there was no overlap of news sources between the three sets.
This model used the pre-trained weights of `bert-base-cased` as a starting point and was able to achieve 84% accuracy on the test set.
For more details: [Github](https://github.com/khizon/CS284_final_project)
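A minimal inference sketch with the Hugging Face `pipeline` API (the example headline is illustrative, and the label names returned come from the checkpoint's config, which this card does not document):
```python
from transformers import pipeline

# Minimal sketch: load this checkpoint as a text-classification pipeline.
classifier = pipeline("text-classification", model="khizon/bert-unreliable-news-eng")
print(classifier("Scientists discover a new species of deep-sea fish near the Mariana Trench."))
```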
|
{}
|
khizon/bert-unreliable-news-eng
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
# Unreliable News Classifier (English)
Trained, validated, and tested using a subset of the NELA-GT-2018 dataset. The dataset is split such that there was no overlap of news sources between the three sets.
This model used the pre-trained weights of 'bert-base-cased' as a starting point and was able to achieve 84% accuracy on the test set.
For more details: Github
|
[
"# Unreliable News Classifier (English)\nTrained, validate, and tested using a subset of the NELA-GT-2018 dataset. The dataset is split such that there was no overlap in of news sources between the three sets.\nThis model used the pre-trained weights of 'bert-base-cased' as starting point and was able to achieve 84% accuracy on the test set.\n\nFor more details: Github"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n",
"# Unreliable News Classifier (English)\nTrained, validate, and tested using a subset of the NELA-GT-2018 dataset. The dataset is split such that there was no overlap in of news sources between the three sets.\nThis model used the pre-trained weights of 'bert-base-cased' as starting point and was able to achieve 84% accuracy on the test set.\n\nFor more details: Github"
] |
text-classification
|
transformers
|
# Unreliable News Classifier (English)
Trained, validated, and tested using a subset of the NELA-GT-2018 dataset. The dataset is split such that there was no overlap of news sources between the three sets.
This model used the pre-trained weights of `distilbert-base-cased` as a starting point (only 4 layers) and was able to achieve 84% accuracy on the test set. It has less than 1% difference in performance compared to the BERT-based model while having **2.0x** the speed.
For more details: [Github](https://github.com/khizon/CS284_final_project)
|
{}
|
khizon/distilbert-unreliable-news-eng-4L
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Unreliable News Classifier (English)
Trained, validated, and tested using a subset of the NELA-GT-2018 dataset. The dataset is split such that there was no overlap of news sources between the three sets.
This model used the pre-trained weights of 'distilbert-base-cased' as a starting point (only 4 layers) and was able to achieve 84% accuracy on the test set. It has less than 1% difference in performance compared to the BERT-based model while having 2.0x the speed.
For more details: Github
|
[
"# Unreliable News Classifier (English)\nTrained, validate, and tested using a subset of the NELA-GT-2018 dataset. The dataset is split such that there was no overlap in of news sources between the three sets.\nThis model used the pre-trained weights of 'distilbert-base-cased' as starting point (only 4 layers) and was able to achieve 84% accuracy on the test set. It has less than 1% difference in performance compared to the BERT based model while having 2.0x the speed.\n\nFor more details: Github"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Unreliable News Classifier (English)\nTrained, validate, and tested using a subset of the NELA-GT-2018 dataset. The dataset is split such that there was no overlap in of news sources between the three sets.\nThis model used the pre-trained weights of 'distilbert-base-cased' as starting point (only 4 layers) and was able to achieve 84% accuracy on the test set. It has less than 1% difference in performance compared to the BERT based model while having 2.0x the speed.\n\nFor more details: Github"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-georgian2-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4317
- Wer: 0.4280
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.7071 | 4.76 | 400 | 0.6897 | 0.7844 |
| 0.2908 | 9.52 | 800 | 0.4630 | 0.5582 |
| 0.1392 | 14.29 | 1200 | 0.4501 | 0.5006 |
| 0.0977 | 19.05 | 1600 | 0.4593 | 0.4755 |
| 0.075 | 23.81 | 2000 | 0.4340 | 0.4401 |
| 0.0614 | 28.57 | 2400 | 0.4317 | 0.4280 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-georgian2-colab", "results": []}]}
|
kika2000/wav2vec2-large-xls-r-300m-kika10
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-large-xls-r-300m-georgian2-colab
=========================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4317
* Wer: 0.4280
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-kika4_my-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 70
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-kika4_my-colab", "results": []}]}
|
kika2000/wav2vec2-large-xls-r-300m-kika4_my-colab
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec2-large-xls-r-300m-kika4_my-colab
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 70
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# wav2vec2-large-xls-r-300m-kika4_my-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 70\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec2-large-xls-r-300m-kika4_my-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 70\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-kika5_my-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3860
- Wer: 0.3505
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.0007 | 4.82 | 400 | 0.6696 | 0.8283 |
| 0.2774 | 9.64 | 800 | 0.4231 | 0.5476 |
| 0.1182 | 14.46 | 1200 | 0.4253 | 0.5102 |
| 0.0859 | 19.28 | 1600 | 0.4600 | 0.4866 |
| 0.0693 | 24.1 | 2000 | 0.4030 | 0.4533 |
| 0.0611 | 28.92 | 2400 | 0.4189 | 0.4412 |
| 0.0541 | 33.73 | 2800 | 0.4272 | 0.4380 |
| 0.0478 | 38.55 | 3200 | 0.4537 | 0.4505 |
| 0.0428 | 43.37 | 3600 | 0.4349 | 0.4181 |
| 0.038 | 48.19 | 4000 | 0.4562 | 0.4199 |
| 0.0345 | 53.01 | 4400 | 0.4209 | 0.4310 |
| 0.0316 | 57.83 | 4800 | 0.4336 | 0.4058 |
| 0.0288 | 62.65 | 5200 | 0.4004 | 0.3920 |
| 0.025 | 67.47 | 5600 | 0.4115 | 0.3857 |
| 0.0225 | 72.29 | 6000 | 0.4296 | 0.3948 |
| 0.0182 | 77.11 | 6400 | 0.3963 | 0.3772 |
| 0.0165 | 81.93 | 6800 | 0.3921 | 0.3687 |
| 0.0152 | 86.75 | 7200 | 0.3969 | 0.3592 |
| 0.0133 | 91.57 | 7600 | 0.3803 | 0.3527 |
| 0.0118 | 96.39 | 8000 | 0.3860 | 0.3505 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-kika5_my-colab", "results": []}]}
|
kika2000/wav2vec2-large-xls-r-300m-kika5_my-colab
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-large-xls-r-300m-kika5\_my-colab
=========================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3860
* Wer: 0.3505
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 100
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-kika_my-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3300
- Wer: 0.5804
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.8067 | 4.82 | 400 | 1.2892 | 0.8886 |
| 0.3048 | 9.64 | 800 | 1.2285 | 0.6797 |
| 0.1413 | 14.46 | 1200 | 1.1970 | 0.6509 |
| 0.1047 | 19.28 | 1600 | 1.3628 | 0.6166 |
| 0.0799 | 24.1 | 2000 | 1.3345 | 0.6014 |
| 0.0638 | 28.92 | 2400 | 1.3300 | 0.5804 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-kika_my-colab", "results": []}]}
|
kika2000/wav2vec2-large-xls-r-300m-kika_my-colab
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-large-xls-r-300m-kika\_my-colab
========================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3300
* Wer: 0.5804
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# Source Code
[<img src="https://api.flatworld.co/wp-content/uploads/2020/10/DAGsHub-Logo.png" alt="dagshub" width="150"/>](https://dagshub.com/kingabzpro/DailoGPT-RickBot)
[](https://github.com/kingabzpro/DailoGPT-RickBot)
# Testing
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained('kingabzpro/DialoGPT-small-Rick-Bot')
model = AutoModelWithLMHead.from_pretrained('kingabzpro/DialoGPT-small-Rick-Bot')
# Let's chat for 4 lines
for step in range(4):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# print(new_user_input_ids)
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generated a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=100,
top_p=0.7,
temperature=0.8
)
# pretty print last ouput tokens from bot
print("RickBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
**Result**
perplexity : 8.53
|
{"language": ["en"], "library_name": "transformers", "tags": ["gpt-2"], "datasets": ["ysharma/rickandmorty"], "metrics": ["perplexity"], "pipeline_tag": "conversational"}
|
kingabzpro/DialoGPT-small-Rick-Bot
| null |
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"gpt-2",
"conversational",
"en",
"dataset:ysharma/rickandmorty",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #gpt2 #text-generation #gpt-2 #conversational #en #dataset-ysharma/rickandmorty #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Source Code
<img src="URL alt="dagshub" width="150"/>

## Predicting English Translation

```python
import re
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the fine-tuned Yoruba-to-English translation checkpoint (tokenizer + model)
tokenizer = AutoTokenizer.from_pretrained("kingabzpro/Helsinki-NLP-opus-yor-mul-en")
model = AutoModelForSeq2SeqLM.from_pretrained("kingabzpro/Helsinki-NLP-opus-yor-mul-en").to('cuda')
# Prediction
a = model.generate(**tokenizer.prepare_seq2seq_batch('Nínú ìpè kan lẹ́yìn ìgbà náà, wọ́n sọ fún aṣojú iléeṣẹ́ BlaBlaCar pé ètò náà ti yí padà, pé',return_tensors='pt').to('cuda'))
text = tokenizer.batch_decode(a)
# Cleaning text
text = str(text)
text = re.sub("<pad> ","",text)
text = re.sub("'","",text)
text = text.replace("[", "")
text = text.replace("]", "")
text
```
## Result
```
'In a statement after that hearing, the BualaCard’s representative was told that the event had changed, that he had turned up.'
```
## ROUGE Score
**0.3025**
|
{"language": ["yo", "en"], "license": "apache-2.0", "tags": ["text", "machine-translation", "language-translation", "seq2seq", "helsinki-nlp"], "metrics": ["rouge"], "pipeline_tag": "translation"}
|
kingabzpro/Helsinki-NLP-opus-yor-mul-en
| null |
[
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"text",
"machine-translation",
"language-translation",
"seq2seq",
"helsinki-nlp",
"translation",
"yo",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"yo",
"en"
] |
TAGS
#transformers #pytorch #safetensors #marian #text2text-generation #text #machine-translation #language-translation #seq2seq #helsinki-nlp #translation #yo #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
## Predicting English Translation
## Result
## ROUGE Score
0.3025
|
[
"## Predicting English Translation",
"## Result",
"## ROGUE Score\n0.3025"
] |
[
"TAGS\n#transformers #pytorch #safetensors #marian #text2text-generation #text #machine-translation #language-translation #seq2seq #helsinki-nlp #translation #yo #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"## Predicting English Translation",
"## Result",
"## ROGUE Score\n0.3025"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-60-Urdu-V8
This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-urdu-urm-60](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-urdu-urm-60) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 11.4832
- Wer: 0.5729
- Cer: 0.3170
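A minimal transcription sketch with the automatic-speech-recognition pipeline (the audio file name is a placeholder; 16 kHz mono audio is assumed):
```python
from transformers import pipeline

# Minimal sketch: transcribe an Urdu audio clip with this checkpoint.
asr = pipeline("automatic-speech-recognition", model="kingabzpro/wav2vec2-60-Urdu-V8")
print(asr("urdu_sample.wav"))  # placeholder path to an audio file
```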
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 19.671 | 8.33 | 100 | 7.7671 | 0.8795 | 0.4492 |
| 2.085 | 16.67 | 200 | 9.2759 | 0.6201 | 0.3320 |
| 0.6633 | 25.0 | 300 | 8.7025 | 0.5738 | 0.3104 |
| 0.388 | 33.33 | 400 | 10.2286 | 0.5852 | 0.3128 |
| 0.2822 | 41.67 | 500 | 11.1953 | 0.5738 | 0.3174 |
| 0.2293 | 50.0 | 600 | 11.4832 | 0.5729 | 0.3170 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"language": ["ur"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "metrics": ["wer", "cer"], "base_model": "Harveenchadha/vakyansh-wav2vec2-urdu-urm-60", "model-index": [{"name": "wav2vec2-urdu-V8-Abid", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ur", "type": "mozilla-foundation/common_voice_8_0", "args": "ur"}, "metrics": [{"type": "wer", "value": 44.63, "name": "Test WER"}, {"type": "cer", "value": 18.82, "name": "Test CER"}]}]}]}
|
kingabzpro/wav2vec2-60-Urdu-V8
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"robust-speech-event",
"hf-asr-leaderboard",
"ur",
"dataset:mozilla-foundation/common_voice_8_0",
"base_model:Harveenchadha/vakyansh-wav2vec2-urdu-urm-60",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ur"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #robust-speech-event #hf-asr-leaderboard #ur #dataset-mozilla-foundation/common_voice_8_0 #base_model-Harveenchadha/vakyansh-wav2vec2-urdu-urm-60 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-60-Urdu-V8
===================
This model is a fine-tuned version of Harveenchadha/vakyansh-wav2vec2-urdu-urm-60 on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 11.4832
* Wer: 0.5729
* Cer: 0.3170
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 200
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #robust-speech-event #hf-asr-leaderboard #ur #dataset-mozilla-foundation/common_voice_8_0 #base_model-Harveenchadha/vakyansh-wav2vec2-urdu-urm-60 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
# wav2vec2-large-xlsr-53-urdu
This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-urdu-urm-60](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-urdu-urm-60) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Wer: 0.5913
- Cer: 0.3310
## Model description
The training and validation data amount to 0.58 hours of audio. It was hard to train any model on such a small number of samples, so I decided to take the vakyansh-wav2vec2-urdu-urm-60 checkpoint and fine-tune the wav2vec2 model.
## Training procedure
Fine-tuned from Harveenchadha/vakyansh-wav2vec2-urdu-urm-60 because of the small number of samples.
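A hedged sketch of how such a continuation can be set up -- loading the Urdu checkpoint instead of the multilingual base model -- is shown below; the processor reuse, CTC settings, and feature-encoder freezing are assumptions about the setup, not details stated in the card.
```python
# Sketch (not the author's script): resume fine-tuning from the Urdu checkpoint.
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

checkpoint = "Harveenchadha/vakyansh-wav2vec2-urdu-urm-60"

processor = Wav2Vec2Processor.from_pretrained(checkpoint)  # assumes the repo ships a tokenizer
model = Wav2Vec2ForCTC.from_pretrained(
    checkpoint,
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
)

# Freezing the convolutional feature encoder is common practice when the
# fine-tuning set is tiny (~0.58 hours here); older transformers versions
# call this freeze_feature_extractor().
model.freeze_feature_encoder()
```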
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 12.6045 | 8.33 | 100 | 8.4997 | 0.6978 | 0.3923 |
| 1.3367 | 16.67 | 200 | 5.0015 | 0.6515 | 0.3556 |
| 0.5344 | 25.0 | 300 | 9.3687 | 0.6393 | 0.3625 |
| 0.2922 | 33.33 | 400 | 9.2381 | 0.6236 | 0.3432 |
| 0.1867 | 41.67 | 500 | 6.2150 | 0.6035 | 0.3448 |
| 0.1166 | 50.0 | 600 | 6.4496 | 0.5913 | 0.3310 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"language": ["ur"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "wav2vec2-60-urdu", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ur", "type": "mozilla-foundation/common_voice_7_0", "args": "ur"}, "metrics": [{"type": "wer", "value": 59.1, "name": "Test WER", "args": {"learning_rate": 0.0003, "train_batch_size": 16, "eval_batch_size": 8, "seed": 42, "gradient_accumulation_steps": 2, "total_train_batch_size": 32, "optimizer": "Adam with betas=(0.9,0.999) and epsilon=1e-08", "lr_scheduler_type": "linear", "lr_scheduler_warmup_steps": 200, "num_epochs": 50, "mixed_precision_training": "Native AMP"}}, {"type": "cer", "value": 33.1, "name": "Test CER", "args": {"learning_rate": 0.0003, "train_batch_size": 16, "eval_batch_size": 8, "seed": 42, "gradient_accumulation_steps": 2, "total_train_batch_size": 32, "optimizer": "Adam with betas=(0.9,0.999) and epsilon=1e-08", "lr_scheduler_type": "linear", "lr_scheduler_warmup_steps": 200, "num_epochs": 50, "mixed_precision_training": "Native AMP"}}]}]}]}
|
kingabzpro/wav2vec2-60-urdu
| null |
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"robust-speech-event",
"ur",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ur"
] |
TAGS
#transformers #pytorch #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #robust-speech-event #ur #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-large-xlsr-53-urdu
===========================
This model is a fine-tuned version of Harveenchadha/vakyansh-wav2vec2-urdu-urm-60 on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Wer: 0.5913
* Cer: 0.3310
Model description
-----------------
The training and validation data amount to 0.58 hours of audio. It was hard to train any model on such a small number of samples, so I decided to take the vakyansh-wav2vec2-urdu-urm-60 checkpoint and fine-tune the wav2vec2 model.
Training procedure
------------------
Fine-tuned from Harveenchadha/vakyansh-wav2vec2-urdu-urm-60 because of the small number of samples.
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 200
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #robust-speech-event #ur #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1b-Indonesian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9550
- Wer: 0.4551
- Cer: 0.1643
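The reported error rates can be recomputed from model predictions with `jiwer`; the short sketch below uses placeholder strings rather than real Common Voice transcripts.
```python
# Minimal sketch of the WER/CER computation; the sentences are placeholders.
from jiwer import cer, wer

references  = ["saya sedang belajar bahasa indonesia"]
predictions = ["saya sedang belajar bahasa indonesa"]

print("WER:", wer(references, predictions))
print("CER:", cer(references, predictions))
```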
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 3.663 | 7.69 | 200 | 0.7898 | 0.6039 | 0.1848 |
| 0.7424 | 15.38 | 400 | 1.0215 | 0.5615 | 0.1924 |
| 0.4494 | 23.08 | 600 | 1.0901 | 0.5249 | 0.1932 |
| 0.5075 | 30.77 | 800 | 1.1013 | 0.5079 | 0.1935 |
| 0.4671 | 38.46 | 1000 | 1.1034 | 0.4916 | 0.1827 |
| 0.1928 | 46.15 | 1200 | 0.9550 | 0.4551 | 0.1643 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
{"language": ["id"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "metrics": ["wer", "cer"], "base_model": "facebook/wav2vec2-xls-r-1b", "model-index": [{"name": "wav2vec2-large-xls-r-1b-Indonesian", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice id", "type": "mozilla-foundation/common_voice_8_0", "args": "id"}, "metrics": [{"type": "wer", "value": 45.51, "name": "Test WER"}, {"type": "cer", "value": 16.43, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "id"}, "metrics": [{"type": "wer", "value": 72.73, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "id"}, "metrics": [{"type": "wer", "value": 79.29, "name": "Test WER"}]}]}]}
|
kingabzpro/wav2vec2-large-xls-r-1b-Indonesian
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"robust-speech-event",
"id",
"dataset:mozilla-foundation/common_voice_8_0",
"base_model:facebook/wav2vec2-xls-r-1b",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #robust-speech-event #id #dataset-mozilla-foundation/common_voice_8_0 #base_model-facebook/wav2vec2-xls-r-1b #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-large-xls-r-1b-Indonesian
==================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9550
* Wer: 0.4551
* Cer: 0.1643
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 64
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 400
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 400\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #robust-speech-event #id #dataset-mozilla-foundation/common_voice_8_0 #base_model-facebook/wav2vec2-xls-r-1b #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 400\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1b-Irish
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3599
- Wer: 0.4236
- Cer: 0.1768
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id kingabzpro/wav2vec2-large-xls-r-1b-Irish --dataset mozilla-foundation/common_voice_8_0 --config ga-IE --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "kingabzpro/wav2vec2-large-xls-r-1b-Irish"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "ga-IE", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 6.3955 | 12.48 | 100 | 2.9897 | 1.0 | 1.0 |
| 2.3811 | 24.97 | 200 | 1.2304 | 0.7140 | 0.3106 |
| 1.0476 | 37.48 | 300 | 1.0661 | 0.5597 | 0.2407 |
| 0.7014 | 49.97 | 400 | 1.1788 | 0.4799 | 0.1947 |
| 0.4409 | 62.48 | 500 | 1.2649 | 0.4658 | 0.1997 |
| 0.4839 | 74.97 | 600 | 1.3259 | 0.4450 | 0.1868 |
| 0.3643 | 87.48 | 700 | 1.3506 | 0.4312 | 0.1760 |
| 0.3468 | 99.97 | 800 | 1.3599 | 0.4236 | 0.1768 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
{"language": ["ga"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "metrics": ["wer", "cer"], "base_model": "facebook/wav2vec2-xls-r-1b", "model-index": [{"name": "wav2vec2-large-xls-r-1b-Irish-Abid", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ga-IE", "type": "mozilla-foundation/common_voice_8_0", "args": "ga-IE"}, "metrics": [{"type": "wer", "value": 38.45, "name": "Test WER With LM"}, {"type": "cer", "value": 16.52, "name": "Test CER With LM"}]}]}]}
|
kingabzpro/wav2vec2-large-xls-r-1b-Irish
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"robust-speech-event",
"hf-asr-leaderboard",
"ga",
"dataset:mozilla-foundation/common_voice_8_0",
"base_model:facebook/wav2vec2-xls-r-1b",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ga"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #robust-speech-event #hf-asr-leaderboard #ga #dataset-mozilla-foundation/common_voice_8_0 #base_model-facebook/wav2vec2-xls-r-1b #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
wav2vec2-large-xls-r-1b-Irish
=============================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3599
* Wer: 0.4236
* Cer: 0.1768
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test'
### Inference With LM
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 200
* num\_epochs: 100
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
|
[
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'",
"### Inference With LM",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #robust-speech-event #hf-asr-leaderboard #ga #dataset-mozilla-foundation/common_voice_8_0 #base_model-facebook/wav2vec2-xls-r-1b #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'",
"### Inference With LM",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1b-Swedish
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset.
It achieves the following results on the evaluation set:
**Without LM**
- Loss: 0.3370
- Wer: 18.44
- Cer: 5.75
**With LM**
- Loss: 0.3370
- Wer: 14.04
- Cer: 4.86
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id kingabzpro/wav2vec2-large-xls-r-1b-Swedish --dataset mozilla-foundation/common_voice_8_0 --config sv-SE --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id kingabzpro/wav2vec2-large-xls-r-1b-Swedish --dataset speech-recognition-community-v2/dev_data --config sv --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "kingabzpro/wav2vec2-large-xls-r-1b-Swedish"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "sv-SE", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 3.1562 | 11.11 | 500 | 0.4830 | 0.3729 | 0.1169 |
| 0.5655 | 22.22 | 1000 | 0.3553 | 0.2381 | 0.0743 |
| 0.3376 | 33.33 | 1500 | 0.3359 | 0.2179 | 0.0696 |
| 0.2419 | 44.44 | 2000 | 0.3232 | 0.1844 | 0.0575 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
{"language": ["sv"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "metrics": ["wer", "cer"], "base_model": "facebook/wav2vec2-xls-r-1b", "model-index": [{"name": "wav2vec2-large-xls-r-1b-Swedish", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice sv-SE", "type": "mozilla-foundation/common_voice_8_0", "args": "sv-SE"}, "metrics": [{"type": "wer", "value": 14.04, "name": "Test WER With LM"}, {"type": "cer", "value": 4.86, "name": "Test CER With LM"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "sv"}, "metrics": [{"type": "wer", "value": 29.69, "name": "Test WER"}, {"type": "cer", "value": 12.59, "name": "Test CER"}]}]}]}
|
kingabzpro/wav2vec2-large-xls-r-1b-Swedish
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"robust-speech-event",
"hf-asr-leaderboard",
"sv",
"dataset:mozilla-foundation/common_voice_8_0",
"base_model:facebook/wav2vec2-xls-r-1b",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"sv"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #robust-speech-event #hf-asr-leaderboard #sv #dataset-mozilla-foundation/common_voice_8_0 #base_model-facebook/wav2vec2-xls-r-1b #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-large-xls-r-1b-Swedish
===============================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the common\_voice dataset.
It achieves the following results on the evaluation set:
Without LM
* Loss: 0.3370
* Wer: 18.44
* Cer: 5.75
With LM
* Loss: 0.3370
* Wer: 14.04
* Cer: 4.86
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test'
2. To evaluate on 'speech-recognition-community-v2/dev\_data'
### Inference With LM
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 64
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 256
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
|
[
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'\n2. To evaluate on 'speech-recognition-community-v2/dev\\_data'",
"### Inference With LM",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #robust-speech-event #hf-asr-leaderboard #sv #dataset-mozilla-foundation/common_voice_8_0 #base_model-facebook/wav2vec2-xls-r-1b #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'\n2. To evaluate on 'speech-recognition-community-v2/dev\\_data'",
"### Inference With LM",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-Indonesian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4087
- Wer: 0.2461
- Cer: 0.0666
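No usage snippet is given in this card; the sketch below transcribes a local recording with plain greedy CTC decoding, so it may score slightly worse than an LM-boosted decode. The file name is an assumption, and the audio must be 16 kHz mono.
```python
# Hedged inference sketch: greedy decoding without the language model.
import librosa
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "kingabzpro/wav2vec2-large-xls-r-300m-Indonesian"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("contoh.wav", sr=16_000)   # hypothetical file
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```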
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 5.0788 | 4.26 | 200 | 2.9389 | 1.0 | 1.0 |
| 2.8288 | 8.51 | 400 | 2.2535 | 1.0 | 0.8004 |
| 0.907 | 12.77 | 600 | 0.4558 | 0.4243 | 0.1095 |
| 0.4071 | 17.02 | 800 | 0.4013 | 0.3468 | 0.0913 |
| 0.3 | 21.28 | 1000 | 0.4167 | 0.3075 | 0.0816 |
| 0.2544 | 25.53 | 1200 | 0.4132 | 0.2835 | 0.0762 |
| 0.2145 | 29.79 | 1400 | 0.3878 | 0.2693 | 0.0729 |
| 0.1923 | 34.04 | 1600 | 0.4023 | 0.2623 | 0.0702 |
| 0.1681 | 38.3 | 1800 | 0.3984 | 0.2581 | 0.0686 |
| 0.1598 | 42.55 | 2000 | 0.3982 | 0.2493 | 0.0663 |
| 0.1464 | 46.81 | 2200 | 0.4087 | 0.2461 | 0.0666 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
{"language": ["id"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-Indonesian", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice id", "type": "mozilla-foundation/common_voice_7_0", "args": "id"}, "metrics": [{"type": "wer", "value": 25.06, "name": "Test WER With LM"}, {"type": "cer", "value": 6.5, "name": "Test CER With LM"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "id"}, "metrics": [{"type": "wer", "value": 99.61, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "id"}, "metrics": [{"type": "wer", "value": 106.39, "name": "Test WER"}]}]}]}
|
kingabzpro/wav2vec2-large-xls-r-300m-Indonesian
| null |
[
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"robust-speech-event",
"id",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #robust-speech-event #id #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-large-xls-r-300m-Indonesian
====================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4087
* Wer: 0.2461
* Cer: 0.0666
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 64
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 400
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 400\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #robust-speech-event #id #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 400\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-Swedish
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3641
- Wer: 0.2473
- Cer: 0.0758
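As a quick sanity check, the sketch below scores a handful of Common Voice `sv-SE` test clips with the ASR pipeline; streaming access, the 16-clip subset, and the use of `jiwer` are all assumptions rather than part of the card.
```python
# Hedged evaluation sketch over a small streamed subset of Common Voice sv-SE.
from datasets import Audio, load_dataset
from jiwer import wer
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="kingabzpro/wav2vec2-large-xls-r-300m-Swedish")

data = load_dataset("mozilla-foundation/common_voice_8_0", "sv-SE",
                    split="test", streaming=True, use_auth_token=True)
data = data.cast_column("audio", Audio(sampling_rate=16_000))

refs, hyps = [], []
for sample in data.take(16):                 # small subset, for illustration only
    hyps.append(asr(sample["audio"]["array"])["text"])
    refs.append(sample["sentence"])

print("WER on 16 clips:", wer(refs, hyps))
```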
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 6.1097 | 5.49 | 500 | 3.1422 | 1.0 | 1.0 |
| 2.985 | 10.98 | 1000 | 1.7357 | 0.9876 | 0.4125 |
| 1.0363 | 16.48 | 1500 | 0.4773 | 0.3510 | 0.1047 |
| 0.6111 | 21.97 | 2000 | 0.3937 | 0.2998 | 0.0910 |
| 0.4942 | 27.47 | 2500 | 0.3779 | 0.2776 | 0.0844 |
| 0.4421 | 32.96 | 3000 | 0.3745 | 0.2630 | 0.0807 |
| 0.4018 | 38.46 | 3500 | 0.3685 | 0.2553 | 0.0781 |
| 0.3759 | 43.95 | 4000 | 0.3618 | 0.2488 | 0.0761 |
| 0.3646 | 49.45 | 4500 | 0.3641 | 0.2473 | 0.0758 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
{"language": ["sv"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "metrics": ["wer", "cer"], "base_model": "facebook/wav2vec2-xls-r-300m", "model-index": [{"name": "wav2vec2-xls-r-300m-swedish", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice sv-SE", "type": "mozilla-foundation/common_voice_8_0", "args": "sv-SE"}, "metrics": [{"type": "wer", "value": 24.73, "name": "Test WER", "args": {"learning_rate": 7.5e-05, "train_batch_size": 64, "eval_batch_size": 8, "seed": 42, "gradient_accumulation_steps": 2, "total_train_batch_size": 128, "optimizer": "Adam with betas=(0.9,0.999) and epsilon=1e-08", "lr_scheduler_type": "linear", "lr_scheduler_warmup_steps": 1000, "num_epochs": 50, "mixed_precision_training": "Native AMP"}}, {"type": "cer", "value": 7.58, "name": "Test CER", "args": {"learning_rate": 7.5e-05, "train_batch_size": 64, "eval_batch_size": 8, "seed": 42, "gradient_accumulation_steps": 2, "total_train_batch_size": 128, "optimizer": "Adam with betas=(0.9,0.999) and epsilon=1e-08", "lr_scheduler_type": "linear", "lr_scheduler_warmup_steps": 1000, "num_epochs": 50, "mixed_precision_training": "Native AMP"}}]}]}]}
|
kingabzpro/wav2vec2-large-xls-r-300m-Swedish
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"robust-speech-event",
"hf-asr-leaderboard",
"sv",
"dataset:mozilla-foundation/common_voice_8_0",
"base_model:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"sv"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #robust-speech-event #hf-asr-leaderboard #sv #dataset-mozilla-foundation/common_voice_8_0 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-large-xls-r-300m-Swedish
=================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3641
* Wer: 0.2473
* Cer: 0.0758
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 64
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #robust-speech-event #hf-asr-leaderboard #sv #dataset-mozilla-foundation/common_voice_8_0 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-Tatar
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5068
- Wer: 0.4263
- Cer: 0.1117
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id kingabzpro/wav2vec2-large-xls-r-300m-Tatar --dataset mozilla-foundation/common_voice_8_0 --config tt --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "kingabzpro/wav2vec2-large-xls-r-300m-Tatar"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "tt", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 8.4116 | 12.19 | 500 | 3.4118 | 1.0 | 1.0 |
| 2.5829 | 24.39 | 1000 | 0.7150 | 0.6151 | 0.1582 |
| 0.4492 | 36.58 | 1500 | 0.5378 | 0.4577 | 0.1210 |
| 0.3007 | 48.77 | 2000 | 0.5068 | 0.4263 | 0.1117 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
{"language": ["tt"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "metrics": ["wer", "cer"], "base_model": "facebook/wav2vec2-xls-r-300m", "model-index": [{"name": "wav2vec2-large-xls-r-300m-Tatar", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice tt", "type": "mozilla-foundation/common_voice_8_0", "args": "tt"}, "metrics": [{"type": "wer", "value": 42.71, "name": "Test WER With LM"}, {"type": "cer", "value": 11.18, "name": "Test CER With LM"}]}]}]}
|
kingabzpro/wav2vec2-large-xls-r-300m-Tatar
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"robust-speech-event",
"hf-asr-leaderboard",
"tt",
"dataset:mozilla-foundation/common_voice_8_0",
"base_model:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tt"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #robust-speech-event #hf-asr-leaderboard #tt #dataset-mozilla-foundation/common_voice_8_0 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-large-xls-r-300m-Tatar
===============================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5068
* Wer: 0.4263
* Cer: 0.1117
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test'
### Inference With LM
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 64
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 256
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
|
[
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'",
"### Inference With LM",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #robust-speech-event #hf-asr-leaderboard #tt #dataset-mozilla-foundation/common_voice_8_0 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'",
"### Inference With LM",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-Urdu
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9889
- Wer: 0.5607
- Cer: 0.2370
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id kingabzpro/wav2vec2-large-xls-r-300m-Urdu --dataset mozilla-foundation/common_voice_8_0 --config ur --split test
```
### Inference With LM
```python
from datasets import load_dataset, Audio
from transformers import pipeline
model = "kingabzpro/wav2vec2-large-xls-r-300m-Urdu"
data = load_dataset("mozilla-foundation/common_voice_8_0",
"ur",
split="test",
streaming=True,
use_auth_token=True)
sample_iter = iter(data.cast_column("path",
Audio(sampling_rate=16_000)))
sample = next(sample_iter)
asr = pipeline("automatic-speech-recognition", model=model)
prediction = asr(sample["path"]["array"],
chunk_length_s=5,
stride_length_s=1)
prediction
# => {'text': 'اب یہ ونگین لمحاتانکھار دلمیں میںفوث کریلیا اجائ'}
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| 3.6398 | 30.77 | 400 | 3.3517 | 1.0 | 1.0 |
| 2.9225 | 61.54 | 800 | 2.5123 | 1.0 | 0.8310 |
| 1.2568 | 92.31 | 1200 | 0.9699 | 0.6273 | 0.2575 |
| 0.8974 | 123.08 | 1600 | 0.9715 | 0.5888 | 0.2457 |
| 0.7151 | 153.85 | 2000 | 0.9984 | 0.5588 | 0.2353 |
| 0.6416 | 184.62 | 2400 | 0.9889 | 0.5607 | 0.2370 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 52.03 | 39.89 |
|
{"language": ["ur"], "license": "apache-2.0", "tags": ["generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "metrics": ["wer"], "base_model": "facebook/wav2vec2-xls-r-300m", "model-index": [{"name": "wav2vec2-large-xls-r-300m-Urdu", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "ur"}, "metrics": [{"type": "wer", "value": 39.89, "name": "Test WER"}, {"type": "cer", "value": 16.7, "name": "Test CER"}]}]}]}
|
kingabzpro/wav2vec2-large-xls-r-300m-Urdu
| null |
[
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"ur",
"dataset:mozilla-foundation/common_voice_8_0",
"base_model:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ur"
] |
TAGS
#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #ur #dataset-mozilla-foundation/common_voice_8_0 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
wav2vec2-large-xls-r-300m-Urdu
==============================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9889
* Wer: 0.5607
* Cer: 0.2370
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test'
### Inference With LM
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 200
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
### Eval results on Common Voice 8 "test" (WER):
|
[
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'",
"### Inference With LM",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 200",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0",
"### Eval results on Common Voice 8 \"test\" (WER):"
] |
[
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #ur #dataset-mozilla-foundation/common_voice_8_0 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'",
"### Inference With LM",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 200",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0",
"### Eval results on Common Voice 8 \"test\" (WER):"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-300-arabic
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4514
- Wer: 0.4256
- Cer: 0.1528
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id kingabzpro/wav2vec2-large-xlsr-300-arabic --dataset mozilla-foundation/common_voice_7_0 --config ar --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "kingabzpro/wav2vec2-large-xlsr-300-arabic"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "ar", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 5.4375 | 1.8 | 500 | 3.3330 | 1.0 | 1.0 |
| 2.2187 | 3.6 | 1000 | 0.7790 | 0.6501 | 0.2338 |
| 0.9471 | 5.4 | 1500 | 0.5353 | 0.5015 | 0.1822 |
| 0.7416 | 7.19 | 2000 | 0.4889 | 0.4490 | 0.1640 |
| 0.6358 | 8.99 | 2500 | 0.4514 | 0.4256 | 0.1528 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
{"language": ["ar"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "metrics": ["wer", "cer"], "base_model": "facebook/wav2vec2-xls-r-300m", "model-index": [{"name": "wav2vec2-xls-r-300m-arabic", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ar", "type": "mozilla-foundation/common_voice_7_0", "args": "ar"}, "metrics": [{"type": "wer", "value": 38.83, "name": "Test WER With LM"}, {"type": "cer", "value": 15.33, "name": "Test CER With LM"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ar"}, "metrics": [{"type": "wer", "value": 89.8, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ar"}, "metrics": [{"type": "wer", "value": 87.46, "name": "Test WER"}]}]}]}
|
kingabzpro/wav2vec2-large-xlsr-300-arabic
| null |
[
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"robust-speech-event",
"ar",
"dataset:mozilla-foundation/common_voice_7_0",
"base_model:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ar"
] |
TAGS
#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #robust-speech-event #ar #dataset-mozilla-foundation/common_voice_7_0 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-large-xlsr-300-arabic
==============================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4514
* Wer: 0.4256
* Cer: 0.1528
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_7\_0' with split 'test'
### Inference With LM
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 64
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 10
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
|
[
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_7\\_0' with split 'test'",
"### Inference With LM",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #robust-speech-event #ar #dataset-mozilla-foundation/common_voice_7_0 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_7\\_0' with split 'test'",
"### Inference With LM",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
# wav2vec2-large-xlsr-53-punjabi
This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-punjabi-pam-10](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-punjabi-pam-10) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2101
- Wer: 0.4939
- Cer: 0.2238
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id kingabzpro/wav2vec2-large-xlsr-53-punjabi --dataset mozilla-foundation/common_voice_8_0 --config pa-IN --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F

model_id = "kingabzpro/wav2vec2-large-xlsr-53-punjabi"

# Stream one test sample from Common Voice 8.0 (Punjabi) and resample 48 kHz -> 16 kHz
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "pa-IN", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()

model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

input_values = processor(resampled_audio, return_tensors="pt").input_values

with torch.no_grad():
    logits = model(input_values).logits

transcription = processor.batch_decode(logits.numpy()).text
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 30
- mixed_precision_training: Native AMP
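As with the other cards, these settings correspond roughly to the `TrainingArguments` sketch below. It is reconstructed from the list, not taken from the author's script; the `output_dir` value and the `fp16` flag are assumptions.

```python
from transformers import TrainingArguments

# Hedged sketch: hyperparameters reconstructed from the list above.
# output_dir is an assumed placeholder; Adam betas/epsilon are the library defaults.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xlsr-53-punjabi",  # assumed
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size: 32
    num_train_epochs=30,
    lr_scheduler_type="linear",
    warmup_steps=200,
    seed=42,
    fp16=True,                       # Native AMP mixed precision
)
```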
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 11.0563 | 3.7 | 100 | 1.9492 | 0.7123 | 0.3872 |
| 1.6715 | 7.41 | 200 | 1.3142 | 0.6433 | 0.3086 |
| 0.9117 | 11.11 | 300 | 1.2733 | 0.5657 | 0.2627 |
| 0.666 | 14.81 | 400 | 1.2730 | 0.5598 | 0.2534 |
| 0.4225 | 18.52 | 500 | 1.2548 | 0.5300 | 0.2399 |
| 0.3209 | 22.22 | 600 | 1.2166 | 0.5229 | 0.2372 |
| 0.2678 | 25.93 | 700 | 1.1795 | 0.5041 | 0.2276 |
| 0.2088 | 29.63 | 800 | 1.2101 | 0.4939 | 0.2238 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
{"language": ["pa"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "metrics": ["wer", "cer"], "base_model": "Harveenchadha/vakyansh-wav2vec2-punjabi-pam-10", "model-index": [{"name": "wav2vec2-punjabi-V8-Abid", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice pa-IN", "type": "mozilla-foundation/common_voice_8_0", "args": "pa-IN"}, "metrics": [{"type": "wer", "value": 36.02, "name": "Test WER With LM"}, {"type": "cer", "value": 12.81, "name": "Test CER With LM"}]}]}]}
|
kingabzpro/wav2vec2-large-xlsr-53-punjabi
| null |
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"robust-speech-event",
"pa",
"dataset:mozilla-foundation/common_voice_8_0",
"base_model:Harveenchadha/vakyansh-wav2vec2-punjabi-pam-10",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"pa"
] |
TAGS
#transformers #pytorch #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #robust-speech-event #pa #dataset-mozilla-foundation/common_voice_8_0 #base_model-Harveenchadha/vakyansh-wav2vec2-punjabi-pam-10 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-large-xlsr-53-punjabi
==============================
This model is a fine-tuned version of Harveenchadha/vakyansh-wav2vec2-punjabi-pam-10 on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 1.2101
* Wer: 0.4939
* Cer: 0.2238
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test'
### Inference With LM
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 200
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
|
[
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'",
"### Inference With LM",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #robust-speech-event #pa #dataset-mozilla-foundation/common_voice_8_0 #base_model-Harveenchadha/vakyansh-wav2vec2-punjabi-pam-10 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'",
"### Inference With LM",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
## Evaluation on WOLOF Test
[](https://github.com/kingabzpro/WOLOF-ASR-Wav2Vec2)
```python
import pandas as pd
from datasets import load_dataset, load_metric, Dataset
from tqdm import tqdm
import torch
import soundfile as sf
import torchaudio
from transformers import Wav2Vec2ForCTC
from transformers import Wav2Vec2Processor
from transformers import Wav2Vec2FeatureExtractor
from transformers import Wav2Vec2CTCTokenizer

model_name = "kingabzpro/wav2vec2-large-xlsr-53-wolof"
device = "cuda"

model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)

# Load the Zindi test split and point each row at its denoised .wav file
val = pd.read_csv("../input/automatic-speech-recognition-in-wolof/Test.csv")
val["path"] = "../input/automatic-speech-recognition-in-wolof/Noise Removed/tmp/WOLOF_ASR_dataset/noise_remove/" + val["ID"] + ".wav"
val.rename(columns={'transcription': 'sentence'}, inplace=True)
common_voice_val = Dataset.from_pandas(val)

def speech_file_to_array_fn_test(batch):
    # Read the .wav file (16 000 Hz sample rate)
    speech_array, sampling_rate = sf.read(batch["path"])
    batch["speech"] = speech_array
    batch["sampling_rate"] = sampling_rate
    return batch

def prepare_dataset_test(batch):
    # Check that all files have the correct sampling rate
    assert (
        len(set(batch["sampling_rate"])) == 1
    ), f"Make sure all inputs have the same sampling rate of {processor.feature_extractor.sampling_rate}."
    batch["input_values"] = processor(batch["speech"], padding=True, sampling_rate=batch["sampling_rate"][0]).input_values
    return batch

# Drop metadata columns, then build model inputs
common_voice_val = common_voice_val.remove_columns(["ID", "age", "down_votes", "gender", "up_votes"])
common_voice_val = common_voice_val.map(speech_file_to_array_fn_test, remove_columns=common_voice_val.column_names)
common_voice_val = common_voice_val.map(prepare_dataset_test, remove_columns=common_voice_val.column_names, batch_size=8, num_proc=4, batched=True)

# Run the model on the Wolof test set and collect greedy CTC decodes
final_pred = []
for i in tqdm(range(common_voice_val.shape[0])):
    input_dict = processor(common_voice_val[i]["input_values"], return_tensors="pt", padding=True)
    logits = model(input_dict.input_values.to(device)).logits
    pred_ids = torch.argmax(logits, dim=-1)[0]
    prediction = processor.decode(pred_ids)
    final_pred.append(prediction)
```
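The snippet above imports `load_metric` but never scores the predictions. If reference transcriptions exist for the decoded split (the renamed `sentence` column), WER could be computed along these lines; treat it as a sketch, since the Zindi test labels are not public and the column contents are an assumption here.

```python
from datasets import load_metric

# Hedged sketch: score the greedy decodes against reference transcriptions,
# assuming the `sentence` column of `val` really holds ground-truth text.
wer_metric = load_metric("wer")
references = val["sentence"].tolist()
wer = wer_metric.compute(predictions=final_pred, references=references)
print(f"WER: {wer:.2%}")
```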
You can check my result on [Zindi](https://zindi.africa/competitions/ai4d-baamtu-datamation-automatic-speech-recognition-in-wolof/leaderboard), where I ranked 8th in the AI4D Baamtu Datamation - Automatic Speech Recognition in WOLOF challenge.
**Result (WER)**: 7.88 %
|
{"language": ["wo"], "license": "apache-2.0", "metrics": ["wer"], "pipeline_tag": "automatic-speech-recognition"}
|
kingabzpro/wav2vec2-large-xlsr-53-wolof
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"wo",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"wo"
] |
TAGS
#transformers #pytorch #jax #safetensors #wav2vec2 #automatic-speech-recognition #wo #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
## Evaluation on WOLOF Test
![github](URL
You can check my result on Zindi, I got 8th rank in AI4D Baamtu Datamation - Automatic Speech Recognition in WOLOF
Result: 7.88 %
|
[
"## Evaluation on WOLOF Test\n\n![github](URL\n\nYou can check my result on Zindi, I got 8th rank in AI4D Baamtu Datamation - Automatic Speech Recognition in WOLOF\n\nResult: 7.88 %"
] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #wav2vec2 #automatic-speech-recognition #wo #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"## Evaluation on WOLOF Test\n\n![github](URL\n\nYou can check my result on Zindi, I got 8th rank in AI4D Baamtu Datamation - Automatic Speech Recognition in WOLOF\n\nResult: 7.88 %"
] |