| modelId<br>string (length 5 to 139) | author<br>string (length 2 to 42) | last_modified<br>timestamp[us, tz=UTC] (2020-02-15 11:33:14 to 2025-09-06 06:27:01) | downloads<br>int64 (0 to 223M) | likes<br>int64 (0 to 11.7k) | library_name<br>string (542 classes) | tags<br>list (length 1 to 4.05k) | pipeline_tag<br>string (55 classes) | createdAt<br>timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2025-09-06 06:26:44) | card<br>string (length 11 to 1.01M) |
---|---|---|---|---|---|---|---|---|---|
wietsedv/xlm-roberta-base-ft-udpos28-ar
|
wietsedv
| 2022-02-25T09:58:02Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"part-of-speech",
"ar",
"dataset:universal_dependencies",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language:
- ar
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-ar
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 62.8
- type: accuracy
name: Dutch Test accuracy
value: 63.5
- type: accuracy
name: German Test accuracy
value: 63.8
- type: accuracy
name: Italian Test accuracy
value: 60.2
- type: accuracy
name: French Test accuracy
value: 58.5
- type: accuracy
name: Spanish Test accuracy
value: 64.9
- type: accuracy
name: Russian Test accuracy
value: 77.2
- type: accuracy
name: Swedish Test accuracy
value: 68.5
- type: accuracy
name: Norwegian Test accuracy
value: 64.6
- type: accuracy
name: Danish Test accuracy
value: 66.1
- type: accuracy
name: Low Saxon Test accuracy
value: 28.0
- type: accuracy
name: Akkadian Test accuracy
value: 3.9
- type: accuracy
name: Armenian Test accuracy
value: 69.4
- type: accuracy
name: Welsh Test accuracy
value: 58.8
- type: accuracy
name: Old East Slavic Test accuracy
value: 55.6
- type: accuracy
name: Albanian Test accuracy
value: 68.1
- type: accuracy
name: Slovenian Test accuracy
value: 64.7
- type: accuracy
name: Guajajara Test accuracy
value: 15.0
- type: accuracy
name: Kurmanji Test accuracy
value: 59.1
- type: accuracy
name: Turkish Test accuracy
value: 62.4
- type: accuracy
name: Finnish Test accuracy
value: 66.9
- type: accuracy
name: Indonesian Test accuracy
value: 66.3
- type: accuracy
name: Ukrainian Test accuracy
value: 77.7
- type: accuracy
name: Polish Test accuracy
value: 77.0
- type: accuracy
name: Portuguese Test accuracy
value: 66.5
- type: accuracy
name: Kazakh Test accuracy
value: 68.1
- type: accuracy
name: Latin Test accuracy
value: 60.9
- type: accuracy
name: Old French Test accuracy
value: 25.6
- type: accuracy
name: Buryat Test accuracy
value: 33.6
- type: accuracy
name: Kaapor Test accuracy
value: 2.5
- type: accuracy
name: Korean Test accuracy
value: 52.0
- type: accuracy
name: Estonian Test accuracy
value: 66.5
- type: accuracy
name: Croatian Test accuracy
value: 73.3
- type: accuracy
name: Gothic Test accuracy
value: 7.2
- type: accuracy
name: Swiss German Test accuracy
value: 30.4
- type: accuracy
name: Assyrian Test accuracy
value: 14.6
- type: accuracy
name: North Sami Test accuracy
value: 19.2
- type: accuracy
name: Naija Test accuracy
value: 26.6
- type: accuracy
name: Latvian Test accuracy
value: 69.9
- type: accuracy
name: Chinese Test accuracy
value: 30.3
- type: accuracy
name: Tagalog Test accuracy
value: 55.1
- type: accuracy
name: Bambara Test accuracy
value: 15.7
- type: accuracy
name: Lithuanian Test accuracy
value: 73.0
- type: accuracy
name: Galician Test accuracy
value: 67.5
- type: accuracy
name: Vietnamese Test accuracy
value: 60.7
- type: accuracy
name: Greek Test accuracy
value: 64.7
- type: accuracy
name: Catalan Test accuracy
value: 60.5
- type: accuracy
name: Czech Test accuracy
value: 75.4
- type: accuracy
name: Erzya Test accuracy
value: 27.3
- type: accuracy
name: Bhojpuri Test accuracy
value: 40.9
- type: accuracy
name: Thai Test accuracy
value: 53.7
- type: accuracy
name: Marathi Test accuracy
value: 68.7
- type: accuracy
name: Basque Test accuracy
value: 59.4
- type: accuracy
name: Slovak Test accuracy
value: 74.7
- type: accuracy
name: Kiche Test accuracy
value: 19.0
- type: accuracy
name: Yoruba Test accuracy
value: 14.9
- type: accuracy
name: Warlpiri Test accuracy
value: 18.6
- type: accuracy
name: Tamil Test accuracy
value: 63.0
- type: accuracy
name: Maltese Test accuracy
value: 15.1
- type: accuracy
name: Ancient Greek Test accuracy
value: 41.1
- type: accuracy
name: Icelandic Test accuracy
value: 61.6
- type: accuracy
name: Mbya Guarani Test accuracy
value: 20.3
- type: accuracy
name: Urdu Test accuracy
value: 57.4
- type: accuracy
name: Romanian Test accuracy
value: 68.4
- type: accuracy
name: Persian Test accuracy
value: 76.1
- type: accuracy
name: Apurina Test accuracy
value: 22.4
- type: accuracy
name: Japanese Test accuracy
value: 17.9
- type: accuracy
name: Hungarian Test accuracy
value: 61.1
- type: accuracy
name: Hindi Test accuracy
value: 64.1
- type: accuracy
name: Classical Chinese Test accuracy
value: 5.6
- type: accuracy
name: Komi Permyak Test accuracy
value: 30.9
- type: accuracy
name: Faroese Test accuracy
value: 54.4
- type: accuracy
name: Sanskrit Test accuracy
value: 4.9
- type: accuracy
name: Livvi Test accuracy
value: 40.3
- type: accuracy
name: Arabic Test accuracy
value: 75.9
- type: accuracy
name: Wolof Test accuracy
value: 14.6
- type: accuracy
name: Bulgarian Test accuracy
value: 75.3
- type: accuracy
name: Akuntsu Test accuracy
value: 10.5
- type: accuracy
name: Makurap Test accuracy
value: 2.1
- type: accuracy
name: Kangri Test accuracy
value: 29.2
- type: accuracy
name: Breton Test accuracy
value: 39.1
- type: accuracy
name: Telugu Test accuracy
value: 63.2
- type: accuracy
name: Cantonese Test accuracy
value: 30.1
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 27.7
- type: accuracy
name: Karelian Test accuracy
value: 44.2
- type: accuracy
name: Upper Sorbian Test accuracy
value: 54.6
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 58.8
- type: accuracy
name: Komi Zyrian Test accuracy
value: 28.7
- type: accuracy
name: Irish Test accuracy
value: 51.4
- type: accuracy
name: Nayini Test accuracy
value: 26.9
- type: accuracy
name: Munduruku Test accuracy
value: 7.0
- type: accuracy
name: Manx Test accuracy
value: 18.3
- type: accuracy
name: Skolt Sami Test accuracy
value: 25.9
- type: accuracy
name: Afrikaans Test accuracy
value: 62.5
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 18.3
- type: accuracy
name: Belarusian Test accuracy
value: 77.2
- type: accuracy
name: Serbian Test accuracy
value: 73.7
- type: accuracy
name: Moksha Test accuracy
value: 26.2
- type: accuracy
name: Western Armenian Test accuracy
value: 58.5
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 40.4
- type: accuracy
name: Khunsari Test accuracy
value: 29.7
- type: accuracy
name: Hebrew Test accuracy
value: 77.1
- type: accuracy
name: Uyghur Test accuracy
value: 56.2
- type: accuracy
name: Chukchi Test accuracy
value: 27.5
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Arabic
This model is part of our paper:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ar")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ar")
```
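For quick inference, the checkpoint can also be wrapped in a `token-classification` pipeline. The snippet below is a minimal sketch (the example sentence is arbitrary), grouping sub-word pieces back into whole words before printing the predicted tag for each:
```python
from transformers import pipeline

# Minimal inference sketch: tag an Arabic sentence with the fine-tuned checkpoint.
tagger = pipeline(
    "token-classification",
    model="wietsedv/xlm-roberta-base-ft-udpos28-ar",
    aggregation_strategy="simple",  # merge sub-word pieces into whole words
)

for word in tagger("القطة تجلس على السجادة"):
    print(word["word"], word["entity_group"])
```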
|
mohamed-illiyas/wav2vec-malayalam-checkpoint
|
mohamed-illiyas
| 2022-02-25T09:24:34Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec-malayalam-checkpoint
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec-malayalam-checkpoint
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6457
- Wer: 0.6608
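The card does not include usage code. As a minimal sketch (assuming 16 kHz mono audio, which is what XLS-R based checkpoints expect, and a hypothetical local file name), the model can be run through the `automatic-speech-recognition` pipeline:
```python
from transformers import pipeline

# Minimal ASR sketch: transcribe a local 16 kHz mono recording with this checkpoint.
# "sample_malayalam.wav" is a hypothetical file name, not part of the repository.
asr = pipeline(
    "automatic-speech-recognition",
    model="mohamed-illiyas/wav2vec-malayalam-checkpoint",
)
print(asr("sample_malayalam.wav")["text"])
```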
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 40
- num_epochs: 150
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6371 | 10.0 | 100 | 3.5200 | 1.0 |
| 3.3014 | 20.0 | 200 | 3.2092 | 1.0 |
| 1.2997 | 30.0 | 300 | 0.7134 | 0.8847 |
| 0.5078 | 40.0 | 400 | 0.5805 | 0.7841 |
| 0.3795 | 50.0 | 500 | 0.5604 | 0.7289 |
| 0.2809 | 60.0 | 600 | 0.5962 | 0.7055 |
| 0.2381 | 70.0 | 700 | 0.6099 | 0.6938 |
| 0.2046 | 80.0 | 800 | 0.6237 | 0.6862 |
| 0.1826 | 90.0 | 900 | 0.6204 | 0.6755 |
| 0.1627 | 100.0 | 1000 | 0.6335 | 0.6751 |
| 0.1453 | 110.0 | 1100 | 0.6446 | 0.6739 |
| 0.1359 | 120.0 | 1200 | 0.6277 | 0.6648 |
| 0.1274 | 130.0 | 1300 | 0.6356 | 0.6573 |
| 0.1189 | 140.0 | 1400 | 0.6417 | 0.6601 |
| 0.1146 | 150.0 | 1500 | 0.6457 | 0.6608 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
khavitidala/finetuned-indobartv2-id-su
|
khavitidala
| 2022-02-25T09:23:22Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"indogpt",
"indobenchmark",
"indonlg",
"id",
"arxiv:2104.08200",
"license:mit",
"autotrain_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: id
tags:
- indogpt
- indobenchmark
- indonlg
license: mit
inference: false
datasets:
- Indo4B+
---
# IndoBART-v2 Model fine-tuned version
Fine-tuned version of IndoBART-v2 for Indonesian-to-Sundanese (id->su) machine translation, using the default hyperparameters from the IndoBART paper.
by Ryan Abdurohman
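The card sets `inference: false` and does not show usage code. The sketch below only assumes that the generic `transformers` seq2seq API loads this checkpoint; IndoBART is normally used with the IndoNLG toolkit's `IndoNLGTokenizer`, which may be required instead of `AutoTokenizer`:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hedged sketch of Indonesian -> Sundanese translation with this checkpoint.
# The tokenizer choice is an assumption; the IndoNLG toolkit's IndoNLGTokenizer
# may be needed instead of the generic AutoTokenizer.
model_id = "khavitidala/finetuned-indobartv2-id-su"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Saya sedang belajar bahasa Sunda.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```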
# IndoBART-v2 Model
[IndoBART-v2](https://arxiv.org/abs/2104.08200) is a state-of-the-art language model for Indonesian based on the BART model. The pretrained model is trained using the BART training objective.
## All Pre-trained Models
| Model | #params | Training data |
|--------------------------------|--------------------------------|-----------------------------------|
| `indobenchmark/indobart-v2` | 132M | Indo4B-Plus (26 GB of text) |
## Authors
<b>IndoBART</b> was trained and evaluated by Samuel Cahyawijaya*, Genta Indra Winata*, Bryan Wilie*, Karissa Vincentio*, Xiaohong Li*, Adhiguna Kuncoro*, Sebastian Ruder, Zhi Yuan Lim, Syafri Bahar, Masayu Leylia Khodra, Ayu Purwarianti, Pascale Fung
## Citation
If you use our work, please cite:
```bibtex
@article{cahyawijaya2021indonlg,
title={IndoNLG: Benchmark and Resources for Evaluating Indonesian Natural Language Generation},
author={Cahyawijaya, Samuel and Winata, Genta Indra and Wilie, Bryan and Vincentio, Karissa and Li, Xiaohong and Kuncoro, Adhiguna and Ruder, Sebastian and Lim, Zhi Yuan and Bahar, Syafri and Khodra, Masayu Leylia and others},
journal={arXiv preprint arXiv:2104.08200},
year={2021}
}
```
|
anas-awadalla/roberta-base-few-shot-k-32-finetuned-squad-seed-2
|
anas-awadalla
| 2022-02-25T09:11:30Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-few-shot-k-32-finetuned-squad-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-few-shot-k-32-finetuned-squad-seed-2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
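As a minimal usage sketch (not part of the original card), the checkpoint can be queried with the `question-answering` pipeline; the question and context below are arbitrary examples:
```python
from transformers import pipeline

# Minimal extractive QA sketch for this few-shot fine-tuned checkpoint.
qa = pipeline(
    "question-answering",
    model="anas-awadalla/roberta-base-few-shot-k-32-finetuned-squad-seed-2",
)
result = qa(
    question="How many training examples were used?",
    context="The model was fine-tuned on SQuAD using only 32 training examples.",
)
print(result["answer"], result["score"])
```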
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/roberta-base-few-shot-k-16-finetuned-squad-seed-10
|
anas-awadalla
| 2022-02-25T08:37:34Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-few-shot-k-16-finetuned-squad-seed-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-few-shot-k-16-finetuned-squad-seed-10
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/roberta-base-few-shot-k-16-finetuned-squad-seed-8
|
anas-awadalla
| 2022-02-25T08:21:44Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-few-shot-k-16-finetuned-squad-seed-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-few-shot-k-16-finetuned-squad-seed-8
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
deepakvk/distilbert-base-uncased-distilled-squad-finetuned-squad
|
deepakvk
| 2022-02-25T08:04:27Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-distilled-squad-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-squad-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased-distilled-squad](https://huggingface.co/distilbert-base-uncased-distilled-squad) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.1
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
adresgezgini/Wav2Vec2-tr-AG-v1
|
adresgezgini
| 2022-02-25T08:02:34Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
processor = Wav2Vec2Processor.from_pretrained("adresgezgini/Wav2Vec-tr-AG-v1")
model = Wav2Vec2ForCTC.from_pretrained("adresgezgini/Wav2Vec-tr-AG-v1")
```
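As a hedged inference sketch (assuming `librosa` is available for decoding and resampling to the 16 kHz mono input that wav2vec2 expects), transcription looks like this:
```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Hedged inference sketch: transcribe one of the test recordings mentioned below.
# "ses1.mp3" stands in for a locally downloaded copy of the file.
processor = Wav2Vec2Processor.from_pretrained("adresgezgini/Wav2Vec-tr-AG-v1")
model = Wav2Vec2ForCTC.from_pretrained("adresgezgini/Wav2Vec-tr-AG-v1")

speech, _ = librosa.load("ses1.mp3", sr=16_000, mono=True)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```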
The audio files ses1.mp3 [1], ses2.mp3 [2], and ses3.mp3 [3] shared in the Files section were created by taking roughly 1 to 1.5 minute segments from open-source audiobook recordings. The model was tested on these recordings and the resulting WER values were recorded.
<div align="center">

| Audio | WER |
| :---: | :---: |
| SES1.mp3 | 0.17 |
| SES2.mp3 | 0.31 |
| SES3.mp3 | 0.20 |

</div>
[1][Sabahattin Ali - Çaydanlık | YT: Sesli Kitap Dünyası](https://www.youtube.com/watch?v=IHUfOpqw-8s)\
[2][Sabahattin Ali - Ses | YT: Sesli Kitap Dünyası](https://www.youtube.com/watch?v=XzX2wBjncOg)\
[3][Sabahattin Ali - Sıçra Köşk | YT: Sesli Kitap Dünyası](https://www.youtube.com/watch?v=SJwUaq0Nu9c)\
|
anas-awadalla/roberta-base-few-shot-k-16-finetuned-squad-seed-4
|
anas-awadalla
| 2022-02-25T07:47:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-few-shot-k-16-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-few-shot-k-16-finetuned-squad-seed-4
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ASCCCCCCCC/distilbert-base-multilingual-cased-amazon_zh_20000
|
ASCCCCCCCC
| 2022-02-25T07:33:20Z | 25 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-multilingual-cased-amazon_zh_20000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-amazon_zh_20000
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3031
- Accuracy: 0.4406
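The card does not document usage. A minimal sketch (the review text is an arbitrary example, and the returned label names depend on how the fine-tuning script mapped the classes) would be:
```python
from transformers import pipeline

# Minimal sketch: classify a Chinese product review with this checkpoint.
# The label names (e.g. LABEL_0, LABEL_1, ...) follow whatever mapping the
# fine-tuning script configured; the card does not document the classes.
classifier = pipeline(
    "text-classification",
    model="ASCCCCCCCC/distilbert-base-multilingual-cased-amazon_zh_20000",
)
print(classifier("这个产品质量很好，物流也很快。"))
```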
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.396 | 1.0 | 1250 | 1.3031 | 0.4406 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 1.18.3
- Tokenizers 0.10.3
|
anas-awadalla/roberta-base-few-shot-k-16-finetuned-squad-seed-2
|
anas-awadalla
| 2022-02-25T07:30:55Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-few-shot-k-16-finetuned-squad-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-few-shot-k-16-finetuned-squad-seed-2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/roberta-base-few-shot-k-16-finetuned-squad-seed-0
|
anas-awadalla
| 2022-02-25T07:13:59Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-few-shot-k-16-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-few-shot-k-16-finetuned-squad-seed-0
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-8
|
anas-awadalla
| 2022-02-25T06:39:41Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-8
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ASCCCCCCCC/distilbert-base-chinese-amazon_zh_20000
|
ASCCCCCCCC
| 2022-02-25T06:26:43Z | 30 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-chinese-amazon_zh_20000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-chinese-amazon_zh_20000
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1518
- Accuracy: 0.5092
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.196 | 1.0 | 1250 | 1.1518 | 0.5092 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 1.18.3
- Tokenizers 0.10.3
|
anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-4
|
anas-awadalla
| 2022-02-25T06:05:09Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/bert-base-uncased-few-shot-k-512-finetuned-squad-seed-10
|
anas-awadalla
| 2022-02-25T05:13:42Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-512-finetuned-squad-seed-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/bert-base-uncased-few-shot-k-512-finetuned-squad-seed-6
|
anas-awadalla
| 2022-02-25T04:42:31Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-512-finetuned-squad-seed-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-6
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
hfl/chinese-pert-large
|
hfl
| 2022-02-25T04:09:23Z | 61 | 10 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"bert",
"feature-extraction",
"zh",
"license:cc-by-nc-sa-4.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language:
- zh
license: "cc-by-nc-sa-4.0"
---
# Please use 'Bert' related functions to load this model!
Under construction...
Please visit our GitHub repo for more information: https://github.com/ymcui/PERT
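As a minimal sketch of that instruction (the example sentence is arbitrary), the checkpoint can be loaded with the BERT classes for feature extraction:
```python
import torch
from transformers import BertTokenizer, BertModel

# Minimal sketch: load PERT with the BERT classes and extract hidden states.
tokenizer = BertTokenizer.from_pretrained("hfl/chinese-pert-large")
model = BertModel.from_pretrained("hfl/chinese-pert-large")

inputs = tokenizer("你好，世界", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```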
|
anas-awadalla/bert-base-uncased-few-shot-k-512-finetuned-squad-seed-0
|
anas-awadalla
| 2022-02-25T03:55:46Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-512-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/bert-base-uncased-few-shot-k-256-finetuned-squad-seed-4
|
anas-awadalla
| 2022-02-25T02:55:57Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-256-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
shields/wav2vec2-base-20sec-timit-and-dementiabank
|
shields
| 2022-02-25T02:39:47Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-20sec-timit-and-dementiabank
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-20sec-timit-and-dementiabank
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4338
- Wer: 0.2313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6839 | 2.53 | 500 | 2.7287 | 1.0 |
| 0.8708 | 5.05 | 1000 | 0.5004 | 0.3490 |
| 0.2879 | 7.58 | 1500 | 0.4411 | 0.2872 |
| 0.1877 | 10.1 | 2000 | 0.4359 | 0.2594 |
| 0.1617 | 12.63 | 2500 | 0.4404 | 0.2492 |
| 0.1295 | 15.15 | 3000 | 0.4356 | 0.2418 |
| 0.1146 | 17.68 | 3500 | 0.4338 | 0.2313 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
anas-awadalla/bert-base-uncased-few-shot-k-128-finetuned-squad-seed-10
|
anas-awadalla
| 2022-02-25T02:11:47Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-128-finetuned-squad-seed-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Rattana/wav2vec2-thai-ASR
|
Rattana
| 2022-02-25T02:08:35Z | 22 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-thai-ASR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-thai-ASR
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6108
- Wer: 0.5636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.1123 | 2.65 | 400 | 3.3946 | 1.0002 |
| 1.5734 | 5.3 | 800 | 0.6881 | 0.7290 |
| 0.5934 | 7.94 | 1200 | 0.5789 | 0.6402 |
| 0.4059 | 10.59 | 1600 | 0.5496 | 0.5976 |
| 0.3136 | 13.24 | 2000 | 0.6109 | 0.5863 |
| 0.2546 | 15.89 | 2400 | 0.6113 | 0.5865 |
| 0.2184 | 18.54 | 2800 | 0.6108 | 0.5636 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
anas-awadalla/bert-base-uncased-few-shot-k-128-finetuned-squad-seed-6
|
anas-awadalla
| 2022-02-25T01:41:01Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-128-finetuned-squad-seed-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-6
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/bert-base-uncased-few-shot-k-128-finetuned-squad-seed-2
|
anas-awadalla
| 2022-02-25T01:13:01Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-128-finetuned-squad-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/bert-base-uncased-few-shot-k-64-finetuned-squad-seed-6
|
anas-awadalla
| 2022-02-25T00:11:33Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-64-finetuned-squad-seed-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-64-finetuned-squad-seed-6
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/bert-base-uncased-few-shot-k-32-finetuned-squad-seed-10
|
anas-awadalla
| 2022-02-24T23:09:57Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-32-finetuned-squad-seed-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/bert-base-uncased-few-shot-k-32-finetuned-squad-seed-6
|
anas-awadalla
| 2022-02-24T22:39:42Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-32-finetuned-squad-seed-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-6
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/bert-base-uncased-few-shot-k-32-finetuned-squad-seed-4
|
anas-awadalla
| 2022-02-24T22:24:38Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-32-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/bert-base-uncased-few-shot-k-32-finetuned-squad-seed-0
|
anas-awadalla
| 2022-02-24T21:54:26Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-32-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-8
|
anas-awadalla
| 2022-02-24T21:24:10Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-16-finetuned-squad-seed-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-8
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-6
|
anas-awadalla
| 2022-02-24T21:09:03Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-16-finetuned-squad-seed-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-6
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
damlab/HIV_PR_resist
|
damlab
| 2022-02-24T20:28:37Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: mit
---
# HIV_PR_resist model
## Table of Contents
- [Summary](#model-summary)
- [Model Description](#model-description)
- [Intended Uses & Limitations](#intended-uses-&-limitations)
- [How to Use](#how-to-use)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Training](#training)
- [Evaluation Results](#evaluation-results)
- [BibTeX Entry and Citation Info](#bibtex-entry-and-citation-info)
## Summary
The HIV-BERT-Protease-Resistance model was trained as a refinement of the HIV-BERT model (insert link) and serves to better predict whether an HIV protease sequence will be resistant to certain protease inhibitors. HIV-BERT is a model refined from the [ProtBert-BFD model](https://huggingface.co/Rostlab/prot_bert_bfd) to better fulfill HIV-centric tasks. This model was then trained using HIV protease sequences from the [Stanford HIV Genotype-Phenotype Database](https://hivdb.stanford.edu/pages/genotype-phenotype.html), allowing even more precise prediction of protease inhibitor resistance than the HIV-BERT model can provide.
## Model Description
The HIV-BERT-Protease-Resistance model is intended to predict the likelihood that an HIV protease sequence will be resistant to protease inhibitors. The protease gene is responsible for cleaving viral proteins into their active states, and as such is an ideal target for antiretroviral therapy. Annotation programs designed to predict and identify protease resistance using known mutations already exist, though with varied results. The HIV-BERT-Protease-Resistance model is designed to provide an alternative, NLP-based mechanism for predicting resistance mutations when provided with an HIV protease sequence.
## Intended Uses & Limitations
This tool can be used as a predictor of protease resistance mutations within an HIV genomic sequence. It should not be considered a clinical diagnostic tool.
## How to use
*Prediction example of protease sequences*
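Until the authors add their own example, here is a minimal sketch using the standard text-classification pipeline (the resistance label names are read from this model's config; the protease sequence below is only illustrative):

```python
from transformers import pipeline

# Load the classifier; it returns one score per protease-inhibitor resistance label.
predictor = pipeline("text-classification", model="damlab/HIV_PR_resist")

# Space-separated protease sequence, formatted the same way as the training data.
sequence = (
    "P Q I T L W Q R P L V T I K I G G Q L K E A L L D T G A D D T V L E E M N L "
    "P G R W K P K M I G G I G G F I K V R Q Y D Q I L I E I C G H K A I G T V L V "
    "G P T P V N I I G R N L L T Q I G C T L N F"
)

# return_all_scores=True yields the score for every resistance category.
print(predictor(sequence, return_all_scores=True))
```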
## Training Data
This model was trained using the [damlab/HIV-PI dataset](https://huggingface.co/datasets/damlab/HIV_PI) using the 0th fold. The dataset consists of 1959 sequences (approximately 99 tokens each) extracted from the Stanford HIV Genotype-Phenotype Database.
## Training Procedure
### Preprocessing
As with the [rostlab/Prot-bert-bfd model](https://huggingface.co/Rostlab/prot_bert_bfd), the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation.
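A minimal sketch of that preprocessing step (illustrative only; the 256-token chunking and the train/validation split are omitted):

```python
import re

def preprocess(sequence: str) -> str:
    """Map rare amino acids (U, Z, O, B) to X and space-separate the residues."""
    sequence = re.sub(r"[UZOB]", "X", sequence.upper())
    return " ".join(sequence)

# Example: the trailing B is replaced by X before tokenization.
print(preprocess("PQITLWQRPLVTIKIGGQLKEALLDTGADDTVLEB"))
```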
### Training
The [damlab/HIV-BERT model](https://huggingface.co/damlab/HIV_BERT) was used as the initial weights for an AutoModelForSequenceClassification. The model was trained with a learning rate of 1E-5, 50K warm-up steps, and a cosine_with_restarts learning-rate schedule, and training continued until 3 consecutive epochs did not improve the loss on the held-out dataset. As this is a multi-label classification task (a protein can be resistant to multiple drugs), the loss was calculated as the binary cross-entropy (BCE) for each category. The BCE was weighted by the inverse of the class ratio to compensate for the class imbalance.
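A minimal sketch of that weighted multi-label loss (not the authors' code; the label count and positive counts below are placeholders, and `pos_weight = negatives / positives` is one common way to weight BCE by the inverse class ratio):

```python
import torch
from torch import nn

n_samples = 1959                                          # sequences in the dataset (see Training Data)
pos_counts = torch.tensor([450.0, 610.0, 380.0, 520.0])   # hypothetical positives per drug label

# Up-weight the positive term of each label by its inverse class ratio.
pos_weight = (n_samples - pos_counts) / pos_counts
loss_fn = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

logits = torch.randn(8, 4)                                # model outputs for a batch of 8
targets = torch.randint(0, 2, (8, 4)).float()             # multi-hot resistance labels
print(loss_fn(logits, targets))
```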
## Evaluation Results
*Need to add*
## BibTeX Entry and Citation Info
[More Information Needed]
|
damlab/HIV_V3_bodysite
|
damlab
| 2022-02-24T19:18:26Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"dataset:damlab/HIV_V3_bodysite",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: mit
widget:
- text: "T R P N N N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C"
example_title: "V3 Macrophage"
- text: 'C T R P N N N T R K S I H I G P G R A F Y T T G Q I I G D I R Q A Y C'
example_title: "V3 T-cell"
datasets:
- damlab/HIV_V3_bodysite
metrics:
- accuracy
---
# Model Card for [HIV_V3_bodysite]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Summary](#model-summary)
- [Model Description](#model-description)
- [Intended Uses & Limitations](#intended-uses-&-limitations)
- [How to Use](#how-to-use)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Training](#training)
- [Evaluation Results](#evaluation-results)
- [BibTeX Entry and Citation Info](#bibtex-entry-and-citation-info)
## Summary
The HIV-BERT-Bodysite-Identification model was trained as a refinement of the HIV-BERT model (insert link) and serves to better predict the body site from which an HIV V3 loop sample was derived. HIV-BERT is a model refined from the ProtBert-BFD model (https://huggingface.co/Rostlab/prot_bert_bfd) to better fulfill HIV-centric tasks. This model was then trained using HIV V3 sequences from the Los Alamos HIV Sequence Database (https://www.hiv.lanl.gov/content/sequence/HIV/mainpage.html), allowing even more precise prediction of body site location than the HIV-BERT model can provide.
## Model Description
The HIV-BERT-Bodysite-Identification model is intended to predict the body site from which an HIV sequence was most likely derived. Because HIV infects immune cells, it uses them as a means of rapidly spreading throughout the body. Body site identification can therefore help determine where these HIV particles ultimately end up, which is useful when studying HIV treatment strategies. When provided with an HIV genomic sequence, the HIV-BERT-Bodysite-Identification model can predict which tissue it was derived from.
## Intended Uses & Limitations
This tool can be used as a predictor of which body site an HIV sample was derived from based on its genomic sequence. It should not be considered a clinical diagnostic tool.
This tool was trained using the Los Alamos HIV sequence dataset (https://www.hiv.lanl.gov/content/sequence/HIV/mainpage.html). Due to the sampling nature of this database, it is predominantly composed of subtype B sequences from North America and Europe, with only minor contributions from subtypes C, A, and D. No effort has yet been made to balance performance across these classes, so one should consider refining the model with additional sequences to perform well on non-B sequences.
## How to use
This model is able to predict the likely body site from a V3 sequence.
This may be used for surveillance of cells emerging from latent reservoirs.
Remember that a sequence can come from multiple sites; the categories are not mutually exclusive.
```python
from transformers import pipeline
predictor = pipeline("text-classification", model="damlab/HIV_V3_bodysite")
predictor(f"C T R P N N N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C")
[
[
{
"label": "periphery-tcell",
"score": 0.29097115993499756
},
{
"label": "periphery-monocyte",
"score": 0.014322502538561821
},
{
"label": "CNS",
"score": 0.06870711594820023
},
{
"label": "breast-milk",
"score": 0.002785981632769108
},
{
"label": "female-genitals",
"score": 0.024997007101774216
},
{
"label": "male-genitals",
"score": 0.01040483545511961
},
{
"label": "gastric",
"score": 0.06872137635946274
},
{
"label": "lung",
"score": 0.04432062804698944
},
{
"label": "organ",
"score": 0.47476938366889954
}
]
]
```
## Training Data
This model was trained using the damlab/HIV_V3_bodysite dataset using the 0th fold. The dataset consists of 5510 sequences (approximately 35 tokens each) extracted from the Los Alamos HIV Sequence database.
## Training Procedure
### Preprocessing
As with the rostlab/Prot-bert-bfd model, the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation.
### Training
The damlab/HIV-BERT model was used as the initial weights for an AutoModelForSequenceClassification. The model was trained with a learning rate of 1E-5, 50K warm-up steps, and a cosine_with_restarts learning-rate schedule, and training continued until 3 consecutive epochs did not improve the loss on the held-out dataset. As this is a multi-label classification task (a protein can be found in multiple sites), the loss was calculated as the binary cross-entropy (BCE) for each category. The BCE was weighted by the inverse of the class ratio to compensate for the class imbalance.
## Evaluation Results
*Need to add*
## BibTeX Entry and Citation Info
[More Information Needed]
|
vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k
|
vocab-transformers
| 2022-02-24T19:08:20Z | 93 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# dense_encoder-msmarco-distilbert-word2vec256k
This model is based on [msmarco-word2vec256000-distilbert-base-uncased](https://huggingface.co/nicoladecao/msmarco-word2vec256000-distilbert-base-uncased) with a 256k sized vocabulary initialized with word2vec.
It has been trained on MS MARCO using [MarginMSELoss](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/ms_marco/train_bi-encoder_margin-mse.py). See the train_script.py in this repository.
Performance:
- MS MARCO dev: - (MRR@10)
- TREC-DL 2019: 65.53 (nDCG@10)
- TREC-DL 2020: 67.42 (nDCG@10)
- Avg. on 4 BEIR datasets: 38.97
The word embedding matrix was frozen during training.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7858 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MarginMSELoss.MarginMSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 30,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 250, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
damlab/HIV_BERT
|
damlab
| 2022-02-24T18:59:51Z | 21 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"dataset:damlab/HIV_FLT",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: mit
datasets:
- damlab/HIV_FLT
metrics:
- accuracy
widget:
- text: 'C T R P N N N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C'
example_title: 'V3'
- text: 'M E P V D P R L E P W K H P G S Q P K T A C T N C Y C K K C C F H C Q V C F I T K A L G I S Y G R K K R R Q R R R A H Q N S Q T H Q A S L S K Q P T S Q P R G D P T G P K E S K K K V E R E T E T D P F D'
example_title: 'Tat'
- text: 'P Q I T L W Q R P L V T I K I G G Q L K E A L L D T G A D D T V L E E M N L P G R W K P K M I G G I G G F I K V R Q Y D Q I L I E I C G H K A I G T V L V G P T P V N I I G R N L L T Q I G C T L N F'
example_title: 'PR'
---
# HIV_BERT model
## Table of Contents
- [Summary](#model-summary)
- [Model Description](#model-description)
- [Intended Uses & Limitations](#intended-uses-&-limitations)
- [How to Use](#how-to-use)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Training](#training)
- [Evaluation Results](#evaluation-results)
- [BibTeX Entry and Citation Info](#bibtex-entry-and-citation-info)
## Summary
The HIV-BERT model was trained as a refinement of the [ProtBert-BFD model](https://huggingface.co/Rostlab/prot_bert_bfd) for HIV centric tasks. It was refined with whole viral genomes from the [Los Alamos HIV Sequence Database](https://www.hiv.lanl.gov/content/sequence/HIV/mainpage.html). This pretraining is important for HIV related tasks as the original BFD database contains few viral proteins making it sub-optimal when used as the basis for transfer learning tasks. This model and other related HIV prediction tasks have been published (link).
## Model Description
Like the original [ProtBert-BFD model](https://huggingface.co/Rostlab/prot_bert_bfd), this model encodes each amino acid as an individual token. This model was trained using Masked Language Modeling: a process in which a random set of tokens is masked and the model is trained to predict them. This model was trained using the damlab/hiv-flt dataset with 256 amino acid chunks and a 15% mask rate.
## Intended Uses & Limitations
As a masked language model, this tool can be used to predict expected mutations using a masking approach. This could be used, for example, to identify highly mutated sequences or sequencing artifacts. As a BERT model, this tool can also be used as the base for transfer learning. This pretrained model could be used as the base when developing HIV-specific classification tasks.
## How to use
As this is a BERT-style Masked Language learner, it can be used to determine the most likely amino acid at a masked position.
```python
from transformers import pipeline
unmasker = pipeline("fill-mask", model="damlab/HIV_BERT")
unmasker("C T R P N [MASK] N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C")
[
{
"score": 0.9581968188285828,
"token": 17,
"token_str": "N",
"sequence": "C T R P N N N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C"
},
{
"score": 0.022986575961112976,
"token": 12,
"token_str": "K",
"sequence": "C T R P N K N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C"
},
{
"score": 0.003997281193733215,
"token": 14,
"token_str": "D",
"sequence": "C T R P N D N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C"
},
{
"score": 0.003636382520198822,
"token": 15,
"token_str": "T",
"sequence": "C T R P N T N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C"
},
{
"score": 0.002701344434171915,
"token": 10,
"token_str": "S",
"sequence": "C T R P N S N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C"
}
]
```
## Training Data
The dataset [damlab/HIV_FLT](https://huggingface.co/datasets/damlab/HIV_FLT) was used to refine the original [rostlab/Prot-bert-bfd](https://huggingface.co/Rostlab/prot_bert_bfd). This dataset contains 1790 full HIV genomes from across the globe. When translated, these genomes contain approximately 3.9 million amino-acid tokens.
## Training Procedure
### Preprocessing
As with the [rostlab/Prot-bert-bfd](https://huggingface.co/Rostlab/prot_bert_bfd) model, the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation.
### Training
Training was performed with the HuggingFace training module using the MaskedLM data loader with a 15% masking rate. The learning rate was set at 1E-5 with 50K warm-up steps and a cosine_with_restarts learning-rate schedule, and training continued until 3 consecutive epochs did not improve the loss on the held-out dataset.
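A minimal sketch of that setup with the Hugging Face Trainer (illustrative only, not the authors' script; the single toy chunk and most hyperparameter values are placeholders):

```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("Rostlab/prot_bert_bfd")
model = AutoModelForMaskedLM.from_pretrained("Rostlab/prot_bert_bfd")

# Toy stand-in for the space-separated, 256-token protein chunks described above.
chunks = Dataset.from_dict({
    "text": ["C T R P N N N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C"]
}).map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=256),
       remove_columns=["text"])

# 15% of tokens are masked on the fly, as described in the text.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="hiv-bert-refined",
    learning_rate=1e-5,                        # illustrative values only
    warmup_steps=50_000,
    lr_scheduler_type="cosine_with_restarts",
    num_train_epochs=1,
    per_device_train_batch_size=4,
)

Trainer(model=model, args=args, data_collator=collator, train_dataset=chunks).train()
```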
## BibTeX Entry and Citation Info
[More Information Needed]
|
lilitket/wav2vec2-large-xls-r-300m-turkish-colab
|
lilitket
| 2022-02-24T18:57:13Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7126
- Wer: 0.8198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 120
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 6.7419 | 2.38 | 200 | 3.1913 | 1.0 |
| 3.0446 | 4.76 | 400 | 2.3247 | 1.0 |
| 1.3163 | 7.14 | 600 | 1.2629 | 0.9656 |
| 0.6058 | 9.52 | 800 | 1.2203 | 0.9343 |
| 0.3687 | 11.9 | 1000 | 1.2157 | 0.8849 |
| 0.2644 | 14.29 | 1200 | 1.3693 | 0.8992 |
| 0.2147 | 16.67 | 1400 | 1.3321 | 0.8623 |
| 0.1962 | 19.05 | 1600 | 1.3476 | 0.8886 |
| 0.1631 | 21.43 | 1800 | 1.3984 | 0.8755 |
| 0.15 | 23.81 | 2000 | 1.4602 | 0.8798 |
| 0.1311 | 26.19 | 2200 | 1.4727 | 0.8836 |
| 0.1174 | 28.57 | 2400 | 1.5257 | 0.8805 |
| 0.1155 | 30.95 | 2600 | 1.4697 | 0.9337 |
| 0.1046 | 33.33 | 2800 | 1.6076 | 0.8667 |
| 0.1063 | 35.71 | 3000 | 1.5012 | 0.8861 |
| 0.0996 | 38.1 | 3200 | 1.6204 | 0.8605 |
| 0.088 | 40.48 | 3400 | 1.4788 | 0.8586 |
| 0.089 | 42.86 | 3600 | 1.5983 | 0.8648 |
| 0.0805 | 45.24 | 3800 | 1.5045 | 0.8298 |
| 0.0718 | 47.62 | 4000 | 1.6361 | 0.8611 |
| 0.0718 | 50.0 | 4200 | 1.5088 | 0.8548 |
| 0.0649 | 52.38 | 4400 | 1.5491 | 0.8554 |
| 0.0685 | 54.76 | 4600 | 1.5939 | 0.8442 |
| 0.0588 | 57.14 | 4800 | 1.6321 | 0.8536 |
| 0.0591 | 59.52 | 5000 | 1.6468 | 0.8442 |
| 0.0529 | 61.9 | 5200 | 1.6086 | 0.8661 |
| 0.0482 | 64.29 | 5400 | 1.6622 | 0.8517 |
| 0.0396 | 66.67 | 5600 | 1.6191 | 0.8436 |
| 0.0463 | 69.05 | 5800 | 1.6231 | 0.8661 |
| 0.0415 | 71.43 | 6000 | 1.6874 | 0.8511 |
| 0.0383 | 73.81 | 6200 | 1.7054 | 0.8411 |
| 0.0411 | 76.19 | 6400 | 1.7073 | 0.8486 |
| 0.0346 | 78.57 | 6600 | 1.7137 | 0.8342 |
| 0.0318 | 80.95 | 6800 | 1.6523 | 0.8329 |
| 0.0299 | 83.33 | 7000 | 1.6893 | 0.8579 |
| 0.029 | 85.71 | 7200 | 1.7162 | 0.8429 |
| 0.025 | 88.1 | 7400 | 1.7589 | 0.8529 |
| 0.025 | 90.48 | 7600 | 1.7581 | 0.8398 |
| 0.0232 | 92.86 | 7800 | 1.8459 | 0.8442 |
| 0.0215 | 95.24 | 8000 | 1.7942 | 0.8448 |
| 0.0222 | 97.62 | 8200 | 1.6848 | 0.8442 |
| 0.0179 | 100.0 | 8400 | 1.7223 | 0.8298 |
| 0.0176 | 102.38 | 8600 | 1.7426 | 0.8404 |
| 0.016 | 104.76 | 8800 | 1.7501 | 0.8411 |
| 0.0153 | 107.14 | 9000 | 1.7185 | 0.8235 |
| 0.0136 | 109.52 | 9200 | 1.7250 | 0.8292 |
| 0.0117 | 111.9 | 9400 | 1.7159 | 0.8185 |
| 0.0123 | 114.29 | 9600 | 1.7135 | 0.8248 |
| 0.0121 | 116.67 | 9800 | 1.7189 | 0.8210 |
| 0.0116 | 119.05 | 10000 | 1.7126 | 0.8198 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
damlab/HIV_V3_Coreceptor
|
damlab
| 2022-02-24T18:34:26Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: mit
widget:
- text: 'C T R P N N N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C'
- text: 'C T R P N N N T R K S I H I G P G R A F Y T T G Q I I G D I R Q A Y C'
- text: 'C T R P N N N T R R S I R I G P G Q A F Y A T G D I I G D I R Q A H C'
- text: 'C G R P N N H R I K G L R I G P G R A F F A M G A I G G G E I R Q A H C'
---
# HIV_V3_coreceptor model
## Table of Contents
- [Summary](#model-summary)
- [Model Description](#model-description)
- [Intended Uses & Limitations](#intended-uses-&-limitations)
- [How to Use](#how-to-use)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Training](#training)
- [Evaluation Results](#evaluation-results)
- [BibTeX Entry and Citation Info](#bibtex-entry-and-citation-info)
## Summary
The HIV-BERT-Coreceptor model was trained as a refinement of the [HIV-BERT model](https://huggingface.co/damlab/HIV_BERT) and serves to better predict HIV V3 coreceptor tropism. HIV-BERT is a model refined from the [ProtBert-BFD model](https://huggingface.co/Rostlab/prot_bert_bfd) to better fulfill HIV-centric tasks. This model was then trained using HIV V3 sequences from the [Los Alamos HIV Sequence Database](https://www.hiv.lanl.gov/content/sequence/HIV/mainpage.html), allowing even more precise prediction of V3 coreceptor tropism than the HIV-BERT model can provide.
## Model Description
The HIV-BERT-Coreceptor model is intended to predict the co-receptor tropism of HIV from a segment of the envelope protein. These envelope proteins encapsulate the virus and interact with the host cell through the human CD4 receptor. HIV then requires the interaction of one of two co-receptors: CCR5 or CXCR4. The availability of these co-receptors on different cell types allows the virus to invade different areas of the body and evade antiretroviral therapy. The 3rd variable loop of the envelope protein, the V3 loop, is responsible for this interaction. Given a V3 loop sequence, the HIV-BERT-Coreceptor model will predict the likelihood of binding to each of these co-receptors.
## Intended Uses & Limitations
This tool can be used as a predictor of HIV tropism from the Env-V3 loop. It can natively recognize R5, X4, and dual-tropic viruses. It should not be considered a clinical diagnostic tool.
This tool was trained using the [Los Alamos HIV sequence dataset](https://www.hiv.lanl.gov/content/sequence/HIV/mainpage.html). Due to the sampling nature of this database, it is predominantly composed of subtype B sequences from North America and Europe, with only minor contributions from subtypes C, A, and D. No effort has yet been made to balance performance across these classes, so one should consider refining the model with additional sequences to perform well on non-B sequences.
## How to use
*Need to add*
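In the meantime, a minimal sketch with the standard text-classification pipeline (the co-receptor label names come from this model's config; the V3 sequence is one of the widget examples above):

```python
from transformers import pipeline

predictor = pipeline("text-classification", model="damlab/HIV_V3_Coreceptor")

# Space-separated V3 loop sequence, as in training.
v3 = "C T R P N N N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C"

# return_all_scores=True yields a score for each co-receptor label.
print(predictor(v3, return_all_scores=True))
```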
## Training Data
This model was trained using the [damlab/HIV_V3_coreceptor dataset](https://huggingface.co/datasets/damlab/HIV_V3_coreceptor) using the 0th fold. The dataset consists of 2935 V3 sequences (approximately 35 tokens each) extracted from the [Los Alamos HIV Sequence database](https://www.hiv.lanl.gov/content/sequence/HIV/mainpage.html).
## Training Procedure
### Preprocessing
As with the [rostlab/Prot-bert-bfd model](https://huggingface.co/Rostlab/prot_bert_bfd), the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation.
### Training
The [damlab/HIV-BERT model](https://huggingface.co/damlab/HIV_BERT) was used as the initial weights for an AutoModelForSequenceClassification. The model was trained with a learning rate of 1E-5, 50K warm-up steps, and a cosine_with_restarts learning-rate schedule, and training continued until 3 consecutive epochs did not improve the loss on the held-out dataset. As this is a multi-label classification task (a protein can bind to CCR5, CXCR4, neither, or both), the loss was calculated as the binary cross-entropy (BCE) for each category. The BCE was weighted by the inverse of the class ratio to compensate for the class imbalance.
## Evaluation Results
*Need to add*
## BibTeX Entry and Citation Info
[More Information Needed]
|
aypan17/distilgpt2-imdb
|
aypan17
| 2022-02-24T18:33:38Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-imdb
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the [imdb](https://www.kaggle.com/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews) dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
nateraw/keras-dummy-sequential-demo-with-card
|
nateraw
| 2022-02-24T18:18:08Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
## Training Metrics
Model history needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
anantoj/wav2vec2-large-xlsr-53-adult-child-cls
|
anantoj
| 2022-02-24T15:59:19Z | 13 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: wav2vec2-xls-r-300m-adult-child-cls
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-adult-child-cls
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1755
- Accuracy: 0.9432
- F1: 0.9472
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.368 | 1.0 | 383 | 0.2560 | 0.9072 | 0.9126 |
| 0.2013 | 2.0 | 766 | 0.1959 | 0.9321 | 0.9362 |
| 0.22 | 3.0 | 1149 | 0.1755 | 0.9432 | 0.9472 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jj-co/sbert-feature_extraction
|
jj-co
| 2022-02-24T15:53:14Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
license: apache-2.0
---
|
lilitket/wav2vec2-large-xls-r-armenian-colab
|
lilitket
| 2022-02-24T14:51:52Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-armenian-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-armenian-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
izzy-lazerson/wav2vec2-base-timit-demo-colab
|
izzy-lazerson
| 2022-02-24T13:44:39Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4545
- Wer: 0.3450
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3801 | 4.0 | 500 | 1.1501 | 0.8820 |
| 0.561 | 8.0 | 1000 | 0.4583 | 0.4211 |
| 0.2198 | 12.0 | 1500 | 0.4467 | 0.3997 |
| 0.1255 | 16.0 | 2000 | 0.4390 | 0.3677 |
| 0.0862 | 20.0 | 2500 | 0.4934 | 0.3603 |
| 0.0617 | 24.0 | 3000 | 0.4641 | 0.3549 |
| 0.0465 | 28.0 | 3500 | 0.4545 | 0.3450 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
debjyoti007/new_doc_classifier
|
debjyoti007
| 2022-02-24T13:22:54Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
This model has been trained to classify text by domain. It is currently trained on a relatively small amount of data covering 3 domains: "sports", "healthcare", and "financial". Label_0 represents "financial", Label_1 represents "healthcare", and Label_2 represents "sports". I plan to train it on more domains and more data soon, which should further improve its accuracy.
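A quick way to try it (a minimal sketch; the `LABEL_0`/`LABEL_1`/`LABEL_2` names follow the mapping described above, and the example sentence is made up):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="debjyoti007/new_doc_classifier")

# Mapping from the description above: Label_0 = financial, Label_1 = healthcare, Label_2 = sports.
id2domain = {"LABEL_0": "financial", "LABEL_1": "healthcare", "LABEL_2": "sports"}

pred = classifier("The central bank raised interest rates by 50 basis points.")[0]
print(id2domain.get(pred["label"], pred["label"]), round(pred["score"], 3))
```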
|
shiromart/distilbert-base-uncased-finetuned-squad
|
shiromart
| 2022-02-24T13:20:12Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: shiromart/distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# shiromart/distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9821
- Validation Loss: 1.1179
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.5135 | 1.1688 | 0 |
| 0.9821 | 1.1179 | 1 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.6.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
juanhebert/wav2vec2-indonesia
|
juanhebert
| 2022-02-24T12:34:31Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-indonesia
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-indonesia
This model is a fine-tuned version of [juanhebert/wav2vec2-indonesia](https://huggingface.co/juanhebert/wav2vec2-indonesia) on the commonvoice "id" dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0727
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 5
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 2.8744 | 0.68 | 200 | 3.0301 | 1.0 |
| 2.868 | 1.36 | 400 | 3.0727 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Krystalan/mdialbart_zh
|
Krystalan
| 2022-02-24T12:11:13Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"arxiv:2202.05599",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
license: cc-by-nc-sa-4.0
---
## mDialBART: A Cross-Lingual Dialogue Summarization Model
This model is introduced by [*ClidSum: A Benchmark Dataset for Cross-Lingual Dialogue Summarization*](https://arxiv.org/abs/2202.05599).
|
cammy/t5-base-finetuned-weaksup-1000
|
cammy
| 2022-02-24T10:26:36Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-base-finetuned-weaksup-1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-weaksup-1000
This model is a fine-tuned version of [cammy/t5-base-finetuned-weaksup-1000](https://huggingface.co/cammy/t5-base-finetuned-weaksup-1000) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6699
- Rouge1: 22.2079
- Rouge2: 9.54
- Rougel: 19.9593
- Rougelsum: 20.2524
- Gen Len: 18.17
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 1.6257 | 1.0 | 1000 | 1.6699 | 22.2079 | 9.54 | 19.9593 | 20.2524 | 18.17 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
moshew/minylm-L3-aug-sst2-distilled
|
moshew
| 2022-02-24T09:50:53Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
{'test_accuracy': 0.911697247706422,
'test_loss': 0.24090610444545746,
'test_runtime': 0.4372,
'test_samples_per_second': 1994.475,
'test_steps_per_second': 16.011}
|
aypan17/roberta-base-imdb
|
aypan17
| 2022-02-24T07:33:44Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: mit
---
TrainingArgs:
lr=2e-5,
train-batch-size=16,
eval-batch-size=16,
num-train-epochs=5,
weight-decay=0.01,
|
hfl/english-pert-large
|
hfl
| 2022-02-24T02:58:41Z | 31 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"bert",
"feature-extraction",
"en",
"license:cc-by-nc-sa-4.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language:
- en
license: "cc-by-nc-sa-4.0"
---
# Please use 'Bert' related functions to load this model!
# ALL English models are UNCASED (lowercase=True)
Under construction...
Please visit our GitHub repo for more information: https://github.com/ymcui/PERT
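For example, a minimal loading sketch with the BERT classes (remember the English models are uncased):

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("hfl/english-pert-large")
model = BertModel.from_pretrained("hfl/english-pert-large")

# Feature extraction: contextual token embeddings from the last layer.
inputs = tokenizer("PERT loads with the plain BERT classes.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```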
|
jaketae/hifigan-lj-v1
|
jaketae
| 2022-02-23T23:22:01Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hifigan",
"feature-extraction",
"audio",
"text-to-speech",
"custom_code",
"en",
"dataset:ljspeech",
"arxiv:2010.05646",
"region:us"
] |
text-to-speech
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- ljspeech
tags:
- audio
- text-to-speech
---
# HiFi-GAN
[HiFi-GAN](https://arxiv.org/abs/2010.05646) vocoder trained on the [LJ Speech dataset](https://keithito.com/LJ-Speech-Dataset/). The modeling code is based on the [official implementation](https://github.com/jik876/hifi-gan) and the [fairseq adaptation](https://github.com/pytorch/fairseq).
## Usage
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("jaketae/hifigan-lj-v1", trust_remote_code=True)
```
|
PhilSad/gpt-scp-neo-125M
|
PhilSad
| 2022-02-23T22:41:55Z | 19 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: output_gptneo125-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output_gptneo125-2
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Ayham/roberta_bert_summarization_cnn_dailymail
|
Ayham
| 2022-02-23T22:17:54Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: roberta_bert_summarization_cnn_dailymail
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_bert_summarization_cnn_dailymail
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
MarcBrun/ixambert-finetuned-squad-eu-en
|
MarcBrun
| 2022-02-23T20:25:49Z | 44 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"en",
"es",
"eu",
"dataset:squad",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:04Z |
---
language:
- en
- es
- eu
datasets:
- squad
widget:
- text: "When was Florence Nightingale born?"
context: "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820."
example_title: "English"
- text: "¿Por qué provincias pasa el Tajo?"
context: "El Tajo es el río más largo de la península ibérica, a la que atraviesa en su parte central, siguiendo un rumbo este-oeste, con una leve inclinación hacia el suroeste, que se acentúa cuando llega a Portugal, donde recibe el nombre de Tejo.
Nace en los montes Universales, en la sierra de Albarracín, sobre la rama occidental del sistema Ibérico y, después de recorrer 1007 km, llega al océano Atlántico en la ciudad de Lisboa. En su desembocadura forma el estuario del mar de la Paja, en el que vierte un caudal medio de 456 m³/s. En sus primeros 816 km atraviesa España, donde discurre por cuatro comunidades autónomas (Aragón, Castilla-La Mancha, Madrid y Extremadura) y un total de seis provincias (Teruel, Guadalajara, Cuenca, Madrid, Toledo y Cáceres)."
example_title: "Español"
- text: "Zer beste izenak ditu Tartalo?"
context: "Tartalo euskal mitologiako izaki begibakar artzain erraldoia da. Tartalo izena zenbait euskal hizkeratan herskari-bustidurarekin ahoskatu ohi denez, horrelaxe ere idazten da batzuetan: Ttarttalo. Euskal Herriko zenbait tokitan, Torto edo Anxo ere esaten diote."
example_title: "Euskara"
---
# ixambert-base-cased finetuned for QA
This is a basic implementation of the multilingual model ["ixambert-base-cased"](https://huggingface.co/ixa-ehu/ixambert-base-cased), fine-tuned on SQuAD v1.1 and an experimental version of SQuAD v1.1 in Basque (1/3 the size of the original SQuAD v1.1), which is able to answer basic factual questions in English, Spanish, and Basque.
## Overview
* **Language model:** ixambert-base-cased
* **Languages:** English, Spanish and Basque
* **Downstream task:** Extractive QA
* **Training data:** SQuAD v1.1 + experimental SQuAD1.1 in Basque
* **Eval data:** SQuAD v1.1 + experimental SQuAD1.1 in Basque
* **Infrastructure:** 1x GeForce RTX 2080
## Outputs
The model outputs the answer to the question, the start and end positions of the answer in the original context, and a score for the probability that the returned span of text is the correct answer. For example:
```python
{'score': 0.9667195081710815, 'start': 101, 'end': 105, 'answer': '1820'}
```
## How to use
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "MarcBrun/ixambert-finetuned-squad-eu-en"
# To get predictions
context = "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820"
question = "When was Florence Nightingale born?"
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)
pred = qa(question=question,context=context)
# To load the model and tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Hyperparameters
```
batch_size = 8
n_epochs = 3
learning_rate = 2e-5
optimizer = AdamW
lr_schedule = linear
max_seq_len = 384
doc_stride = 128
```
|
sw005320/Shinji_Watanabe_laborotv_asr_train_blstm
|
sw005320
| 2022-02-23T20:25:19Z | 4 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"jp",
"dataset:laborotv",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: jp
datasets:
- laborotv
license: cc-by-4.0
---
## ESPnet2 ASR model
### `sw005320/Shinji_Watanabe_laborotv_asr_train_blstm`
This model was trained by Shinji Watanabe using laborotv recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 9963fc53747c26417023546d3449e92884f13be0
pip install -e .
cd egs2/laborotv/asr1
./run.sh --skip_data_prep false --skip_train true --download_model sw005320/Shinji_Watanabe_laborotv_asr_train_blstm
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Fri May 14 08:32:17 EDT 2021`
- python version: `3.8.5 (default, Sep 4 2020, 07:30:14) [GCC 7.3.0]`
- espnet version: `espnet 0.9.9`
- pytorch version: `pytorch 1.7.1`
- Git hash: `8c580e3da5d8a308ccdab104fdc29de114e56c60`
- Commit date: `Wed May 5 13:26:08 2021 -0400`
## asr_train_asr_rnn_raw_jp_char_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_jp_char_valid.loss.ave_asr_model_valid.acc.ave/dev|12000|12000|36.1|63.9|0.0|0.0|63.9|63.9|
|decode_asr_lm_lm_train_lm_jp_char_valid.loss.ave_asr_model_valid.acc.ave/dev_4k|3971|3971|41.7|58.3|0.0|0.0|58.3|58.3|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_jp_char_valid.loss.ave_asr_model_valid.acc.ave/dev|12000|273004|89.3|6.3|4.4|3.0|13.7|63.9|
|decode_asr_lm_lm_train_lm_jp_char_valid.loss.ave_asr_model_valid.acc.ave/dev_4k|3971|98424|91.9|4.7|3.3|2.4|10.5|58.3|
|decode_asr_lm_lm_train_lm_jp_char_valid.loss.ave_asr_model_valid.acc.ave/tedx-jp-10k|10000|190568|0.0|0.0|100.0|0.0|100.0|100.0|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_rnn.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_rnn_raw_jp_char_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 40852
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 8
patience: 3
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 1
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
detect_anomaly: false
pretrain_path: null
init_param: []
freeze_param: []
num_iters_per_epoch: null
batch_size: 128
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_jp_char_sp/train/speech_shape
- exp/asr_stats_raw_jp_char_sp/train/text_shape.char
valid_shape_file:
- exp/asr_stats_raw_jp_char_sp/valid/speech_shape
- exp/asr_stats_raw_jp_char_sp/valid/text_shape.char
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_nodev_sp/wav.scp
- speech
- sound
- - dump/raw/train_nodev_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev_4k/wav.scp
- speech
- sound
- - dump/raw/dev_4k/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adadelta
optim_conf:
lr: 1.0
rho: 0.95
eps: 1.0e-08
weight_decay: 0
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- い
- の
- で
- て
- と
- し
- た
- す
- に
- な
- が
- っ
- ま
- か
- う
- は
- る
- ん
- を
- こ
- れ
- も
- ら
- り
- さ
- ー
- あ
- そ
- く
- だ
- き
- け
- よ
- ど
- ン
- ね
- ち
- お
- え
- や
- ス
- 人
- 一
- わ
- 日
- つ
- 十
- め
- イ
- ト
- ゃ
- 大
- ル
- じ
- 今
- み
- 二
- ょ
- ッ
- せ
- ラ
- 中
- ろ
- リ
- ク
- 見
- 思
- 事
- 出
- 分
- 感
- 時
- ア
- ご
- 三
- 本
- 方
- 上
- ば
- 行
- 者
- タ
- ロ
- 生
- 気
- 間
- コ
- 年
- 前
- 言
- ほ
- シ
- カ
- ず
- 自
- 入
- マ
- レ
- 国
- 手
- 子
- 会
- 五
- 染
- メ
- 新
- プ
- ナ
- 何
- ド
- 四
- 場
- ジ
- チ
- ウ
- フ
- 後
- 合
- 月
- 対
- 地
- 東
- 当
- バ
- 体
- 全
- べ
- 回
- 目
- テ
- 性
- 最
- 発
- 部
- 百
- 先
- 動
- げ
- 私
- 続
- 高
- ャ
- 作
- 来
- パ
- 取
- キ
- グ
- 家
- 的
- 業
- 所
- オ
- 度
- 長
- 雨
- ム
- 食
- 実
- 内
- 話
- 開
- ぐ
- 下
- 京
- 九
- 六
- 多
- 持
- ニ
- 関
- サ
- 県
- 代
- 学
- 状
- 明
- 七
- 八
- デ
- 市
- 意
- 理
- ュ
- 千
- 田
- へ
- ハ
- ぱ
- 水
- 数
- 物
- 現
- び
- ビ
- 使
- ブ
- 外
- 通
- 心
- 知
- 要
- 番
- 変
- 用
- 以
- 店
- 山
- ミ
- 立
- 力
- 女
- 金
- 確
- 定
- ィ
- 々
- 都
- 員
- ざ
- 型
- 小
- 考
- ピ
- 選
- 強
- 野
- 近
- 結
- ぶ
- 安
- 検
- 表
- 川
- 初
- ダ
- エ
- 受
- 世
- 同
- ツ
- ズ
- 味
- ふ
- 不
- 込
- 認
- ホ
- 報
- 予
- 風
- 査
- 警
- ポ
- 向
- 海
- む
- ョ
- 道
- 社
- 北
- 活
- 名
- 切
- 少
- 男
- ワ
- ひ
- 決
- 連
- 様
- 聞
- 重
- 万
- 解
- ぎ
- 始
- 調
- 症
- 戦
- 化
- ボ
- 面
- 相
- 広
- 付
- 問
- セ
- ベ
- 増
- 政
- 島
- 僕
- モ
- 期
- 車
- 経
- 特
- ネ
- 策
- ケ
- 楽
- 違
- 議
- 能
- 害
- 必
- 止
- 界
- 屋
- 伝
- 組
- 常
- 次
- 民
- ガ
- 加
- 再
- 元
- 態
- 題
- 降
- 機
- 週
- 指
- 仕
- 円
- 勝
- 影
- 校
- 正
- 点
- 集
- 流
- 書
- 引
- 情
- 院
- 主
- 皆
- 法
- 急
- 客
- 台
- 難
- 木
- 料
- 身
- 起
- 疑
- 進
- 成
- 空
- 応
- 真
- 口
- 品
- 防
- 在
- 況
- 教
- 保
- ェ
- 原
- 好
- 病
- 着
- 色
- 画
- 運
- 半
- 務
- 果
- ぜ
- 夜
- 件
- 朝
- 然
- 直
- 過
- ペ
- 医
- 別
- 置
- 俺
- ぞ
- 可
- 制
- 葉
- 無
- 設
- 温
- ゆ
- 死
- 療
- 線
- 住
- 早
- ソ
- 夫
- 注
- 判
- 呼
- 公
- 信
- 治
- 容
- 電
- 待
- 響
- 例
- 午
- 察
- 想
- 支
- 落
- 府
- 和
- 配
- 歳
- 打
- 休
- 売
- 村
- 親
- 残
- ギ
- 段
- 乗
- 去
- 平
- 転
- 際
- 終
- 天
- 足
- 形
- 張
- 白
- 記
- 位
- 利
- 側
- 非
- 観
- 井
- 土
- 美
- 被
- 送
- 球
- 総
- 第
- 声
- 映
- 宅
- ノ
- 係
- ゴ
- 熱
- 願
- 断
- 神
- 火
- T
- 営
- 材
- 更
- 西
- 藤
- 文
- 構
- 光
- 消
- 母
- 産
- 投
- 帰
- 州
- 飲
- 殺
- 象
- 拡
- 町
- 接
- 離
- 割
- 党
- 示
- 君
- 有
- 頂
- 辺
- 減
- N
- 悪
- 優
- 職
- 局
- 除
- 緒
- S
- 超
- 歌
- 得
- 南
- 備
- ヒ
- ぼ
- 返
- 戻
- 歩
- 済
- 父
- づ
- 演
- 避
- 由
- 達
- R
- ァ
- 介
- 試
- 昨
- 音
- 収
- 彼
- 交
- 亡
- 限
- 反
- 街
- 像
- 参
- 園
- 役
- 門
- 統
- 育
- 岡
- ザ
- 命
- 族
- 夏
- ヤ
- 工
- 路
- 量
- 買
- 速
- 飛
- 誰
- 肉
- 験
- 太
- 働
- 区
- 頭
- 士
- 字
- 顔
- 官
- 域
- 若
- 追
- 施
- 姿
- 花
- 危
- A
- 石
- 災
- 説
- C
- 師
- 愛
- 絶
- 守
- o
- 計
- 覚
- 移
- 横
- 突
- 各
- 告
- 焼
- 団
- 激
- 曜
- 種
- 階
- 香
- 専
- 負
- 領
- 基
- 求
- 黒
- 周
- 渡
- 谷
- 放
- 緊
- 供
- 援
- %
- 曲
- 護
- P
- 任
- 語
- 改
- 犯
- 覧
- 省
- 共
- 差
- 船
- 両
- 暑
- 赤
- 倍
- 雲
- 視
- 失
- 個
- 患
- 挙
- ゲ
- 勢
- 寄
- 古
- 毎
- 念
- 旅
- 触
- m
- 深
- 細
- 捕
- 戒
- 習
- 式
- 助
- ・
- 録
- 撮
- 崎
- 波
- 証
- 補
- 冷
- 普
- 登
- 撃
- 規
- 識
- 室
- 比
- 効
- 低
- 提
- 請
- 玉
- 館
- 談
- 素
- 福
- 格
- 臣
- 越
- 写
- 振
- 苦
- 存
- O
- 質
- 首
- 権
- 戸
- 険
- 技
- 紹
- 申
- 厳
- 密
- I
- 完
- ヨ
- 頼
- 宣
- 捜
- 案
- 囲
- 準
- 器
- 担
- 単
- 座
- 号
- 根
- 復
- 押
- ぁ
- 建
- 具
- 氏
- 約
- ぬ
- 抜
- 恐
- 走
- 松
- 術
- 我
- 宮
- 帯
- 訪
- 研
- 企
- 答
- ォ
- 軍
- 届
- 馬
- 港
- 鮮
- 協
- 青
- 厚
- 商
- 阪
- 吉
- 森
- 資
- 宿
- 庁
- 婚
- 程
- 菜
- 景
- 率
- 席
- 含
- 良
- 健
- 梅
- 薬
- 究
- 類
- 満
- 舞
- 丈
- 論
- 末
- 給
- 軽
- 奥
- 王
- 裁
- 科
- 砂
- 駅
- 佐
- 倒
- 遺
- 陽
- 城
- ゅ
- 晴
- 友
- 逮
- 息
- 練
- 整
- 迎
- 笑
- 郎
- 労
- 江
- 逃
- 米
- 境
- !
- 極
- 逆
- 河
- 暮
- 粛
- E
- 模
- 破
- 児
- 右
- 与
- 頑
- ゼ
- 費
- 久
- 抗
- 圧
- 韓
- 橋
- 武
- 罪
- 針
- 酒
- G
- 継
- 血
- 居
- 並
- 読
- 齢
- 価
- 授
- 芸
- 値
- 派
- 洗
- 史
- 池
- 沖
- 義
- 独
- 弁
- 頃
- 筋
- 奈
- 描
- 左
- 余
- 混
- 縄
- 盛
- K
- 銀
- 故
- 望
- 他
- 延
- 億
- 庭
- e
- 紙
- 管
- 妻
- 未
- 丸
- 背
- 散
- 絵
- ユ
- 底
- 造
- 幕
- 従
- 訴
- 製
- 爆
- 探
- 導
- 茶
- 裏
- 伸
- 委
- 簡
- 油
- 布
- 退
- 仲
- 毒
- 歴
- 怖
- 詳
- 因
- ヘ
- 救
- 遅
- 富
- 沢
- 額
- 絡
- 迫
- 展
- 浜
- 甘
- 枚
- 寝
- 遠
- 積
- 将
- 暴
- ぇ
- 図
- 催
- 閉
- 吸
- 販
- 嫌
- 瞬
- 異
- 傷
- 測
- 熊
- 伊
- 懸
- 洋
- 秋
- 春
- D
- 魚
- 袋
- 隠
- 争
- 幸
- 算
- 挑
- 緩
- H
- 星
- 短
- 岩
- 飯
- 遊
- 詰
- 巡
- 昔
- 養
- 豆
- 環
- 敗
- 診
- 坂
- 票
- 勤
- 巻
- 昼
- 許
- 刻
- 困
- 便
- 静
- 娘
- 服
- 床
- 課
- 衛
- 攻
- 板
- 婦
- 聴
- 弱
- 掛
- 雷
- M
- 換
- 司
- 秘
- 抑
- 角
- 濃
- 農
- 鉄
- 片
- 節
- 夢
- 崩
- 伺
- 塩
- 甲
- 津
- 装
- 里
- 志
- 華
- 停
- ぽ
- 骨
- 徴
- 監
- J
- 及
- 候
- 処
- 植
- 悩
- B
- 尾
- 伴
- 致
- 豊
- 痛
- 摘
- 印
- V
- 夕
- 払
- 踏
- 陸
- 徹
- 林
- 浮
- 修
- 折
- 岸
- 忘
- 壊
- 震
- 列
- 菅
- 責
- 繰
- 倉
- 幅
- 討
- 瀬
- 替
- 羽
- 喜
- ぷ
- 康
- 距
- 刺
- 猛
- 級
- 評
- 適
- 卵
- 端
- 泊
- 草
- 条
- 巨
- 推
- 幹
- 織
- 般
- 豪
- 標
- 財
- 述
- 驚
- 謝
- 隊
- 臨
- 吹
- 揚
- 盗
- 鳥
- 刑
- 依
- 興
- 跡
- 煮
- 泉
- 庫
- 範
- 狙
- 途
- L
- 賞
- 納
- 訳
- 腕
- 障
- 埼
- 束
- 辞
- 粉
- 舗
- 渋
- 旬
- 固
- 房
- 酸
- 順
- 液
- 抱
- 寺
- 棋
- 鹿
- 拠
- 礼
- 兄
- 清
- 留
- 老
- 脳
- 雑
- 牛
- 湿
- 惑
- 隣
- 貴
- 兵
- 載
- 税
- 複
- 躍
- 競
- k
- ,
- 携
- 怒
- 紀
- 浸
- 源
- 弾
- 輩
- ゾ
- 禁
- 御
- 湯
- 蔵
- 栄
- 炎
- 秒
- 握
- 否
- 敷
- 駄
- 似
- 希
- 閣
- 傾
- 壁
- 欲
- 沿
- 薄
- 編
- i
- 窓
- 奪
- 盤
- 群
- 皮
- a
- 劇
- 敵
- 借
- 獲
- 精
- 魔
- 隔
- 丁
- 猫
- 包
- 維
- 脱
- 系
- 陰
- 縮
- 犬
- 批
- 等
- 航
- 督
- 聖
- 乱
- 宇
- 濫
- 箱
- 干
- 氾
- 了
- 賀
- 永
- 迷
- 審
- 脚
- 毛
- 締
- 免
- 徒
- 塁
- 稿
- 析
- 恵
- u
- 律
- 恋
- 徳
- 看
- 控
- 札
- 堂
- 糖
- 竹
- 弟
- 互
- U
- 採
- c
- 蒸
- 鳴
- 央
- 勉
- 雪
- 胸
- 豚
- 慣
- 捨
- 属
- 漁
- 磨
- 欠
- 就
- 臓
- 傘
- 衝
- 露
- 襲
- 操
- 副
- r
- 層
- 麻
- 腹
- 英
- 殿
- 索
- 肺
- 那
- 闘
- 季
- 疫
- 黄
- 誕
- 募
- W
- 衣
- 魅
- 駆
- 築
- 乾
- 克
- 飼
- 裕
- 浴
- 氷
- 秀
- 浦
- 為
- 令
- 博
- 純
- 慢
- 到
- 輪
- 穴
- 照
- 漢
- 陣
- 冬
- 善
- 才
- 辛
- 眠
- 寿
- 邪
- 繁
- 荷
- 汚
- 樹
- 冠
- 功
- Y
- 快
- 講
- 凍
- 暗
- 悲
- ぴ
- 至
- 乳
- 疲
- 肩
- 矢
- t
- 昭
- 姉
- 鈴
- 菌
- 寒
- 償
- 輝
- 歯
- 捉
- 融
- 誘
- 呂
- 圏
- 均
- 泣
- 澤
- 之
- 仮
- 浅
- 創
- 祭
- 鎖
- 奇
- 荒
- 則
- 汁
- 封
- 湾
- 俳
- 措
- 枝
- 咲
- 怪
- 宝
- 滞
- 徐
- 昇
- 柄
- 句
- 覆
- 耳
- 占
- 雇
- 購
- 削
- 腰
- 遣
- 透
- 尻
- 糸
- 奏
- g
- 胞
- 撲
- 塗
- 廃
- 汗
- 珍
- 損
- 株
- 也
- 卒
- 燃
- 揺
- 尽
- 兆
- 掲
- 革
- 。
- 諸
- 柳
- ヴ
- F
- 択
- 慎
- 烈
- 飾
- 髪
- 射
- 憶
- 幼
- 署
- 衆
- 畑
- 妙
- 緑
- 肌
- 炒
- 匠
- 埋
- 充
- 杉
- 軒
- n
- 扱
- 菓
- 竜
- 騒
- 虫
- 皇
- 漫
- 剤
- 桜
- 承
- 添
- 翌
- 掃
- 麺
- 梨
- 契
- 棒
- 随
- 脂
- 訓
- 儀
- 岐
- 童
- 旧
- 涼
- 硬
- 努
- 溶
- 杯
- 慮
- 託
- 泳
- 狭
- 較
- ▁
- 暖
- 湖
- 欺
- 銃
- 謎
- 脇
- 脅
- 匹
- 勇
- 輸
- 略
- 酢
- 貼
- 詐
- 妹
- 潮
- 僚
- ヌ
- 柔
- 憲
- 漬
- 悔
- 駐
- 誤
- 踊
- 鍵
- 潜
- 鑑
- 縁
- 趣
- 雄
- 旦
- 鍋
- 炊
- 雰
- 沼
- 遭
- 誇
- 貸
- 搬
- 絞
- 偽
- 鶏
- 麦
- 斜
- 滅
- 晩
- 鼻
- 彩
- 宙
- 筆
- 披
- 腐
- 核
- 芝
- 摩
- 恩
- 勧
- 勘
- 績
- 焦
- 益
- 靴
- 仏
- 威
- 祖
- 稼
- 忙
- 乃
- 巣
- l
- 郷
- 罰
- 侵
- 版
- 諦
- 嶋
- 易
- 誌
- 黙
- 曇
- 濯
- 掘
- 塚
- 是
- 釣
- 悟
- 孫
- 己
- 棄
- 招
- 玄
- 嘘
- 賛
- 茨
- 煙
- 穫
- 栃
- 唯
- 龍
- 称
- 祝
- 鏡
- 宗
- 紫
- 預
- 拭
- 柱
- 執
- 茂
- 翔
- 賃
- 尿
- 縫
- 縦
- 盆
- 炭
- 斉
- 肝
- 奴
- 揮
- 稲
- 恥
- 券
- 鬼
- s
- 幌
- 微
- 潰
- 滑
- 紅
- 腸
- 渉
- 皿
- b
- 寧
- 阿
- 仙
- 釈
- 粒
- 堀
- 須
- 泥
- 眼
- 還
- 沙
- 典
- 覇
- 詞
- 僅
- 舎
- 阜
- 耐
- .
- 禍
- 需
- 滝
- 剣
- 潟
- 沈
- 把
- 孤
- 缶
- 隙
- 鍛
- 涙
- 撤
- 培
- 刀
- 網
- 畿
- 械
- 貨
- 伎
- 拝
- 脈
- ぺ
- 棟
- 排
- 牧
- 吐
- 栗
- 彦
- 紋
- 智
- 繊
- 挟
- h
- 坊
- 鶴
- 祈
- 粘
- 噴
- 駒
- 畳
- 献
- 殊
- 召
- 臭
- 憧
- 刃
- 殖
- 綱
- 項
- 姫
- 裂
- 伏
- 褒
- 励
- 桃
- 桁
- 贈
- 奄
- 痕
- 促
- 懐
- 霊
- 跳
- 敬
- 熟
- 砲
- 履
- 銭
- 忍
- 垣
- 尊
- 帝
- 刊
- 倫
- 丼
- ゥ
- 漏
- 膨
- Q
- 藩
- 墓
- 陥
- 屈
- 偉
- 没
- 亜
- 邸
- 栽
- 殴
- 章
- 兼
- 菊
- 隆
- 寂
- 腎
- 癒
- 郵
- 遂
- 卓
- 塞
- 筒
- 貧
- 籍
- 喫
- 麗
- 岳
- 崖
- 斎
- ヶ
- 懲
- 盟
- 既
- 冒
- 嫁
- 偶
- X
- 燥
- 芽
- y
- 奉
- 椅
- 妊
- 疾
- 嬉
- 狩
- 旗
- 拒
- 肥
- 犠
- 誠
- 即
- 帳
- 穏
- 聡
- 牲
- 濁
- 班
- 哲
- 奮
- 昆
- 彫
- 浪
- 却
- 亀
- 往
- 雅
- 軟
- 祉
- 貫
- 暇
- 拍
- 沸
- 幻
- 睡
- 閥
- 稽
- 譲
- w
- 顧
- 笠
- 嬢
- 拳
- 扇
- 酔
- 抵
- 佳
- 弘
- 唐
- 緯
- 穂
- 嵐
- 隅
- 璧
- 慰
- 稚
- 唱
- 猿
- 欧
- 拓
- 恨
- 孝
- 著
- 礎
- 滋
- 葬
- 枠
- 堤
- 湧
- 併
- 吾
- 爽
- d
- 詩
- 虐
- 軸
- 灯
- 拘
- 序
- 舌
- 餌
- 銅
- 拾
- 旨
- 霧
- 仁
- 鎌
- 尋
- 噂
- 綾
- 弥
- 艦
- 峰
- 拉
- 貯
- 溝
- 叫
- 葛
- 丹
- 棚
- 袖
- 淡
- 敏
- v
- 鋭
- 酬
- 肢
- 繋
- 慌
- 忠
- 俊
- 双
- 爪
- 鵬
- 帽
- 憩
- 瓶
- 賭
- 闇
- 酵
- 頻
- 慶
- 垂
- 塾
- 桂
- 紗
- 惜
- 泡
- 桐
- 鴨
- 衰
- 膝
- 奨
- 酷
- 醤
- 祐
- 鐘
- 洪
- 斬
- 灰
- 歓
- 妨
- 堅
- 粗
- 遇
- 曽
- 概
- 噌
- 賢
- 膜
- 胃
- 貿
- Z
- 如
- 廷
- 芋
- 凝
- 泰
- 餅
- 凶
- 荘
- 膚
- 貝
- 苗
- 塔
- 釜
- 昌
- 漂
- 朗
- 癖
- 殻
- 柴
- 祥
- 粧
- 据
- 琴
- 扉
- ぃ
- 狂
- 甚
- 壇
- 冊
- 箸
- 洞
- 仰
- ?
- 浄
- 邦
- 俵
- 貢
- 篠
- 窮
- 鉢
- 晶
- 寸
- 亭
- 蓄
- 挨
- 拶
- 賄
- 謀
- ぉ
- 誉
- 浩
- 債
- 刈
- 蛇
- 薦
- 径
- p
- 剥
- 獄
- 寅
- 沫
- 潔
- 輔
- 伯
- 架
- 戚
- 餃
- 累
- 珠
- 妖
- 丘
- 偵
- 濱
- 萩
- 剛
- 籠
- 辻
- 胆
- 哉
- 塊
- 芦
- 魂
- 喚
- 眺
- 槽
- 磯
- 潤
- 唾
- 偏
- 括
- 刷
- 枕
- 郊
- 肪
- 錦
- 幾
- 机
- 宴
- 梗
- 錯
- 詠
- 縛
- 摂
- 蓮
- ℃
- 桑
- 阻
- 裸
- 逸
- 僧
- 墨
- 尚
- 賠
- 芳
- 鼓
- 巧
- 騰
- 蓋
- 惨
- 筑
- 飽
- 冗
- 践
- 蹴
- 媛
- 壮
- 絆
- 紛
- 拐
- 窃
- 綿
- 悼
- 誓
- 卸
- 羅
- 抽
- 怠
- 簿
- 亮
- 謙
- 痩
- 廊
- 澄
- 娠
- 粋
- 叔
- 梶
- 迅
- 旭
- 栓
- 劣
- 誹
- 謗
- 郡
- 渦
- 柿
- 獣
- 柏
- 恒
- 枯
- 串
- 函
- 嗅
- 遮
- 虎
- 股
- 芯
- 斗
- f
- 猶
- 叱
- 匂
- 洲
- 征
- 砕
- 秩
- 寮
- 暦
- 漠
- 鉛
- 蜜
- 循
- 虚
- 彰
- 禅
- 嘉
- 啓
- 晃
- 剰
- 翻
- 鎮
- 銘
- 贅
- 嶽
- 惧
- 憎
- 妥
- 疎
- 碁
- 盾
- 弊
- 虹
- 鉱
- 樽
- 溺
- 讐
- 儲
- 侍
- 搭
- 庄
- 薫
- 宏
- 尖
- 肘
- 騙
- 椎
- 耕
- 涯
- 漆
- 陛
- 寛
- 篤
- 愚
- 堪
- 辰
- 符
- 陳
- 藍
- 諮
- 磁
- 謡
- 窟
- 蕎
- 弓
- 蝶
- 悠
- 宛
- 茎
- 訟
- 杏
- 圭
- 滴
- 唆
- 峡
- 薩
- 鷹
- 庶
- 膳
- 轄
- 陶
- 汰
- 剖
- 腫
- 李
- 隷
- 乏
- 斐
- 顕
- 笹
- 痴
- 礁
- 煎
- 淳
- 朽
- 蒲
- 賊
- 軌
- 鷲
- 麟
- 笛
- 矛
- 厄
- 准
- 揃
- 藻
- 瞳
- 翼
- 瓦
- 渓
- 騎
- 廣
- 匿
- 幽
- 檜
- 又
- 隈
- 欄
- 勾
- 瑠
- 抹
- 箇
- 朱
- 磐
- 蘭
- 峠
- 俗
- 傍
- 喉
- 蜂
- 湘
- 隻
- 該
- 牟
- 憂
- 擦
- 糧
- 譜
- 畠
- 絹
- 濡
- 巾
- 傑
- 俣
- 庵
- 妃
- 虻
- 舟
- 乙
- 麒
- 貞
- 琉
- 眉
- 瀧
- 蚊
- 呈
- 堺
- 坪
- 遍
- 舘
- 醸
- 哀
- 郭
- 妄
- 銚
- 蘇
- 諾
- 錠
- 稜
- ぅ
- 榊
- 噛
- 國
- 赴
- 踪
- 妬
- 鈍
- 骸
- 齋
- 腺
- 醒
- 貌
- 騨
- 朴
- 厨
- 弄
- 凡
- 羊
- 卑
- 壌
- 椿
- 唇
- 鳳
- 帆
- 邉
- 條
- 霞
- 搾
- 擁
- 荻
- 駿
- 朋
- 渇
- 詫
- 叶
- 后
- 櫻
- 叩
- 窯
- 瘍
- 箕
- 閑
- 猪
- 璃
- 紺
- 酎
- 侶
- 佑
- 瑞
- 莉
- 墳
- 敢
- 嚇
- 宰
- 綻
- 萌
- 涌
- 嫉
- 冨
- 泌
- 碑
- 渕
- 塀
- 雌
- 茅
- 挫
- 麓
- 宜
- 訂
- 頬
- 稀
- 壺
- 苔
- 髄
- 鮫
- 嘆
- 堆
- 慈
- 愉
- 鯛
- 菱
- 吟
- 裾
- 坐
- 捧
- 飢
- 零
- 槻
- 弦
- 婆
- 赦
- 喪
- 唄
- 伐
- 尺
- 桶
- 呪
- 薙
- 樋
- 臀
- 掌
- 蛮
- 眞
- 悦
- 牙
- 喝
- 霜
- 毅
- 畜
- 玲
- 煩
- 砥
- 靖
- 釧
- 惚
- 諭
- 辱
- 淵
- 崇
- 幡
- 斑
- 鴎
- 旋
- 胴
- 填
- 恭
- &
- 酌
- z
- 諫
- 梯
- 鉾
- 楓
- 硫
- 栞
- 姓
- 暫
- 謹
- 臼
- 鵜
- 穀
- 槇
- 醍
- 醐
- 碗
- 椒
- 蚕
- 隕
- 炉
- 窒
- 閲
- 慕
- 繕
- 瞭
- 肛
- 懇
- 呉
- 痢
- 茹
- 蛍
- 邊
- 猟
- 喧
- 岬
- 脊
- 婿
- 柚
- 縣
- 附
- 奔
- 凸
- ぢ
- 榎
- 鳩
- 頓
- 梁
- 凹
- 釘
- 皐
- 遥
- 噺
- ヲ
- 瑛
- 彬
- 曹
- 砦
- 侮
- 芥
- 狐
- 碧
- 摯
- 淀
- 盲
- 窪
- 薪
- 萎
- 鋸
- 臆
- 蒼
- 栖
- 肯
- 敦
- 賓
- 晋
- 昴
- 羨
- 陵
- 喋
- 苑
- 憾
- 爵
- 餓
- 轟
- ヱ
- 哺
- 嘩
- 蔽
- 洛
- 娯
- 鋼
- 隼
- 襟
- 凛
- 挽
- 媒
- 戴
- 憤
- 昧
- 橘
- 應
- 緻
- 墜
- 塙
- 訣
- 紳
- 牽
- 匡
- 鍾
- 琵
- 琶
- 宵
- 鮭
- 挿
- 囚
- 硝
- 禄
- 忌
- 遡
- 疹
- x
- 睦
- 賂
- 暁
- 葵
- 冤
- 杖
- 弔
- 燻
- 麹
- 獅
- 棲
- 讃
- 姻
- 詮
- 鷺
- 訊
- 胎
- 藝
- 戯
- 擢
- 旺
- 菩
- 厘
- 某
- 贋
- 飴
- 屏
- 顎
- 壕
- 鮎
- 枢
- 魁
- 紘
- 蔓
- 茉
- 嘱
- 汽
- 怯
- 擬
- 耗
- 紡
- 姜
- 倶
- 樫
- 嚥
- 箔
- 箋
- 吊
- 牢
- 勃
- 碇
- 叡
- 毀
- 遼
- 稔
- 芹
- 萬
- 茜
- 諏
- 杜
- 孔
- 烏
- −
- 萱
- 丑
- 勲
- 牡
- 埴
- 玖
- 壱
- 耶
- 檻
- 巳
- 幣
- 佃
- 瓜
- 雛
- 棺
- 拙
- 糾
- 遷
- 姑
- 播
- 痘
- 榛
- 賜
- 曖
- 忖
- 肖
- 柵
- 峯
- 罠
- 郁
- 絢
- 蔡
- 采
- 惹
- 藪
- 祓
- 桟
- 廉
- 楼
- 腔
- 躊
- 躇
- 挺
- 秦
- 蟹
- 冥
- 楢
- 錬
- 托
- 澪
- 遽
- 訃
- 絨
- 呑
- 渚
- 倦
- 狼
- 茸
- 籔
- 忽
- 咽
- ヵ
- 槙
- 陀
- 紐
- 雀
- 揉
- 迦
- 槍
- 汐
- 拗
- 鏑
- 鐸
- 浙
- 1
- 或
- 峙
- 甥
- 尉
- 胡
- 瞼
- 掴
- 茄
- 蒙
- 脆
- 惠
- 宍
- 湊
- 畔
- 逐
- 灘
- 冶
- 鰻
- 艇
- 艶
- 此
- 錫
- 彌
- 滉
- 尼
- 琢
- 屯
- 髭
- 嗣
- 逗
- 詣
- 芙
- 喰
- 蠣
- 庸
- 鋒
- 惣
- 吠
- 漱
- 升
- 綺
- 倣
- 享
- 慨
- 劉
- 凱
- 檀
- 輿
- 苫
- 蜘
- 蛛
- 鋳
- 莫
- 虜
- 垢
- 渾
- 琥
- 鎧
- 乞
- 狗
- 逝
- 捻
- 祇
- 帥
- 珀
- 罵
- ヅ
- 溪
- 嶺
- 祷
- 宕
- 怨
- 狛
- 餡
- 錮
- 鞄
- 頸
- 醜
- 鬱
- j
- 馳
- 翠
- 芭
- 蕉
- 腿
- 鴻
- 麩
- 柘
- 炙
- 釉
- 愁
- 凌
- 樺
- 冑
- 巌
- 膏
- 腱
- 妓
- 峨
- 羹
- 叉
- 丞
- 疱
- 2
- 酪
- 萢
- 毯
- 侑
- 爺
- 痺
- 杭
- 夷
- 鼠
- 嵯
- 袴
- 淑
- 鯉
- 遜
- 泄
- 剪
- 韻
- 侯
- 來
- 拷
- 但
- 饅
- 竿
- 荼
- 蜷
- 腑
- 帖
- 苺
- 刹
- 衡
- 坑
- 鯖
- 凄
- 俸
- 薗
- 扮
- 瀕
- 桔
- 聯
- 廻
- 颯
- 胤
- 諄
- 冴
- 溢
- 鱗
- 董
- 楠
- 柑
- 與
- 縞
- 堵
- 薮
- 暉
- 逢
- ヂ
- 舶
- 媚
- 些
- 迂
- 熾
- 蔭
- 會
- 掟
- 嫡
- ×
- 挾
- 焙
- 卯
- 曼
- 梢
- 蓬
- 廟
- 叙
- 藁
- 歪
- 瘤
- 渥
- 舵
- 咳
- 哨
- 莱
- 煌
- 庇
- 囃
- 磋
- 榮
- 狸
- 堕
- 顆
- 溜
- 肋
- 耀
- ☆
- 罹
- 爛
- 瞑
- 櫛
- 喩
- 怜
- 焚
- 矯
- 瘡
- 姪
- 烹
- 弧
- 廠
- 捗
- 卜
- 覗
- 駕
- 這
- /
- 而
- 倹
- 屑
- 厩
- 5
- 徘
- 徊
- 蟷
- 螂
- 謳
- 仇
- 梱
- 伽
- 鞘
- 槌
- 崔
- β
- 黛
- 玩
- 櫓
- 俯
- 瞰
- 繭
- 蔑
- 涸
- 舫
- 廿
- 瞽
- 屁
- 糞
- 鰹
- 舐
- 箭
- 兎
- 疆
- 巷
- 楚
- 繍
- 靡
- 辣
- 貪
- 餐
- 恣
- 曾
- 慧
- 揶
- 揄
- ∞
- 諜
- 憑
- 褐
- 珂
- 錣
- ’
- 亘
- 樂
- 椋
- 馴
- 凪
- 鐵
- 薊
- 勅
- 舛
- 寵
- 塵
- 蔦
- 昏
- 漸
- 摺
- 遙
- 姦
- 寡
- 暢
- ‐
- 鞍
- 欽
- 痔
- 祀
- 呆
- 伜
- 粕
- 櫨
- 醇
- 浬
- 炸
- 饉
- 剃
- 梠
- 邱
- 鞭
- 諒
- 咎
- Ⅱ
- 喬
- 几
- 燕
- 晒
- 蝦
- 毘
- 坦
- 杢
- 仔
- 楊
- 斥
- 宋
- 肴
- 洩
- 云
- 捏
- 閃
- 稟
- 墾
- 堰
- 甫
- 韮
- 昂
- 杵
- 凰
- 巴
- 笏
- 懺
- 瀑
- 劫
- 漕
- 4
- 畝
- 扶
- 賑
- 灸
- 齊
- 靭
- =
- 燗
- 衷
- 蛭
- 蟻
- 粟
- 悶
- 槿
- 愕
- 顛
- 兜
- 彷
- 楕
- 檎
- 桧
- 糠
- 麿
- 蝋
- 扁
- 筧
- 氈
- 焔
- 逼
- 儒
- 倖
- 團
- 肇
- 錆
- 彿
- 欅
- 櫂
- 撫
- 疋
- 惇
- 秤
- 膠
- ヮ
- 粥
- 斡
- 鯨
- 糊
- 淋
- 攪
- 勒
- 戌
- 嘔
- ゐ
- 梓
- 紆
- 淞
- 傲
- 撒
- 斤
- 詈
- 嗜
- 鴉
- 鰺
- 卿
- 鄭
- 膀
- 胱
- Ⅰ
- 冲
- 闊
- 憫
- 斧
- 鴫
- 肱
- 奢
- 伶
- 凜
- 凧
- 珈
- 琲
- 殉
- 傭
- 膿
- 酩
- 酊
- 轢
- 蝉
- 甜
- 曳
- 靱
- 畏
- 沓
- 崗
- 只
- 琳
- 艘
- 巫
- 侘
- 洸
- 假
- 螺
- 蕨
- 煽
- 穢
- 誨
- 撼
- 鮨
- 呵
- 閻
- ○
- 豹
- 甑
- 盧
- 箪
- 鱈
- 煤
- 總
- 裔
- 當
- 贖
- 椀
- 枡
- 綴
- 吻
- 燈
- 薔
- 薇
- 綬
- 捌
- 雁
- 賽
- 嬬
- 藉
- 貰
- 砺
- 曙
- 蒔
- 淫
- 倅
- 伍
- 嬌
- 笙
- 穿
- 櫃
- 牌
- 拌
- 纂
- 悸
- 乖
- 鈿
- 菰
- 孵
- 汎
- 灼
- 遵
- 軋
- 桝
- 臥
- 撥
- 柊
- 疇
- 惰
- 梧
- 0
- 學
- 鰤
- 睨
- 眈
- 錐
- 菖
- 濤
- 拮
- 珊
- 瑚
- 蛙
- 罷
- 爬
- 憚
- 翁
- 邑
- 珪
- 轆
- 轤
- 蛾
- 臈
- 壽
- 謁
- 3
- 贔
- 屓
- 裟
- 陪
- 憐
- 笥
- 7
- 矜
- 邁
- 迭
- 屍
- 孟
- 梼
- 8
- 侃
- 諤
- 咋
- 舅
- 諌
- 蝸
- 雍
- ♪
- 嶌
- 簀
- 雫
- 涛
- 沌
- 杓
- 輌
- 舩
- 彙
- 閖
- ゑ
- 汝
- 巽
- 圓
- 娩
- 犀
- 於
- 葺
- 捺
- 棘
- 穣
- 韋
- 襖
- 徽
- 鳶
- 笘
- 戊
- 〇
- 禰
- 欣
- 癌
- 嘗
- 箒
- 狡
- 篭
- 侠
- 煉
- 魯
- 恫
- 佇
- @
- 蓑
- 氣
- 濠
- 硯
- 絣
- 亞
- 墟
- 洒
- 襦
- 袢
- 緋
- 宥
- 寇
- 昵
- 爾
- 窄
- 憔
- 悴
- 鰯
- 蝿
- 苛
- 霹
- 靂
- 筍
- 濾
- 窩
- 嵜
- 朦
- 朧
- 毬
- 圀
- 吏
- 咤
- 汲
- 傳
- 礒
- 饒
- 麾
- 蝮
- 截
- 祠
- 蟄
- 趙
- 僭
- 蒟
- 蒻
- 腓
- 訛
- 苧
- 猾
- 藏
- 儚
- 葦
- 彗
- 葡
- 萄
- 蛤
- 蹄
- 噪
- 匕
- 榴
- 姶
- 齟
- 齬
- 蜥
- 蜴
- 沃
- 褄
- 獺
- 撓
- 椰
- 裳
- 痍
- 套
- 擲
- 鞜
- 汀
- 涎
- 饗
- 袈
- 箝
- 鸞
- 漉
- 薯
- 訶
- 曰
- :
- 脛
- 咀
- 嚼
- 牝
- 匁
- 瑕
- 疵
- 瀞
- 胚
- 鋲
- 撰
- 蕩
- 竪
- 煕
- 恕
- 翡
- 簾
- 塹
- 卦
- 膵
- 籾
- 鑽
- 鱧
- 猊
- 娼
- 俄
- 俎
- 9
- 癪
- 揖
- 峻
- 苅
- 其
- 紬
- 眩
- 黎
- 鉉
- 埒
- 竣
- 湛
- 楮
- 賦
- 妾
- 偲
- 偕
- 舜
- 謐
- 惟
- 焉
- 跛
- 輻
- 款
- 匈
- 喘
- 殲
- 祟
- 勿
- 髏
- 鋪
- 綜
- 殷
- 埜
- 斯
- 稗
- 辿
- 躾
- 淺
- 榜
- 址
- 蹟
- 詭
- 佰
- Ω
- 塑
- 姐
- 奎
- 誅
- 儂
- 杞
- 沐
- 幇
- 唸
- 瀟
- 嵩
- 囁
- 僑
- 呟
- 鰐
- 縷
- 炬
- 燵
- 梵
- 倭
- 竈
- 疏
- 禊
- 晰
- 蔀
- 鍬
- 瀾
- 貶
- 愴
- 撹
- 荏
- 髑
- 奧
- 猩
- 禎
- 褪
- 廓
- 笈
- 晦
- 吽
- 實
- 埠
- 嗟
- 蕭
- 戮
- q
- 6
- 燐
- 蓼
- 捷
- 亨
- 鮑
- 雉
- 羲
- 漣
- 嚢
- 緘
- 晟
- 掻
- 頌
- 纐
- 纈
- 允
- 獨
- +
- 抄
- 猥
- 祢
- 沁
- 柾
- 閤
- 滔
- 蘰
- 鹸
- 疽
- 慄
- 麝
- 磔
- 聰
- 瓢
- 崑
- 纏
- 嵌
- 鍔
- 衿
- 鶯
- 垓
- 鞠
- 戎
- 圃
- 鯵
- 襴
- 鍮
- 扼
- 劾
- 匙
- 掬
- 澁
- 燿
- 苹
- 葱
- 遁
- 楷
- 佞
- 痒
- 厠
- 鉦
- 迄
- 揆
- 辯
- 襄
- 酉
- 癇
- 狽
- 茱
- 萸
- 凋
- 盂
- 頷
- 跣
- 夭
- 寉
- 袂
- 框
- 魍
- 魎
- 傅
- 憊
- 甦
- 拵
- 瀉
- 諍
- 熙
- 嚴
- 鍼
- 歎
- 謄
- 弯
- 碕
- 筌
- 虞
- 慙
- 愧
- 桓
- 佼
- 鵑
- 蕪
- 鵠
- 毫
- 筈
- 莢
- 燦
- 蹂
- 躙
- 膣
- 淘
- 厭
- 巖
- 檄
- 緞
- 辟
- 痰
- 獰
- 橙
- 恰
- 佛
- 弛
- 贄
- 悉
- 鸚
- 朔
- 譚
- 泪
- 穎
- 糺
- 畦
- 茫
- 簑
- 聘
- 埃
- 逓
- 潭
- 熨
- 咄
- 庚
- 嬪
- 涜
- 譽
- 踵
- 駱
- 奸
- 攘
- 榑
- 黌
- 聚
- 甕
- 偈
- 尭
- 拿
- 柯
- 隋
- 魑
- 芻
- 岱
- 烙
- 竺
- 鼈
- 簗
- 頒
- 馗
- 閏
- 羞
- 褥
- 馨
- 邂
- 逅
- 脩
- 鉈
- 姥
- 檮
- 蜃
- 糀
- 臂
- 麥
- 漿
- 憺
- 渤
- 匝
- 瑳
- 聲
- 滿
- 澱
- 塘
- 礫
- 對
- 蚤
- 鬆
- 諧
- 拈
- 盃
- 陝
- 掩
- 闍
- 餉
- 皓
- 諷
- 晏
- 赳
- 薛
- 暈
- 猜
- 驕
- 偸
- 瑶
- 皺
- 湫
- 鮪
- 縋
- 筵
- 鞋
- 愼
- 蠅
- 〆
- 訥
- 蜀
- 瞥
- 窈
- 吃
- 箏
- 鷽
- 逞
- 嘲
- 旛
- 琺
- 瑯
- 蕾
- 蕃
- 尤
- 將
- 臍
- 游
- 盥
- 楳
- 礬
- 婢
- 魏
- 滲
- 撚
- 鑼
- 飄
- 悛
- 駝
- 魄
- 賤
- 檸
- 檬
- 蟠
- 鉤
- 寶
- 蜻
- 蛉
- 蕗
- 銑
- 跨
- 壬
- 撻
- 茗
- 椹
- 寓
- 矧
- 篁
- 簪
- 叢
- 趨
- 羂
- 偃
- 橿
- 栂
- 亢
- 蕁
- μ
- 囂
- 臑
- 皷
- 歇
- 姨
- 悍
- 孜
- 篇
- 樸
- 矮
- 夛
- 矩
- 瑣
- 齧
- 畷
- 衾
- 瞞
- 刎
- 舷
- 莞
- 綵
- 翳
- 瘻
- 跋
- 耘
- 旁
- 刮
- 燭
- 灣
- 磊
- 嘶
- 禽
- 脾
- 靜
- 壟
- 栩
- 虱
- 乍
- 疸
- 禮
- 霍
- 囮
- 沽
- 詢
- 耆
- 籐
- 詛
- 、
- 絃
- 兒
- 譬
- 壷
- 禿
- 庖
- 闢
- 妍
- 按
- 坤
- 趾
- 屠
- 籏
- 轍
- 瑤
- 勸
- 寳
- 梳
- 梃
- 棠
- 艱
- 劔
- 蚯
- 蚓
- 僻
- 啄
- 赭
- 緡
- 痙
- 攣
- 瑩
- 勗
- 瀋
- 袱
- 摸
- 掠
- 颪
- 錘
- 痣
- 眸
- ゝ
- 耽
- 饌
- 鮒
- 碾
- 浚
- 渫
- 且
- 衲
- 筅
- 蹊
- 誂
- 疔
- 詔
- 嗚
- 椨
- 馥
- 滸
- 賎
- 殆
- 灌
- 暹
- 躯
- 喃
- 蔟
- 媼
- 蟇
- 泙
- 癬
- 繚
- 聊
- 藹
- 盒
- 嘴
- 尹
- Σ
- 柩
- 蔚
- 蓉
- 渠
- 掣
- 寥
- 狄
- 傀
- 儡
- 悌
- 恬
- 樒
- 涅
- 槃
- 徨
- 滌
- 藺
- 睥
- 孺
- 蠕
- 筥
- 呻
- 緬
- 睾
- 堯
- 瓔
- 珞
- 燼
- 鶸
- 筐
- 碓
- 岨
- 襷
- 蝕
- 璋
- 襞
- 鵯
- 俑
- 癰
- 彪
- 籬
- 辨
- 粂
- 迸
- 僥
- 簸
- 閂
- 娶
- 牆
- 筰
- 綽
- 菫
- 蛸
- 銕
- 鞨
- 狢
- 啖
- 忸
- 怩
- 耄
- 碌
- 滂
- 沱
- 芒
- 韜
- 曝
- 痂
- 襤
- 褸
- 釋
- 艤
- 搏
- 璽
- 慟
- 哭
- 衞
- 聾
- 憬
- 愈
- 讓
- 綯
- 繹
- 瑾
- 鐡
- 夥
- 柢
- 笊
- 已
- 抛
- 舳
- 鎚
- 逡
- 鬨
- 滾
- 戍
- 卉
- 篷
- 狆
- 浣
- 闖
- 鑞
- 箙
- 瞋
- 癡
- 邏
- ★
- 訝
- 駈
- 戈
- 錺
- 諚
- 縒
- 斂
- 褶
- 鑚
- 侭
- 淨
- 悵
- 虔
- 棹
- 泯
- 蛆
- 孀
- 瓏
- 楯
- 舁
- 榕
- 覺
- 蒋
- 筺
- 蝗
- 坩
- 堝
- 楔
- 證
- 胛
- 怙
- 丙
- 窺
- 罔
- 儘
- 掖
- 棊
- 仗
- 炯
- 專
- 扈
- 鞆
- 咸
- 鰆
- 凭
- 芍
- 牒
- 幟
- 狒
- 絲
- 聟
- 唖
- 燧
- 頚
- Ⅶ
- 啼
- 鯱
- 觜
- 縊
- 瑜
- 旱
- 咬
- 籃
- 袍
- 敲
- 恍
- 慾
- 愾
- 杲
- 繻
- 搗
- 褻
- 栢
- 矍
- 鑠
- 觸
- 獏
- 弼
- 疚
- 斟
- 鮃
- <sos/eos>
init: chainer
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: false
model_conf:
ctc_weight: 0.5
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: null
specaug_conf: {}
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_jp_char_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: vgg_rnn
encoder_conf:
rnn_type: lstm
bidirectional: true
use_projection: true
num_layers: 4
hidden_size: 1024
output_size: 1024
decoder: rnn
decoder_conf:
rnn_type: lstm
num_layers: 1
hidden_size: 1024
sampling_probability: 0.0
att_conf:
atype: location
adim: 1024
awin: 5
aheads: 4
aconv_chans: 10
aconv_filts: 100
required:
- output_dir
- token_list
version: 0.9.9
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
MarcBrun/ixambert-finetuned-squad-eu
|
MarcBrun
| 2022-02-23T20:21:21Z | 29 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"en",
"es",
"eu",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:04Z |
---
language:
- en
- es
- eu
widget:
- text: "When was Florence Nightingale born?"
context: "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820."
example_title: "English"
- text: "¿Por qué provincias pasa el Tajo?"
context: "El Tajo es el río más largo de la península ibérica, a la que atraviesa en su parte central, siguiendo un rumbo este-oeste, con una leve inclinación hacia el suroeste, que se acentúa cuando llega a Portugal, donde recibe el nombre de Tejo.
Nace en los montes Universales, en la sierra de Albarracín, sobre la rama occidental del sistema Ibérico y, después de recorrer 1007 km, llega al océano Atlántico en la ciudad de Lisboa. En su desembocadura forma el estuario del mar de la Paja, en el que vierte un caudal medio de 456 m³/s. En sus primeros 816 km atraviesa España, donde discurre por cuatro comunidades autónomas (Aragón, Castilla-La Mancha, Madrid y Extremadura) y un total de seis provincias (Teruel, Guadalajara, Cuenca, Madrid, Toledo y Cáceres)."
example_title: "Español"
- text: "Zer beste izenak ditu Tartalo?"
context: "Tartalo euskal mitologiako izaki begibakar artzain erraldoia da. Tartalo izena zenbait euskal hizkeratan herskari-bustidurarekin ahoskatu ohi denez, horrelaxe ere idazten da batzuetan: Ttarttalo. Euskal Herriko zenbait tokitan, Torto edo Anxo ere esaten diote."
example_title: "Euskara"
---
# ixambert-base-cased finetuned for QA
This is a basic implementation of the multilingual model ["ixambert-base-cased"](https://huggingface.co/ixa-ehu/ixambert-base-cased), fine-tuned on an experimental version of SQuAD1.1 in Basque (one third the size of the original SQuAD1.1), which is able to answer basic factual questions.
## Overview
* **Language model:** ixambert-base-cased
* **Languages:** English, Spanish and Basque
* **Downstream task:** Extractive QA
* **Training data:** Experimental SQuAD1.1 in Basque
* **Eval data:** Experimental SQuAD1.1 in Basque
* **Infrastructure:** 1x GeForce RTX 2080
## Outputs
The model outputs the answer to the question, the start and end positions of the answer in the original context, and a score representing the probability that this span of text is the correct answer. For example:
```python
{'score': 0.9667195081710815, 'start': 101, 'end': 105, 'answer': '1820'}
```
## How to use
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "MarcBrun/ixambert-finetuned-squad-eu"
# To get predictions
context = "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820"
question = "When was Florence Nightingale born?"
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)
pred = qa(question=question, context=context)
# To load the model and tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Hyperparameters
```
batch_size = 8
n_epochs = 3
learning_rate = 2e-5
optimizer = AdamW
lr_schedule = linear
max_seq_len = 384
doc_stride = 128
```
|
izzy-lazerson/wav2vec2-large-xls-r-300m-turkish-colab
|
izzy-lazerson
| 2022-02-23T19:31:58Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3866
- Wer: 0.3363
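
As a quick, hedged illustration (not part of the original card), a checkpoint like this can typically be loaded with the `transformers` ASR pipeline; the audio file name below is a placeholder and 16 kHz mono audio is assumed:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint with the generic ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="izzy-lazerson/wav2vec2-large-xls-r-300m-turkish-colab",
)

# "example_turkish_speech.wav" is a placeholder path; wav2vec2-style models
# expect 16 kHz mono audio.
print(asr("example_turkish_speech.wav")["text"])
```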
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.9949 | 3.67 | 400 | 0.7055 | 0.6984 |
| 0.4192 | 7.34 | 800 | 0.4530 | 0.4711 |
| 0.1987 | 11.01 | 1200 | 0.4319 | 0.4384 |
| 0.1317 | 14.68 | 1600 | 0.4332 | 0.4179 |
| 0.0988 | 18.35 | 2000 | 0.4201 | 0.3755 |
| 0.0791 | 22.02 | 2400 | 0.3968 | 0.3723 |
| 0.0628 | 25.69 | 2800 | 0.3998 | 0.3477 |
| 0.0501 | 29.36 | 3200 | 0.3866 | 0.3363 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
andresestevez/bert-base-cased-finetuned-squad
|
andresestevez
| 2022-02-23T19:12:49Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-cased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
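
The card does not include a usage snippet; a minimal sketch, assuming the standard `question-answering` pipeline applies to this checkpoint, could look like this (the question/context pair is invented for illustration):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="andresestevez/bert-base-cased-finetuned-squad")

# Example inputs are invented for illustration only.
result = qa(
    question="When was the bridge completed?",
    context="The Golden Gate Bridge was completed in 1937 and spans the Golden Gate strait.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```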
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2
- Datasets 1.13.3
- Tokenizers 0.10.3
|
vyang/plc2proc
|
vyang
| 2022-02-23T15:43:40Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
---
|
mwesner/bert-base-uncased
|
mwesner
| 2022-02-23T15:18:51Z | 17 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased
results: []
---
# bert-base-uncased
This model was trained on a dataset of GitHub issues.
It achieves the following results on the evaluation set:
- Loss: 1.2437
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
Masked language model trained on GitHub issue data with a maximum sequence length of 128 tokens.
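
Since this is a masked-language model, one hedged way to probe it is with the `fill-mask` pipeline; the example sentence below is an invented GitHub-issue-style prompt:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="mwesner/bert-base-uncased")

# Invented issue-style sentence; [MASK] is the BERT mask token.
for pred in fill("The build fails with a missing [MASK] error."):
    print(pred["token_str"], round(pred["score"], 3))
```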
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.205 | 1.0 | 9303 | 1.7893 |
| 1.8417 | 2.0 | 18606 | 1.7270 |
| 1.7103 | 3.0 | 27909 | 1.6650 |
| 1.6014 | 4.0 | 37212 | 1.6052 |
| 1.523 | 5.0 | 46515 | 1.5782 |
| 1.4588 | 6.0 | 55818 | 1.4836 |
| 1.3922 | 7.0 | 65121 | 1.4289 |
| 1.317 | 8.0 | 74424 | 1.4414 |
| 1.2622 | 9.0 | 83727 | 1.4322 |
| 1.2123 | 10.0 | 93030 | 1.3651 |
| 1.1753 | 11.0 | 102333 | 1.3636 |
| 1.1164 | 12.0 | 111636 | 1.2872 |
| 1.0636 | 13.0 | 120939 | 1.3705 |
| 1.021 | 14.0 | 130242 | 1.3013 |
| 0.996 | 15.0 | 139545 | 1.2756 |
| 0.9625 | 16.0 | 148848 | 1.2437 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.3
|
anantoj/wav2vec2-adult-child-cls
|
anantoj
| 2022-02-23T14:29:03Z | 7 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: wav2vec2-adult-child-cls
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-adult-child-cls
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1713
- Accuracy: 0.9460
- F1: 0.9509
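
A minimal inference sketch, assuming the `audio-classification` pipeline and a placeholder audio file (the label names depend on the training data and are not documented here):

```python
from transformers import pipeline

clf = pipeline("audio-classification", model="anantoj/wav2vec2-adult-child-cls")

# "example_utterance.wav" is a placeholder; output is a list of {label, score} dicts.
print(clf("example_utterance.wav"))
```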
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.323 | 1.0 | 96 | 0.2699 | 0.9026 | 0.9085 |
| 0.2003 | 2.0 | 192 | 0.2005 | 0.9234 | 0.9300 |
| 0.1808 | 3.0 | 288 | 0.1780 | 0.9377 | 0.9438 |
| 0.1537 | 4.0 | 384 | 0.1673 | 0.9441 | 0.9488 |
| 0.1135 | 5.0 | 480 | 0.1713 | 0.9460 | 0.9509 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
|
phongdtd/fb-youtube-vi-large
|
phongdtd
| 2022-02-23T13:56:55Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"phongdtd/youtube_casual_audio",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- phongdtd/youtube_casual_audio
- generated_from_trainer
model-index:
- name: fb-youtube-vi-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fb-youtube-vi-large
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the PHONGDTD/YOUTUBE_CASUAL_AUDIO - NA dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 8
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 25.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
EleutherAI/enformer-preview
|
EleutherAI
| 2022-02-23T12:17:24Z | 10 | 5 |
transformers
|
[
"transformers",
"pytorch",
"enformer",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
license: apache-2.0
inference: false
---
# Enformer
Enformer model. It was introduced in the paper [Effective gene expression prediction from sequence by integrating long-range interactions.](https://www.nature.com/articles/s41592-021-01252-x) by Avsec et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/enformer).
This particular model was trained on sequences of 131,072 base pairs with a target length of 896, on v3-64 TPUs for two and a half days, without augmentations and using a Poisson loss.
This repo contains the weights of the PyTorch implementation by Phil Wang as seen in the [enformer-pytorch repository](https://github.com/lucidrains/enformer-pytorch).
Disclaimer: The team releasing Enformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Enformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence.
We refer to the [paper](https://www.nature.com/articles/s41592-021-01252-x) published in Nature for details.
### How to use
Refer to the README of [enformer-pytorch](https://github.com/lucidrains/enformer-pytorch) regarding usage.
### Citation info
```
Avsec, Ž., Agarwal, V., Visentin, D. et al. Effective gene expression prediction from sequence by integrating long-range interactions. Nat Methods 18, 1196–1203 (2021). https://doi.org/10.1038/s41592-021-01252-x
```
|
kurianbenoy/bert-finetuned-ner
|
kurianbenoy
| 2022-02-23T11:48:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9304777594728171
- name: Recall
type: recall
value: 0.9505217098619994
- name: F1
type: f1
value: 0.9403929403929404
- name: Accuracy
type: accuracy
value: 0.9861070230176017
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0611
- Precision: 0.9305
- Recall: 0.9505
- F1: 0.9404
- Accuracy: 0.9861
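
As a hedged usage sketch (not part of the generated card), the checkpoint can be queried through the `token-classification` pipeline; the example sentence is invented:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="kurianbenoy/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Hugging Face is based in New York City."))
```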
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0869 | 1.0 | 1756 | 0.0680 | 0.9174 | 0.9342 | 0.9257 | 0.9827 |
| 0.0334 | 2.0 | 3512 | 0.0620 | 0.9305 | 0.9470 | 0.9387 | 0.9853 |
| 0.0233 | 3.0 | 5268 | 0.0611 | 0.9305 | 0.9505 | 0.9404 | 0.9861 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
panashe/autonlp-eo-590516680
|
panashe
| 2022-02-23T11:29:10Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:panashe/autonlp-data-eo",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- panashe/autonlp-data-eo
co2_eq_emissions: 2.3709499644854883
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 590516680
- CO2 Emissions (in grams): 2.3709499644854883
## Validation Metrics
- Loss: 0.6466107964515686
- Accuracy: 0.6608695652173913
- Precision: 0.6515151515151515
- Recall: 0.7288135593220338
- AUC: 0.6334745762711864
- F1: 0.688
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/panashe/autonlp-eo-590516680
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("panashe/autonlp-eo-590516680", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("panashe/autonlp-eo-590516680", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
chrisknowles/en_stylecheck
|
chrisknowles
| 2022-02-23T11:05:56Z | 6 | 1 |
spacy
|
[
"spacy",
"token-classification",
"en",
"license:mit",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- spacy
- token-classification
language:
- en
license: mit
model-index:
- name: en_stylecheck
results: []
---
Checks style in English text (currently flags passive constructions).
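
A minimal sketch of how the pipeline might be used once the packaged model is installed locally (the installation step and the example sentence are assumptions, not taken from this card):

```python
import spacy

# Assumes the packaged pipeline has been pip-installed from this repo,
# so that it is loadable by its package name.
nlp = spacy.load("en_stylecheck")

doc = nlp("The report was written by the intern.")

# The entity ruler component adds a PASSIVE label (see the label scheme below).
print([(ent.text, ent.label_) for ent in doc.ents])
```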
| Feature | Description |
| --- | --- |
| **Name** | `en_stylecheck` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.1.1,<3.2.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `parser`, `attribute_ruler`, `lemmatizer`, `ner`, `stylecheck` |
| **Components** | `tok2vec`, `tagger`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner`, `stylecheck` |
| **Vectors** | 684830 keys, 20000 unique vectors (300 dimensions) |
| **Sources** | n/a |
| **License** | `MIT` |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme
<details>
<summary>View label scheme (115 labels for 5 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, ```` |
| **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `agent`, `amod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `csubj`, `csubjpass`, `dative`, `dep`, `det`, `dobj`, `expl`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nsubj`, `nsubjpass`, `nummod`, `oprd`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` |
| **`senter`** | `I`, `S` |
| **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK_OF_ART` |
| **`entity_ruler`** | `PASSIVE` |
</details>
|
Aron/distilbert-base-uncased-finetuned-emotion
|
Aron
| 2022-02-23T10:34:14Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.92
- name: F1
type: f1
value: 0.9201604193183255
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2295
- Accuracy: 0.92
- F1: 0.9202
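
A short, hedged inference example (the input sentence is invented; the label set comes from the `emotion` dataset):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Aron/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I can't wait to see the results of this experiment!"))
```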
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8187 | 1.0 | 250 | 0.3137 | 0.902 | 0.8983 |
| 0.2514 | 2.0 | 500 | 0.2295 | 0.92 | 0.9202 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
bettertextapp/bart_large_teaser_de_v2
|
bettertextapp
| 2022-02-23T10:17:34Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: bart_large_teaser_de_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart_large_teaser_de_v2
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
- eval_loss: 0.2028738558292389
- eval_score: 80.750962016922
- eval_counts: [342359, 316072, 304925, 294258]
- eval_totals: [376475, 371475, 366475, 361475]
- eval_precisions: [90.93804369480046, 85.08567198330978, 83.20485708438503, 81.40479977868456]
- eval_bp: 0.9490684186878129
- eval_sys_len: 376475
- eval_ref_len: 396155
- eval_runtime: 431.9447
- eval_samples_per_second: 11.576
- eval_steps_per_second: 0.363
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.16.2
- Pytorch 1.11.0a0+bfe5ad2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
junzai/demo
|
junzai
| 2022-02-23T08:22:06Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert_finetuning_test
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8284313725490197
- name: F1
type: f1
value: 0.8817567567567567
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_finetuning_test
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4023
- Accuracy: 0.8284
- F1: 0.8818
- Combined Score: 0.8551
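
Since GLUE MRPC is a sentence-pair (paraphrase) task, a hedged usage sketch looks like this; the sentence pair is invented, and interpreting the two output classes as "not equivalent" / "equivalent" is an assumption:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "junzai/demo"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# MRPC asks whether the two sentences paraphrase each other.
enc = tokenizer(
    "The storm hit the coast overnight.",
    "The coastline was struck by the storm during the night.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**enc).logits.softmax(-1)
print(probs)  # class probabilities for the sentence pair
```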
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.11.0
|
cammy/bart-large-cnn-finetuned-weaksup-10000
|
cammy
| 2022-02-23T06:35:17Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-weaksup-10000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-weaksup-10000
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6031
- Rouge1: 28.3912
- Rouge2: 13.655
- Rougel: 22.287
- Rougelsum: 25.4794
- Gen Len: 67.995
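
A brief usage sketch, assuming the standard `summarization` pipeline works with this BART checkpoint (the article text is invented):

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="cammy/bart-large-cnn-finetuned-weaksup-10000",
)

article = (
    "The city council met on Tuesday to discuss the new transit plan, which "
    "would add three bus lines and extend service hours. Officials said the "
    "funding would come from a mix of state grants and local taxes."
)
print(summarizer(article, max_length=80, min_length=20)[0]["summary_text"])
```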
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| 1.2991 | 1.0 | 10000 | 1.6031 | 28.3912 | 13.655 | 22.287 | 25.4794 | 67.995 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Santiagot1105/wav2vec2-lar-xlsr-es-col
|
Santiagot1105
| 2022-02-22T20:58:23Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-lar-xlsr-es-col
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-lar-xlsr-es-col
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-spanish](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-spanish) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0947
- Wer: 0.1884
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8446 | 8.51 | 400 | 2.8174 | 0.9854 |
| 0.5146 | 17.02 | 800 | 0.1022 | 0.2020 |
| 0.0706 | 25.53 | 1200 | 0.0947 | 0.1884 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.1+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
yancong/distilbert-base-uncased-finetuned-existence
|
yancong
| 2022-02-22T20:56:03Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-existence
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-existence
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9532 | 1.0 | 221 | 2.1697 |
| 2.0959 | 2.0 | 442 | 1.9725 |
| 1.9277 | 3.0 | 663 | 1.7944 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.1
- Datasets 1.18.3
- Tokenizers 0.11.0
|
shibli/wav2vec2-large-xls-r-300m-pun-colab
|
shibli
| 2022-02-22T18:51:07Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-pun-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-pun-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
elena-soare/t5-base-ecommerce
|
elena-soare
| 2022-02-22T18:19:10Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
T5 pre-trained on e-commerce data
|
ronanki/ml_mpnet_768_MNR_10
|
ronanki
| 2022-02-22T18:14:36Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ronanki/ml_mpnet_768_MNR_10
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ronanki/ml_mpnet_768_MNR_10')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ronanki/ml_mpnet_768_MNR_10')
model = AutoModel.from_pretrained('ronanki/ml_mpnet_768_MNR_10')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ronanki/ml_mpnet_768_MNR_10)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 29 with parameters:
```
{'batch_size': 32}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
ronanki/ml_use_512_MNR_10
|
ronanki
| 2022-02-22T18:12:25Z | 125 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# ronanki/ml_use_512_MNR_10
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ronanki/ml_use_512_MNR_10')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ronanki/ml_use_512_MNR_10)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 29 with parameters:
```
{'batch_size': 32}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
NeonBohdan/stt-polyglot-it
|
NeonBohdan
| 2022-02-22T17:49:20Z | 0 | 0 | null |
[
"tflite",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
license: apache-2.0
---
|
NeonBohdan/stt-polyglot-de
|
NeonBohdan
| 2022-02-22T17:39:43Z | 0 | 0 | null |
[
"tflite",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
license: apache-2.0
---
|
NeonBohdan/stt-polyglot-pl
|
NeonBohdan
| 2022-02-22T17:27:31Z | 0 | 0 | null |
[
"tflite",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
license: apache-2.0
---
|
NeonBohdan/stt-polyglot-fr
|
NeonBohdan
| 2022-02-22T17:23:49Z | 0 | 0 | null |
[
"tflite",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
license: apache-2.0
---
|
vocab-transformers/msmarco-distilbert-word2vec256k-MLM_400k
|
vocab-transformers
| 2022-02-22T17:03:11Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
# Model
This model is based on [nicoladecao/msmarco-word2vec256000-distilbert-base-uncased](https://huggingface.co/nicoladecao/msmarco-word2vec256000-distilbert-base-uncased), which has a 256k-entry vocabulary initialized with word2vec.
This model has been trained with MLM on the MS MARCO corpus collection for 400k steps. See train_mlm.py for the train script. It was run on 2x V100 GPUs. The word embedding matrix was frozen.
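
A small sketch for loading the checkpoint and inspecting its distinctive 256k vocabulary (an illustration only, not taken from the original card):

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

name = "vocab-transformers/msmarco-distilbert-word2vec256k-MLM_400k"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name)

# The distinguishing feature is the 256k word2vec-initialized vocabulary.
print(len(tokenizer))                             # expected to be roughly 256000
print(model.get_input_embeddings().weight.shape)  # (vocab_size, hidden_dim)
```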
|
keras-io/convmixer
|
keras-io
| 2022-02-22T16:42:59Z | 4 | 0 |
tf-keras
|
[
"tf-keras",
"ConvMixer",
"keras-io",
"en",
"dataset:cifar10",
"arxiv:2201.09792",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: en
tags:
- ConvMixer
- keras-io
license: apache-2.0
datasets:
- cifar10
---
# ConvMixer model
The ConvMixer model is trained on Cifar10 dataset and is based on [the paper](https://arxiv.org/abs/2201.09792v1), [github](https://github.com/locuslab/convmixer).
Disclaimer : This is a demo model for Sayak Paul's keras [example](https://keras.io/examples/vision/convmixer/). Please refrain from using this model for any other purpose.
## Description
The paper uses 'patches' (square groups of pixels) extracted from the image, as has been done in other Vision Transformers such as [ViT](https://arxiv.org/abs/2010.11929v2). One notable drawback of such architectures is the quadratic runtime of the self-attention layers, which takes a lot of time and resources to train to usable accuracy. The ConvMixer model instead uses convolutions together with an MLP-Mixer-style design to obtain results similar to those of transformers at a fraction of the cost.
### Intended Use
This model is intended to be used as a demo model for keras-io.
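
A minimal loading sketch, assuming the demo weights can be pulled with `huggingface_hub`'s Keras helper (this is an assumption, not documented in the card):

```python
from huggingface_hub import from_pretrained_keras

# Assumed entry point for keras-io demo models; the repo id is taken from this card.
model = from_pretrained_keras("keras-io/convmixer")
model.summary()  # inspect the patch embedding and ConvMixer blocks
```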
|
vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k-MLM_785k_emb_updated
|
vocab-transformers
| 2022-02-22T12:09:18Z | 87 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# dense_encoder-msmarco-distilbert-word2vec256k-MLM_785k_emb_updated
**Note: Token embeddings were updated!**
This model is based on [vocab-transformers/msmarco-distilbert-word2vec256k-MLM_785k_emb_updated](https://huggingface.co/vocab-transformers/msmarco-distilbert-word2vec256k-MLM_785k_emb_updated), which uses a 256k-entry vocabulary initialized with word2vec and trained with MLM for 785k steps.
It has been trained on MS MARCO using [MarginMSELoss](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/ms_marco/train_bi-encoder_margin-mse.py). See the train_script.py in this repository.
Performance:
- MS MARCO dev: 35.20 (MRR@10)
- TREC-DL 2019: 67.61 (nDCG@10)
- TREC-DL 2020: 69.62 (nDCG@10)
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k-MLM_785k_emb_updated')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k-MLM_785k_emb_updated')
model = AutoModel.from_pretrained('vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k-MLM_785k_emb_updated')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k-MLM_785k_emb_updated)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7858 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MarginMSELoss.MarginMSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 30,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 250, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-amharic
|
mbeukman
| 2022-02-22T11:42:08Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"NER",
"am",
"dataset:masakhaner",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language:
- am
tags:
- NER
- token-classification
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "ቀዳሚው የሶማሌ ክልል በአወዳይ ከተማ ለተገደሉ የክልሉ ተወላጆች ያከናወነው የቀብር ስነ ስርዓትን የተመለከተ ዘገባ ነው ፡፡"
---
# xlm-roberta-base-finetuned-swahili-finetuned-ner-amharic
This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base-finetuned-swahili](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Amharic part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset, a named entity recognition dataset consisting mostly of news articles in 10 different African languages.
The model was fine-tuned for 50 epochs with a maximum sequence length of 200, a batch size of 32 and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best of those 5 seeds (aggregate F1 on the test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as its generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g., names of people that consist of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon and may not have been found in the training data (due to e.g. different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are computed on the MasakhaNER test set, whose distribution is similar to that of the training set, so they do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-amharic](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-amharic) (This model) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | amh | 70.34 | 69.72 | 70.97 | 72.00 | 75.00 | 51.00 | 73.00 |
| [xlm-roberta-base-finetuned-amharic-finetuned-ner-amharic](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-amharic-finetuned-ner-amharic) | [amh](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-amharic) | amh | 79.55 | 76.71 | 82.62 | 70.00 | 84.00 | 62.00 | 91.00 |
| [xlm-roberta-base-finetuned-ner-amharic](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-amharic) | [base](https://huggingface.co/xlm-roberta-base) | amh | 72.63 | 70.49 | 74.91 | 76.00 | 75.00 | 52.00 | 78.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-amharic'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "ቀዳሚው የሶማሌ ክልል በአወዳይ ከተማ ለተገደሉ የክልሉ ተወላጆች ያከናወነው የቀብር ስነ ስርዓትን የተመለከተ ዘገባ ነው ፡፡"
ner_results = nlp(example)
print(ner_results)
```
|
mbeukman/xlm-roberta-base-finetuned-ner-amharic
|
mbeukman
| 2022-02-22T11:32:33Z | 10 | 1 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"NER",
"am",
"dataset:masakhaner",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language:
- am
tags:
- NER
- token-classification
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "ቀዳሚው የሶማሌ ክልል በአወዳይ ከተማ ለተገደሉ የክልሉ ተወላጆች ያከናወነው የቀብር ስነ ስርዓትን የተመለከተ ዘገባ ነው ፡፡"
---
# xlm-roberta-base-finetuned-ner-amharic
This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Amharic part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset, a named entity recognition dataset consisting mostly of news articles in 10 different African languages.
The model was fine-tuned for 50 epochs with a maximum sequence length of 200, a batch size of 32 and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best of those 5 seeds (aggregate F1 on the test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as its generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g., names of people that consist of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon and may not have been found in the training data (due to e.g. different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are computed on the MasakhaNER test set, whose distribution is similar to that of the training set, so they do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-ner-amharic](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-amharic) (This model) | [base](https://huggingface.co/xlm-roberta-base) | amh | 72.63 | 70.49 | 74.91 | 76.00 | 75.00 | 52.00 | 78.00 |
| [xlm-roberta-base-finetuned-amharic-finetuned-ner-amharic](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-amharic-finetuned-ner-amharic) | [amh](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-amharic) | amh | 79.55 | 76.71 | 82.62 | 70.00 | 84.00 | 62.00 | 91.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-amharic](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-amharic) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | amh | 70.34 | 69.72 | 70.97 | 72.00 | 75.00 | 51.00 | 73.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-ner-amharic'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "ቀዳሚው የሶማሌ ክልል በአወዳይ ከተማ ለተገደሉ የክልሉ ተወላጆች ያከናወነው የቀብር ስነ ስርዓትን የተመለከተ ዘገባ ነው ፡፡"
ner_results = nlp(example)
print(ner_results)
```
|
mbeukman/xlm-roberta-base-finetuned-amharic-finetuned-ner-amharic
|
mbeukman
| 2022-02-22T11:30:02Z | 86 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"NER",
"am",
"dataset:masakhaner",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language:
- am
tags:
- NER
- token-classification
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "ቀዳሚው የሶማሌ ክልል በአወዳይ ከተማ ለተገደሉ የክልሉ ተወላጆች ያከናወነው የቀብር ስነ ስርዓትን የተመለከተ ዘገባ ነው ፡፡"
---
# xlm-roberta-base-finetuned-amharic-finetuned-ner-amharic
This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base-finetuned-amharic](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-amharic) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Amharic part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset, a named entity recognition dataset consisting mostly of news articles in 10 different African languages.
The model was fine-tuned for 50 epochs with a maximum sequence length of 200, a batch size of 32 and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best of those 5 seeds (aggregate F1 on the test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as its generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g., names of people that consist of many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon and may not have been found in the training data (due to e.g. different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are computed on the MasakhaNER test set, whose distribution is similar to that of the training set, so they do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-amharic-finetuned-ner-amharic](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-amharic-finetuned-ner-amharic) (This model) | [amh](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-amharic) | amh | 79.55 | 76.71 | 82.62 | 70.00 | 84.00 | 62.00 | 91.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-amharic](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-amharic) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | amh | 70.34 | 69.72 | 70.97 | 72.00 | 75.00 | 51.00 | 73.00 |
| [xlm-roberta-base-finetuned-ner-amharic](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-amharic) | [base](https://huggingface.co/xlm-roberta-base) | amh | 72.63 | 70.49 | 74.91 | 76.00 | 75.00 | 52.00 | 78.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-amharic-finetuned-ner-amharic'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "ቀዳሚው የሶማሌ ክልል በአወዳይ ከተማ ለተገደሉ የክልሉ ተወላጆች ያከናወነው የቀብር ስነ ስርዓትን የተመለከተ ዘገባ ነው ፡፡"
ner_results = nlp(example)
print(ner_results)
```
|
MahsaShahidi/Persian-Image-Captioning
|
MahsaShahidi
| 2022-02-22T10:49:24Z | 55 | 2 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
model-index:
- name: Persian-Image-Captioning
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Persian-Image-Captioning
This model is a fine-tuned version of [Vision Encoder Decoder](https://huggingface.co/docs/transformers/model_doc/vision-encoder-decoder) on coco-flickr-farsi.
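A hedged usage sketch is shown below; it assumes the repository ships a compatible image processor and tokenizer config, and that the generation settings (e.g. `max_length`) are reasonable defaults rather than tuned values.
```python
# A hedged usage sketch: generate a Persian caption for an image. The image
# processor / tokenizer configs are assumed to be bundled with the checkpoint.
from PIL import Image
from transformers import AutoImageProcessor, AutoTokenizer, VisionEncoderDecoderModel

repo = "MahsaShahidi/Persian-Image-Captioning"
model = VisionEncoderDecoderModel.from_pretrained(repo)
image_processor = AutoImageProcessor.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

image = Image.open("example.jpg").convert("RGB")  # placeholder file name
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values, max_length=32)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```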
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
cammy/bart-large-cnn-finetuned-weaksup-1000-pad
|
cammy
| 2022-02-22T09:29:33Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-weaksup-1000-pad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-weaksup-1000-pad
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4168
- Rouge1: 26.2506
- Rouge2: 10.7802
- Rougel: 19.2236
- Rougelsum: 22.6883
- Gen Len: 68.74
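A hedged usage sketch with the standard summarization pipeline follows; the generation settings are assumptions, not values used during training.
```python
# A hedged usage sketch: summarize a document with the fine-tuned checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="cammy/bart-large-cnn-finetuned-weaksup-1000-pad")
article = "..."  # replace with the text to summarize
print(summarizer(article, max_length=130, min_length=30, do_sample=False)[0]["summary_text"])
```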
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.1434 | 1.0 | 1000 | 0.4168 | 26.2506 | 10.7802 | 19.2236 | 22.6883 | 68.74 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
cammy/distilbart-cnn-12-6-finetuned-weaksup-1000
|
cammy
| 2022-02-22T08:49:00Z | 40 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-cnn-12-6-finetuned-weaksup-1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-12-6-finetuned-weaksup-1000
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6818
- Rouge1: 25.9199
- Rouge2: 11.2697
- Rougel: 20.3598
- Rougelsum: 22.8242
- Gen Len: 66.44
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.644 | 1.0 | 1000 | 1.6818 | 25.9199 | 11.2697 | 20.3598 | 22.8242 | 66.44 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
vocab-transformers/msmarco-distilbert-word2vec256k-MLM_230k
|
vocab-transformers
| 2022-02-22T08:25:00Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
# Model
This model is based on [nicoladecao/msmarco-word2vec256000-distilbert-base-uncased](https://huggingface.co/nicoladecao/msmarco-word2vec256000-distilbert-base-uncased) with a 256k-sized vocabulary initialized with word2vec.
This model has been trained with MLM on the MS MARCO corpus collection for 230k steps. See train_mlm.py for the training script. It was run on 2x V100 GPUs, with the word embedding matrix frozen.
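A hedged usage sketch with the fill-mask pipeline is shown below; the mask token is read from the tokenizer rather than assumed.
```python
# A hedged usage sketch: query the MLM checkpoint through the fill-mask pipeline.
from transformers import pipeline

fill = pipeline("fill-mask", model="vocab-transformers/msmarco-distilbert-word2vec256k-MLM_230k")
masked = f"The capital of France is {fill.tokenizer.mask_token}."
print(fill(masked))
```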
|
cammy/bart-large-cnn-finetuned-weaksup-1000
|
cammy
| 2022-02-22T06:34:42Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-weaksup-1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-weaksup-1000
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6325
- Rouge1: 26.1954
- Rouge2: 10.7128
- Rougel: 19.3873
- Rougelsum: 22.785
- Gen Len: 66.85
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.3896 | 1.0 | 1000 | 1.6325 | 26.1954 | 10.7128 | 19.3873 | 22.785 | 66.85 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Santiagot1105/wav2vec2-lar-xlsr-finetune-es-col
|
Santiagot1105
| 2022-02-22T06:32:15Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-lar-xlsr-finetune-es-col
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-lar-xlsr-finetune-es-col
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1669
- Wer: 0.2595
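A hedged usage sketch with the ASR pipeline follows; wav2vec2 models expect 16 kHz mono audio, and the file name below is only a placeholder.
```python
# A hedged usage sketch: transcribe a Spanish audio clip with this checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Santiagot1105/wav2vec2-lar-xlsr-finetune-es-col")
print(asr("sample.wav")["text"])  # placeholder path to a 16 kHz mono recording
```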
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.1108 | 8.51 | 400 | 0.5936 | 0.6085 |
| 0.3015 | 17.02 | 800 | 0.2071 | 0.2941 |
| 0.0989 | 25.53 | 1200 | 0.1669 | 0.2595 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.1+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Fan-s/reddit-tc-bert
|
Fan-s
| 2022-02-22T05:25:39Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-uncased-base
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-uncased-base
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on a Reddit dialogue dataset.
This model can be used for text classification: given two sentences, it predicts whether they are related.
It achieves the following results on the evaluation set:
- Loss: 0.2297
- Accuracy: 0.9267
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 320
- eval_batch_size: 80
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.11.0
## Usage (HuggingFace Transformers)
You can use the model like this:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
# label_list
label_list = ['matched', 'unmatched']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("Fan-s/reddit-tc-bert", use_fast=True)
model = AutoModelForSequenceClassification.from_pretrained("Fan-s/reddit-tc-bert")
# Set the input
post = "don't make gravy with asbestos."
response = "i'd expect someone with a culinary background to know that. since we're talking about school dinner ladies, they need to learn this pronto."
# Predict whether the two sentences are matched
def predict(post, response, max_seq_length=128):
    with torch.no_grad():
        args = (post, response)
        inputs = tokenizer(*args, padding="max_length", max_length=max_seq_length, truncation=True, return_tensors="pt")
        output = model(**inputs)
        logits = output.logits
        predicted_class = torch.argmax(logits, dim=1).item()
        predict_label = label_list[predicted_class]
        return predict_label, logits
predict_label, logits = predict(post, response)
# Matched
print("predict_label:", predict_label)
```
|
ASCCCCCCCC/bert-base-chinese-finetuned-amazon_zh_20000
|
ASCCCCCCCC
| 2022-02-22T02:51:29Z | 21 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-chinese-finetuned-amazon_zh_20000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-amazon_zh_20000
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1683
- Accuracy: 0.5224
- F1: 0.5194
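A hedged usage sketch with the text-classification pipeline follows; the label names come from the model config and may be generic (e.g. LABEL_0, LABEL_1, ...).
```python
# A hedged usage sketch: classify a Chinese review with this checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification", model="ASCCCCCCCC/bert-base-chinese-finetuned-amazon_zh_20000")
print(classifier("这个产品质量很好,物流也很快。"))
```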
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.2051 | 1.0 | 2500 | 1.1717 | 0.506 | 0.4847 |
| 1.0035 | 2.0 | 5000 | 1.1683 | 0.5224 | 0.5194 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 1.18.3
- Tokenizers 0.10.3
|
speech-seq2seq/wav2vec2-2-gpt2-no-adapter
|
speech-seq2seq
| 2022-02-22T02:47:55Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:librispeech_asr",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1277
- Wer: 1.0334
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.7015 | 0.28 | 500 | 5.3313 | 1.9454 |
| 4.7239 | 0.56 | 1000 | 5.1316 | 1.9288 |
| 4.6686 | 0.84 | 1500 | 4.8812 | 1.9646 |
| 4.0138 | 1.12 | 2000 | 4.8274 | 1.8905 |
| 3.6314 | 1.4 | 2500 | 3.8913 | 1.7298 |
| 1.9511 | 1.68 | 3000 | 2.3486 | 1.3674 |
| 1.212 | 1.96 | 3500 | 1.6223 | 1.1877 |
| 0.8092 | 2.24 | 4000 | 1.3949 | 1.1049 |
| 0.497 | 2.52 | 4500 | 1.2544 | 1.0749 |
| 0.4401 | 2.8 | 5000 | 1.1277 | 1.0334 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
speech-seq2seq/wav2vec2-2-bart-large-no-adapter-frozen-enc
|
speech-seq2seq
| 2022-02-22T01:08:44Z | 33 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:librispeech_asr",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 18.7898
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.5396 | 0.28 | 500 | 9.0401 | 1.0120 |
| 5.898 | 0.56 | 1000 | 9.3199 | 1.0 |
| 4.9595 | 0.84 | 1500 | 8.4434 | 1.4563 |
| 5.7082 | 1.12 | 2000 | 15.1805 | 1.0000 |
| 5.4377 | 1.4 | 2500 | 15.7984 | 1.0021 |
| 5.5941 | 1.68 | 3000 | 18.4928 | 1.0 |
| 5.0662 | 1.96 | 3500 | 17.4886 | 1.0000 |
| 4.8363 | 2.24 | 4000 | 18.9458 | 1.0 |
| 4.7908 | 2.52 | 4500 | 18.2794 | 1.0006 |
| 4.679 | 2.8 | 5000 | 18.7898 | 1.0 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
keras-io/bidirectional-lstm-imdb
|
keras-io
| 2022-02-22T00:28:40Z | 20 | 0 |
tf-keras
|
[
"tf-keras",
"text-classification",
"en",
"dataset:imdb",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- en
datasets:
- imdb
tags:
- text-classification
widget:
- text: "I like that movie, but I'm not sure if it's my favorite."
---
## Keras Implementation of Bidirectional LSTMs for Sentiment Analysis on IMDB 🍿🎥
This repo contains the model and the notebook [on Bidirectional LSTMs for Sentiment Analysis on IMDB](https://keras.io/examples/nlp/bidirectional_lstm_imdb/).
Full credits to: [François Chollet](https://github.com/fchollet)
HF Contribution: [Drishti Sharma](https://huggingface.co/DrishtiSharma)
### Metrics after 10 epochs:
- train_loss: 0.2085
- train_acc: 0.9194
- val_loss: 0.3019
- val_acc: 0.8778
|
devrim/prism-default
|
devrim
| 2022-02-21T23:17:19Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
license: mit
---
The default Prism model available at https://github.com/thompsonb/prism. See the [README.md](https://github.com/thompsonb/prism/blob/master/README.md) file for more information.
**LICENCE NOTICE**
```
MIT License
Copyright (c) Brian Thompson
Portions of this software are copied from fairseq (https://github.com/pytorch/fairseq),
which is released under the MIT License and Copyright (c) Facebook, Inc. and its affiliates.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
|