| pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1–900k) | metadata (stringlengths 2–438k) | id (stringlengths 5–122) | last_modified (null) | tags (listlengths 1–1.84k) | sha (null) | created_at (stringlengths 25–25) | arxiv (listlengths 0–201) | languages (listlengths 0–1.83k) | tags_str (stringlengths 17–9.34k) | text_str (stringlengths 0–389k) | text_lists (listlengths 0–722) | processed_texts (listlengths 1–723) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 3.2385
- eval_wer: 1.0
- eval_runtime: 145.9952
- eval_samples_per_second: 11.507
- eval_steps_per_second: 1.438
- epoch: 0.25
- step: 200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 5
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
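The card does not include a usage snippet; below is a minimal inference sketch (not part of the original card), assuming a local 16 kHz mono WAV file. Note that the reported eval_wer of 1.0 at epoch 0.25 suggests this checkpoint is an early, largely untrained snapshot.
```python
# Hedged sketch: load the fine-tuned checkpoint with the transformers ASR pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="juanhebert/wav2vec2-base-timit-demo-colab",
)

# "sample.wav" is a placeholder path; wav2vec2-base expects 16 kHz mono audio.
print(asr("sample.wav")["text"])
```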
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]}
|
juanhebert/wav2vec2-base-timit-demo-colab
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 3.2385
- eval_wer: 1.0
- eval_runtime: 145.9952
- eval_samples_per_second: 11.507
- eval_steps_per_second: 1.438
- epoch: 0.25
- step: 200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 5
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
[
"# wav2vec2-base-timit-demo-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 3.2385\n- eval_wer: 1.0\n- eval_runtime: 145.9952\n- eval_samples_per_second: 11.507\n- eval_steps_per_second: 1.438\n- epoch: 0.25\n- step: 200",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 5\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec2-base-timit-demo-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 3.2385\n- eval_wer: 1.0\n- eval_runtime: 145.9952\n- eval_samples_per_second: 11.507\n- eval_steps_per_second: 1.438\n- epoch: 0.25\n- step: 200",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 5\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-indonesia
This model is a fine-tuned version of [juanhebert/wav2vec2-indonesia](https://huggingface.co/juanhebert/wav2vec2-indonesia) on the commonvoice "id" dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0727
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 5
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 2.8744 | 0.68 | 200 | 3.0301 | 1.0 |
| 2.868 | 1.36 | 400 | 3.0727 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-indonesia", "results": []}]}
|
juanhebert/wav2vec2-indonesia
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-indonesia
==================
This model is a fine-tuned version of juanhebert/wav2vec2-indonesia on the commonvoice "id" dataset.
It achieves the following results on the evaluation set:
* Loss: 3.0727
* Wer: 1.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 5
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 2
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 5\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 5\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-thai-test
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.7728
- eval_wer: 0.9490
- eval_runtime: 678.2819
- eval_samples_per_second: 3.226
- eval_steps_per_second: 0.404
- epoch: 2.56
- step: 600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-thai-test", "results": []}]}
|
juierror/wav2vec2-large-xls-r-thai-test
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec2-large-xls-r-thai-test
This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.7728
- eval_wer: 0.9490
- eval_runtime: 678.2819
- eval_samples_per_second: 3.226
- eval_steps_per_second: 0.404
- epoch: 2.56
- step: 600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# wav2vec2-large-xls-r-thai-test\n\nThis model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the common_voice dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.7728\n- eval_wer: 0.9490\n- eval_runtime: 678.2819\n- eval_samples_per_second: 3.226\n- eval_steps_per_second: 0.404\n- epoch: 2.56\n- step: 600",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 400\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec2-large-xls-r-thai-test\n\nThis model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the common_voice dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.7728\n- eval_wer: 0.9490\n- eval_runtime: 678.2819\n- eval_samples_per_second: 3.226\n- eval_steps_per_second: 0.404\n- epoch: 2.56\n- step: 600",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 400\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# Harry Potter DialoGPT Model
|
{"tags": ["conversational"]}
|
julianolf/DialoGPT-small-harrypotter
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model
|
[
"# Harry Potter DialogGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialogGPT Model"
] |
audio-to-audio
|
asteroid
|
## Asteroid model `mpariente/DPRNNTasNet(ks=16)_WHAM!_sepclean`
♻️ Imported from https://zenodo.org/record/3903795#.X8pMBRNKjUI
This model was trained by Manuel Pariente using the wham/DPRNN recipe in [Asteroid](https://github.com/asteroid-team/asteroid). It was trained on the sep_clean task of the WHAM! dataset.
### Demo: How to use in Asteroid
```python
# coming soon
```
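Since the demo block above is still a placeholder, here is a minimal sketch (not from the original card) of loading the checkpoint through Asteroid's Hub integration; it assumes asteroid ≥ 0.4 and soundfile are installed.
```python
# Hedged sketch: load the separation model from the Hub and separate a mixture.
import soundfile as sf
import torch
from asteroid.models import BaseModel

model = BaseModel.from_pretrained("julien-c/DPRNNTasNet-ks16_WHAM_sepclean")

# "mixture.wav" is a placeholder path; the model was trained on 8 kHz audio (sep_clean, WHAM!).
mixture, sr = sf.read("mixture.wav", dtype="float32")
with torch.no_grad():
    est_sources = model(torch.from_numpy(mixture).unsqueeze(0))  # (batch, n_src, time)
```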
### Training config
- data:
- mode: min
- nondefault_nsrc: None
- sample_rate: 8000
- segment: 2.0
- task: sep_clean
- train_dir: data/wav8k/min/tr
- valid_dir: data/wav8k/min/cv
- filterbank:
- kernel_size: 16
- n_filters: 64
- stride: 8
- main_args:
- exp_dir: exp/train_dprnn_ks16/
- help: None
- masknet:
- bidirectional: True
- bn_chan: 128
- chunk_size: 100
- dropout: 0
- hid_size: 128
- hop_size: 50
- in_chan: 64
- mask_act: sigmoid
- n_repeats: 6
- n_src: 2
- out_chan: 64
- optim:
- lr: 0.001
- optimizer: adam
- weight_decay: 1e-05
- positional arguments:
- training:
- batch_size: 6
- early_stop: True
- epochs: 200
- gradient_clipping: 5
- half_lr: True
- num_workers: 6
#### Results
- `si_sdr`: 18.227683982688003
- `si_sdr_imp`: 18.22883576588251
- `sdr`: 18.617789605060587
- `sdr_imp`: 18.466745426438173
- `sir`: 29.22773720052717
- `sir_imp`: 29.07669302190474
- `sar`: 19.116352171914485
- `sar_imp`: -130.06009796503054
- `stoi`: 0.9722025377865715
- `stoi_imp`: 0.23415680987800583
### Citing Asteroid
```BibTex
@inproceedings{Pariente2020Asteroid,
title={Asteroid: the {PyTorch}-based audio source separation toolkit for researchers},
author={Manuel Pariente and Samuele Cornell and Joris Cosentino and Sunit Sivasankaran and
Efthymios Tzinis and Jens Heitkaemper and Michel Olvera and Fabian-Robert Stöter and
Mathieu Hu and Juan M. Martín-Doñas and David Ditter and Ariel Frank and Antoine Deleforge
and Emmanuel Vincent},
year={2020},
booktitle={Proc. Interspeech},
}
```
Or on arXiv:
```bibtex
@misc{pariente2020asteroid,
title={Asteroid: the PyTorch-based audio source separation toolkit for researchers},
author={Manuel Pariente and Samuele Cornell and Joris Cosentino and Sunit Sivasankaran and Efthymios Tzinis and Jens Heitkaemper and Michel Olvera and Fabian-Robert Stöter and Mathieu Hu and Juan M. Martín-Doñas and David Ditter and Ariel Frank and Antoine Deleforge and Emmanuel Vincent},
year={2020},
eprint={2005.04132},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
```
|
{"license": "cc-by-sa-4.0", "tags": ["audio-to-audio", "asteroid", "audio", "audio-source-separation"], "datasets": ["wham", "sep_clean"]}
|
julien-c/DPRNNTasNet-ks16_WHAM_sepclean
| null |
[
"asteroid",
"pytorch",
"audio-to-audio",
"audio",
"audio-source-separation",
"dataset:wham",
"dataset:sep_clean",
"arxiv:2005.04132",
"license:cc-by-sa-4.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2005.04132"
] |
[] |
TAGS
#asteroid #pytorch #audio-to-audio #audio #audio-source-separation #dataset-wham #dataset-sep_clean #arxiv-2005.04132 #license-cc-by-sa-4.0 #has_space #region-us
|
## Asteroid model 'mpariente/DPRNNTasNet(ks=16)_WHAM!_sepclean'
️ Imported from URL
This model was trained by Manuel Pariente using the wham/DPRNN recipe in Asteroid. It was trained on the sep_clean task of the WHAM! dataset.
### Demo: How to use in Asteroid
### Training config
- data:
- mode: min
- nondefault_nsrc: None
- sample_rate: 8000
- segment: 2.0
- task: sep_clean
- train_dir: data/wav8k/min/tr
- valid_dir: data/wav8k/min/cv
- filterbank:
- kernel_size: 16
- n_filters: 64
- stride: 8
- main_args:
- exp_dir: exp/train_dprnn_ks16/
- help: None
- masknet:
- bidirectional: True
- bn_chan: 128
- chunk_size: 100
- dropout: 0
- hid_size: 128
- hop_size: 50
- in_chan: 64
- mask_act: sigmoid
- n_repeats: 6
- n_src: 2
- out_chan: 64
- optim:
- lr: 0.001
- optimizer: adam
- weight_decay: 1e-05
- positional arguments:
- training:
- batch_size: 6
- early_stop: True
- epochs: 200
- gradient_clipping: 5
- half_lr: True
- num_workers: 6
#### Results
- 'si_sdr': 18.227683982688003
- 'si_sdr_imp': 18.22883576588251
- 'sdr': 18.617789605060587
- 'sdr_imp': 18.466745426438173
- 'sir': 29.22773720052717
- 'sir_imp': 29.07669302190474
- 'sar': 19.116352171914485
- 'sar_imp': -130.06009796503054
- 'stoi': 0.9722025377865715
- 'stoi_imp': 0.23415680987800583
### Citing Asteroid
Or on arXiv:
|
[
"## Asteroid model 'mpariente/DPRNNTasNet(ks=16)_WHAM!_sepclean'\n\n️ Imported from URL\n\nThis model was trained by Manuel Pariente using the wham/DPRNN recipe in Asteroid. It was trained on the sep_clean task of the WHAM! dataset.",
"### Demo: How to use in Asteroid",
"### Training config\n\n- data:\n\t- mode: min\n\t- nondefault_nsrc: None\n\t- sample_rate: 8000\n\t- segment: 2.0\n\t- task: sep_clean\n\t- train_dir: data/wav8k/min/tr\n\t- valid_dir: data/wav8k/min/cv\n- filterbank:\n\t- kernel_size: 16\n\t- n_filters: 64\n\t- stride: 8\n- main_args:\n\t- exp_dir: exp/train_dprnn_ks16/\n\t- help: None\n- masknet:\n\t- bidirectional: True\n\t- bn_chan: 128\n\t- chunk_size: 100\n\t- dropout: 0\n\t- hid_size: 128\n\t- hop_size: 50\n\t- in_chan: 64\n\t- mask_act: sigmoid\n\t- n_repeats: 6\n\t- n_src: 2\n\t- out_chan: 64\n- optim:\n\t- lr: 0.001\n\t- optimizer: adam\n\t- weight_decay: 1e-05\n- positional arguments:\n- training:\n\t- batch_size: 6\n\t- early_stop: True\n\t- epochs: 200\n\t- gradient_clipping: 5\n\t- half_lr: True\n\t- num_workers: 6",
"#### Results\n\n- 'si_sdr': 18.227683982688003\n- 'si_sdr_imp': 18.22883576588251\n- 'sdr': 18.617789605060587\n- 'sdr_imp': 18.466745426438173\n- 'sir': 29.22773720052717\n- 'sir_imp': 29.07669302190474\n- 'sar': 19.116352171914485\n- 'sar_imp': -130.06009796503054\n- 'stoi': 0.9722025377865715\n- 'stoi_imp': 0.23415680987800583",
"### Citing Asteroid\n\n\n\nOr on arXiv:"
] |
[
"TAGS\n#asteroid #pytorch #audio-to-audio #audio #audio-source-separation #dataset-wham #dataset-sep_clean #arxiv-2005.04132 #license-cc-by-sa-4.0 #has_space #region-us \n",
"## Asteroid model 'mpariente/DPRNNTasNet(ks=16)_WHAM!_sepclean'\n\n️ Imported from URL\n\nThis model was trained by Manuel Pariente using the wham/DPRNN recipe in Asteroid. It was trained on the sep_clean task of the WHAM! dataset.",
"### Demo: How to use in Asteroid",
"### Training config\n\n- data:\n\t- mode: min\n\t- nondefault_nsrc: None\n\t- sample_rate: 8000\n\t- segment: 2.0\n\t- task: sep_clean\n\t- train_dir: data/wav8k/min/tr\n\t- valid_dir: data/wav8k/min/cv\n- filterbank:\n\t- kernel_size: 16\n\t- n_filters: 64\n\t- stride: 8\n- main_args:\n\t- exp_dir: exp/train_dprnn_ks16/\n\t- help: None\n- masknet:\n\t- bidirectional: True\n\t- bn_chan: 128\n\t- chunk_size: 100\n\t- dropout: 0\n\t- hid_size: 128\n\t- hop_size: 50\n\t- in_chan: 64\n\t- mask_act: sigmoid\n\t- n_repeats: 6\n\t- n_src: 2\n\t- out_chan: 64\n- optim:\n\t- lr: 0.001\n\t- optimizer: adam\n\t- weight_decay: 1e-05\n- positional arguments:\n- training:\n\t- batch_size: 6\n\t- early_stop: True\n\t- epochs: 200\n\t- gradient_clipping: 5\n\t- half_lr: True\n\t- num_workers: 6",
"#### Results\n\n- 'si_sdr': 18.227683982688003\n- 'si_sdr_imp': 18.22883576588251\n- 'sdr': 18.617789605060587\n- 'sdr_imp': 18.466745426438173\n- 'sir': 29.22773720052717\n- 'sir_imp': 29.07669302190474\n- 'sar': 19.116352171914485\n- 'sar_imp': -130.06009796503054\n- 'stoi': 0.9722025377865715\n- 'stoi_imp': 0.23415680987800583",
"### Citing Asteroid\n\n\n\nOr on arXiv:"
] |
token-classification
|
transformers
|
# EsperBERTo: RoBERTa-like Language model trained on Esperanto
**Companion model to blog post https://huggingface.co/blog/how-to-train** 🔥
## Training Details
- current checkpoint: 566000
- machine name: `galinette`

## Example pipeline
```python
from transformers import TokenClassificationPipeline, pipeline
MODEL_PATH = "./models/EsperBERTo-small-pos/"
nlp = pipeline(
"ner",
model=MODEL_PATH,
tokenizer=MODEL_PATH,
)
# or instantiate a TokenClassificationPipeline directly.
nlp("Mi estas viro kej estas tago varma.")
# {'entity': 'PRON', 'score': 0.9979867339134216, 'word': ' Mi'}
# {'entity': 'VERB', 'score': 0.9683094620704651, 'word': ' estas'}
# {'entity': 'VERB', 'score': 0.9797462821006775, 'word': ' estas'}
# {'entity': 'NOUN', 'score': 0.8509314060211182, 'word': ' tago'}
# {'entity': 'ADJ', 'score': 0.9996201395988464, 'word': ' varma'}
```
|
{"language": "eo", "thumbnail": "https://huggingface.co/blog/assets/01_how-to-train/EsperBERTo-thumbnail-v2.png", "widget": [{"text": "Mi estas viro kej estas tago varma."}]}
|
julien-c/EsperBERTo-small-pos
| null |
[
"transformers",
"pytorch",
"jax",
"onnx",
"safetensors",
"roberta",
"token-classification",
"eo",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"eo"
] |
TAGS
#transformers #pytorch #jax #onnx #safetensors #roberta #token-classification #eo #autotrain_compatible #endpoints_compatible #region-us
|
# EsperBERTo: RoBERTa-like Language model trained on Esperanto
Companion model to blog post URL
## Training Details
- current checkpoint: 566000
- machine name: 'galinette'

## Example pipeline
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="julien-c/EsperBERTo-small",
tokenizer="julien-c/EsperBERTo-small"
)
fill_mask("Jen la komenco de bela <mask>.")
# This is the beginning of a beautiful <mask>.
# =>
# {
# 'score':0.06502299010753632
# 'sequence':'<s> Jen la komenco de bela vivo.</s>'
# 'token':1099
# }
# {
# 'score':0.0421181358397007
# 'sequence':'<s> Jen la komenco de bela vespero.</s>'
# 'token':5100
# }
# {
# 'score':0.024884626269340515
# 'sequence':'<s> Jen la komenco de bela laboro.</s>'
# 'token':1570
# }
# {
# 'score':0.02324388362467289
# 'sequence':'<s> Jen la komenco de bela tago.</s>'
# 'token':1688
# }
# {
# 'score':0.020378097891807556
# 'sequence':'<s> Jen la komenco de bela festo.</s>'
# 'token':4580
# }
```
|
{"language": "eo", "thumbnail": "https://huggingface.co/blog/assets/01_how-to-train/EsperBERTo-thumbnail-v2.png", "widget": [{"text": "Jen la komenco de bela <mask>."}, {"text": "Uno du <mask>"}, {"text": "Jen fini\u011das bela <mask>."}]}
|
julien-c/EsperBERTo-small
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"roberta",
"fill-mask",
"eo",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"eo"
] |
TAGS
#transformers #pytorch #jax #safetensors #roberta #fill-mask #eo #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# EsperBERTo: RoBERTa-like Language model trained on Esperanto
Companion model to blog post URL
## Training Details
- current checkpoint: 566000
- machine name: 'galinette'

## How to build a dummy model
```python
from transformers import BertConfig, BertForMaskedLM, BertTokenizer, TFBertForMaskedLM
# The original snippet is truncated above this point; the small config below is
# assumed, mirroring the sibling dummy-unknown card.
DIRNAME = "./bert-xsmall-dummy"
config = BertConfig(10, 20, 1, 1, 40)
model = BertForMaskedLM(config)
model.save_pretrained(DIRNAME)
tf_model = TFBertForMaskedLM.from_pretrained(DIRNAME, from_pt=True)
tf_model.save_pretrained(DIRNAME)
# Slightly different for tokenizer.
# tokenizer = BertTokenizer.from_pretrained(DIRNAME)
# tokenizer.save_pretrained()
```
|
{}
|
julien-c/bert-xsmall-dummy
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
## How to build a dummy model
|
[
"## How to build a dummy model"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n",
"## How to build a dummy model"
] |
feature-extraction
|
transformers
|
# Distilbert, used as a Feature Extractor
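A minimal usage sketch (not part of the original card), mirroring the card's "Hello world" widget example:
```python
# Hedged sketch: extract contextual token embeddings with the feature-extraction pipeline.
from transformers import pipeline

extractor = pipeline("feature-extraction", model="julien-c/distilbert-feature-extraction")

features = extractor("Hello world")  # nested list shaped [batch][tokens][hidden_size]
print(len(features[0]), len(features[0][0]))  # token count and hidden size (768 for DistilBERT)
```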
|
{"tags": ["feature-extraction"], "widget": [{"text": "Hello world"}]}
|
julien-c/distilbert-feature-extraction
| null |
[
"transformers",
"pytorch",
"distilbert",
"feature-extraction",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #feature-extraction #endpoints_compatible #has_space #region-us
|
# Distilbert, used as a Feature Extractor
|
[
"# Distilbert, used as a Feature Extractor"
] |
[
"TAGS\n#transformers #pytorch #distilbert #feature-extraction #endpoints_compatible #has_space #region-us \n",
"# Distilbert, used as a Feature Extractor"
] |
text-classification
|
transformers
|
## distilbert-sagemaker-1609802168
Trained from SageMaker HuggingFace extension.
Fine-tuned from [distilbert-base-uncased](/distilbert-base-uncased) on [imdb](/datasets/imdb) 🔥
#### Eval
| key | value |
| --- | ----- |
| eval_loss | 0.19187863171100616 |
| eval_accuracy | 0.9259 |
| eval_f1 | 0.9272173656811707 |
| eval_precision | 0.9147286821705426 |
| eval_recall | 0.9400517825134436 |
| epoch | 1.0 |
|
{"tags": ["sagemaker"], "datasets": ["imdb"]}
|
julien-c/distilbert-sagemaker-1609802168
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"sagemaker",
"dataset:imdb",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #sagemaker #dataset-imdb #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-sagemaker-1609802168
-------------------------------
Trained from SageMaker HuggingFace extension.
Fine-tuned from distilbert-base-uncased on imdb
#### Eval
|
[
"#### Eval"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #sagemaker #dataset-imdb #autotrain_compatible #endpoints_compatible #region-us \n",
"#### Eval"
] |
null | null |
in the editor i only change this line
Example of a hf.co repo containing signed commits.
hello tabs
|
{}
|
julien-c/dummy-for-flat
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
in the editor i only change this line
Example of a URL repo containing signed commits.
hello tabs
|
[] |
[
"TAGS\n#region-us \n"
] |
fill-mask
|
transformers
|
## Dummy model used for unit testing and CI
```python
import json
import os
from transformers import RobertaConfig, RobertaForMaskedLM, TFRobertaForMaskedLM
DIRNAME = "./dummy-unknown"
config = RobertaConfig(10, 20, 1, 1, 40)
model = RobertaForMaskedLM(config)
model.save_pretrained(DIRNAME)
tf_model = TFRobertaForMaskedLM.from_pretrained(DIRNAME, from_pt=True)
tf_model.save_pretrained(DIRNAME)
# Tokenizer:
vocab = [
"l",
"o",
"w",
"e",
"r",
"s",
"t",
"i",
"d",
"n",
"\u0120",
"\u0120l",
"\u0120n",
"\u0120lo",
"\u0120low",
"er",
"\u0120lowest",
"\u0120newer",
"\u0120wider",
"<unk>",
]
vocab_tokens = dict(zip(vocab, range(len(vocab))))
merges = ["#version: 0.2", "\u0120 l", "\u0120l o", "\u0120lo w", "e r", ""]
vocab_file = os.path.join(DIRNAME, "vocab.json")
merges_file = os.path.join(DIRNAME, "merges.txt")
with open(vocab_file, "w", encoding="utf-8") as fp:
fp.write(json.dumps(vocab_tokens) + "\n")
with open(merges_file, "w", encoding="utf-8") as fp:
fp.write("\n".join(merges))
```
|
{"tags": ["ci"]}
|
julien-c/dummy-unknown
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"ci",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tf #jax #roberta #fill-mask #ci #autotrain_compatible #endpoints_compatible #region-us
|
## Dummy model used for unit testing and CI
|
[
"## Dummy model used for unit testing and CI"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #roberta #fill-mask #ci #autotrain_compatible #endpoints_compatible #region-us \n",
"## Dummy model used for unit testing and CI"
] |
null |
fasttext
|
## FastText model for language identification
#### ♻️ Imported from https://fasttext.cc/docs/en/language-identification.html
> [1] A. Joulin, E. Grave, P. Bojanowski, T. Mikolov, Bag of Tricks for Efficient Text Classification
```bibtex
@article{joulin2016bag,
title={Bag of Tricks for Efficient Text Classification},
author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Mikolov, Tomas},
journal={arXiv preprint arXiv:1607.01759},
year={2016}
}
```
> [2] A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, T. Mikolov, FastText.zip: Compressing text classification models
```bibtex
@article{joulin2016fasttext,
title={FastText.zip: Compressing text classification models},
author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Douze, Matthijs and J{\'e}gou, H{\'e}rve and Mikolov, Tomas},
journal={arXiv preprint arXiv:1612.03651},
year={2016}
}
```
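A usage sketch (not part of the original card): download the checkpoint from this repo and run the fasttext predictor. The filename "lid.176.bin" is an assumption; check the repository's file list.
```python
# Hedged sketch: fastText language identification via the Hugging Face Hub.
import fasttext
from huggingface_hub import hf_hub_download

# "lid.176.bin" is assumed; adjust to the actual checkpoint name in the repo.
model_path = hf_hub_download(repo_id="julien-c/fasttext-language-id", filename="lid.176.bin")
model = fasttext.load_model(model_path)

labels, scores = model.predict("Bonjour tout le monde", k=3)
print(labels, scores)  # e.g. ('__label__fr', ...) with confidence scores
```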
|
{"language": "multilingual", "license": "cc-by-sa-4.0", "library_name": "fasttext", "tags": ["fasttext"], "datasets": ["wikipedia", "tatoeba", "setimes"], "inference": false}
|
julien-c/fasttext-language-id
| null |
[
"fasttext",
"multilingual",
"dataset:wikipedia",
"dataset:tatoeba",
"dataset:setimes",
"license:cc-by-sa-4.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"multilingual"
] |
TAGS
#fasttext #multilingual #dataset-wikipedia #dataset-tatoeba #dataset-setimes #license-cc-by-sa-4.0 #has_space #region-us
|
## FastText model for language identification
#### ️ Imported from URL
> [1] A. Joulin, E. Grave, P. Bojanowski, T. Mikolov, Bag of Tricks for Efficient Text Classification
> [2] A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, T. Mikolov, URL: Compressing text classification models
|
[
"## FastText model for language identification",
"#### ️ Imported from URL\n\n> [1] A. Joulin, E. Grave, P. Bojanowski, T. Mikolov, Bag of Tricks for Efficient Text Classification\n\n\n\n> [2] A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, T. Mikolov, URL: Compressing text classification models"
] |
[
"TAGS\n#fasttext #multilingual #dataset-wikipedia #dataset-tatoeba #dataset-setimes #license-cc-by-sa-4.0 #has_space #region-us \n",
"## FastText model for language identification",
"#### ️ Imported from URL\n\n> [1] A. Joulin, E. Grave, P. Bojanowski, T. Mikolov, Bag of Tricks for Efficient Text Classification\n\n\n\n> [2] A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, T. Mikolov, URL: Compressing text classification models"
] |
token-classification
|
flair
|
## Flair NER model `de-ner-conll03-v0.4.pt`
Imported from https://nlp.informatik.hu-berlin.de/resources/models/de-ner/
### Demo: How to use in Flair
```python
from flair.data import Sentence
from flair.models import SequenceTagger
sentence = Sentence(
"Mein Name ist Julien, ich lebe zurzeit in Paris, ich arbeite bei Hugging Face, Inc."
)
tagger = SequenceTagger.load("julien-c/flair-de-ner")
# predict NER tags
tagger.predict(sentence)
# print sentence with predicted tags
print(sentence.to_tagged_string())
```
yields the following output:
> `Mein Name ist Julien <S-PER> , ich lebe zurzeit in Paris <S-LOC> , ich arbeite bei Hugging <B-ORG> Face <E-ORG> , Inc <S-ORG> .`
### Thanks [@stefan-it](https://huggingface.co/stefan-it) for the Flair integration ❤️ 🔥
|
{"language": "de", "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["conll2003"], "inference": false}
|
julien-c/flair-de-ner
| null |
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"dataset:conll2003",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#flair #pytorch #token-classification #sequence-tagger-model #de #dataset-conll2003 #region-us
|
## Flair NER model 'de-ner-conll03-v0.4.pt'
Imported from URL
### Demo: How to use in Flair
yields the following output:
> 'Mein Name ist Julien <S-PER> , ich lebe zurzeit in Paris <S-LOC> , ich arbeite bei Hugging <B-ORG> Face <E-ORG> , Inc <S-ORG> .'
### Thanks @stefan-it for the Flair integration ️
|
[
"## Flair NER model 'de-ner-conll03-v0.4.pt'\n\nImported from URL",
"### Demo: How to use in Flair\n\n\n\nyields the following output:\n\n> 'Mein Name ist Julien <S-PER> , ich lebe zurzeit in Paris <S-LOC> , ich arbeite bei Hugging <B-ORG> Face <E-ORG> , Inc <S-ORG> .'",
"### Thanks @stefan-it for the Flair integration ️"
] |
[
"TAGS\n#flair #pytorch #token-classification #sequence-tagger-model #de #dataset-conll2003 #region-us \n",
"## Flair NER model 'de-ner-conll03-v0.4.pt'\n\nImported from URL",
"### Demo: How to use in Flair\n\n\n\nyields the following output:\n\n> 'Mein Name ist Julien <S-PER> , ich lebe zurzeit in Paris <S-LOC> , ich arbeite bei Hugging <B-ORG> Face <E-ORG> , Inc <S-ORG> .'",
"### Thanks @stefan-it for the Flair integration ️"
] |
token-classification
|
flair
|
## Flair NER model `en-ner-conll03-v0.4.pt`
Imported from https://nlp.informatik.hu-berlin.de/resources/models/ner/
### Demo: How to use in Flair
```python
from flair.data import Sentence
from flair.models import SequenceTagger
sentence = Sentence(
"My name is Julien, I currently live in Paris, I work at Hugging Face, Inc."
)
tagger = SequenceTagger.load("julien-c/flair-ner")
# predict NER tags
tagger.predict(sentence)
# print sentence with predicted tags
print(sentence.to_tagged_string())
```
yields the following output:
> `My name is Julien <S-PER> , I currently live in Paris <S-LOC> , I work at Hugging <B-LOC> Face <E-LOC> .`
### Thanks [@stefan-it](https://huggingface.co/stefan-it) for the Flair integration ❤️ 🔥
|
{"language": "en", "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["conll2003"], "inference": false}
|
julien-c/flair-ner
| null |
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"dataset:conll2003",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#flair #pytorch #token-classification #sequence-tagger-model #en #dataset-conll2003 #region-us
|
## Flair NER model 'en-ner-conll03-v0.4.pt'
Imported from URL
### Demo: How to use in Flair
yields the following output:
> 'My name is Julien <S-PER> , I currently live in Paris <S-LOC> , I work at Hugging <B-LOC> Face <E-LOC> .'
### Thanks @stefan-it for the Flair integration ️
|
[
"## Flair NER model 'en-ner-conll03-v0.4.pt'\n\nImported from URL",
"### Demo: How to use in Flair\n\n\n\nyields the following output:\n\n> 'My name is Julien <S-PER> , I currently live in Paris <S-LOC> , I work at Hugging <B-LOC> Face <E-LOC> .'",
"### Thanks @stefan-it for the Flair integration ️"
] |
[
"TAGS\n#flair #pytorch #token-classification #sequence-tagger-model #en #dataset-conll2003 #region-us \n",
"## Flair NER model 'en-ner-conll03-v0.4.pt'\n\nImported from URL",
"### Demo: How to use in Flair\n\n\n\nyields the following output:\n\n> 'My name is Julien <S-PER> , I currently live in Paris <S-LOC> , I work at Hugging <B-LOC> Face <E-LOC> .'",
"### Thanks @stefan-it for the Flair integration ️"
] |
image-classification
|
transformers
|
# hotdog-not-hotdog
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### hot dog

#### not hot dog

|
{"tags": ["image-classification", "huggingpics"], "metrics": ["accuracy"]}
|
julien-c/hotdog-not-hotdog
| null |
[
"transformers",
"pytorch",
"tensorboard",
"coreml",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #coreml #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# hotdog-not-hotdog
Autogenerated by HuggingPics️
Create your own image classifier for anything by running the demo on Google Colab.
Report any issues with the demo at the github repo.
## Example Images
#### hot dog
!hot dog
#### not hot dog
!miscellaneous
|
[
"# hotdog-not-hotdog\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### hot dog\n\n!hot dog",
"#### not hot dog\n\n!miscellaneous"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #coreml #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# hotdog-not-hotdog\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### hot dog\n\n!hot dog",
"#### not hot dog\n\n!miscellaneous"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4381098/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Training

### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"], "inference": false}
|
julien-c/kan-bayashi-jsut_tts_train_tacotron2
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Training

### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent_train.URL'\n\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Training\n\n",
"### Citing ESPnet\n\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent_train.URL'\n\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Training\n\n",
"### Citing ESPnet\n\n\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
♻️ Imported from https://zenodo.org/record/3963886/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
Model id:
`kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_train.loss.best`
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"], "inference": false}
|
julien-c/kan-bayashi-jsut_tts_train_tacotron2_ja
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
Model id:
'kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_train.URL'
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model \n\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.\n\nModel id: \n'kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_train.URL'",
"### Citing ESPnet\n\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model \n\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.\n\nModel id: \n'kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_train.URL'",
"### Citing ESPnet\n\n\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS model
### `kan-bayashi/csmsc_tacotron2`
♻️ Imported from https://zenodo.org/record/3969118
This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"], "widget": [{"text": "\u8bf7\u60a8\u8bf4\u5f97\u6162\u4e9b\u597d\u5417"}]}
|
julien-c/kan-bayashi_csmsc_tacotron2
| null |
[
"espnet",
"audio",
"text-to-speech",
"zh",
"dataset:csmsc",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"zh"
] |
TAGS
#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS model
### 'kan-bayashi/csmsc_tacotron2'
️ Imported from URL
This model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS model",
"### 'kan-bayashi/csmsc_tacotron2'\n\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS model",
"### 'kan-bayashi/csmsc_tacotron2'\n\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\n\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/ljspeech_tts_train_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.loss.best`
♻️ Imported from https://zenodo.org/record/3989498#.X90RlOlKjkM
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
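Since the demo block above is still a placeholder, here is a minimal sketch (not from the original card). It assumes espnet ≥ 0.10 and espnet_model_zoo are installed, and that `Text2Speech.from_pretrained` can resolve this repo id.
```python
# Hedged sketch: synthesize the widget sentence with the ESPnet2 Text2Speech API.
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained(
    "julien-c/ljspeech_tts_train_tacotron2_raw_phn_tacotron_g2p_en_no_space_train"
)

out = tts("Hello, how are you doing?")
# Depending on whether a vocoder is attached, `out` contains a waveform ("wav")
# and/or generated mel features ("feat_gen").
if "wav" in out:
    sf.write("out.wav", out["wav"].numpy(), tts.fs)
```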
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Training config
See full config in [`config.yaml`](./config.yaml)
```yaml
config: conf/tuning/train_tacotron2.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_tacotron2_raw
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"], "widget": [{"text": "Hello, how are you doing?"}]}
|
julien-c/ljspeech_tts_train_tacotron2_raw_phn_tacotron_g2p_en_no_space_train
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/ljspeech_tts_train_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
### Training config
See full config in 'URL'
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/ljspeech_tts_train_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.URL'\n\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\n\n\nor arXiv:",
"### Training config\n\nSee full config in 'URL'"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/ljspeech_tts_train_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.URL'\n\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\n\n\nor arXiv:",
"### Training config\n\nSee full config in 'URL'"
] |
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `kamo-naoyuki/mini_an4_asr_train_raw_bpe_valid.acc.best`
♻️ Imported from https://zenodo.org/record/3957940#.X90XNelKjkM
This model was trained by kamo-naoyuki using mini_an4 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
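Since the demo block above is still a placeholder, here is a minimal sketch (not from the original card). It assumes espnet ≥ 0.10 and espnet_model_zoo are installed, and that `Speech2Text.from_pretrained` can resolve this repo id.
```python
# Hedged sketch: transcribe a 16 kHz utterance with the ESPnet2 Speech2Text API.
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained("julien-c/mini_an4_asr_train_raw_bpe_valid")

speech, rate = sf.read("utterance.wav")  # placeholder path; an4 audio is 16 kHz
nbests = speech2text(speech)
text, tokens, token_ids, hyp = nbests[0]
print(text)
```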
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["ljspeech"]}
|
julien-c/mini_an4_asr_train_raw_bpe_valid
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 ASR model
### 'kamo-naoyuki/mini_an4_asr_train_raw_bpe_valid.URL'
️ Imported from URL
This model was trained by kamo-naoyuki using mini_an4 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 ASR model",
"### 'kamo-naoyuki/mini_an4_asr_train_raw_bpe_valid.URL'\n\n️ Imported from URL\n\nThis model was trained by kamo-naoyuki using mini_an4 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 ASR model",
"### 'kamo-naoyuki/mini_an4_asr_train_raw_bpe_valid.URL'\n\n️ Imported from URL\n\nThis model was trained by kamo-naoyuki using mini_an4 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\n\n\nor arXiv:"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9150
- Accuracy: 0.2662
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
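Purely as an illustration (not part of the auto-generated card), the hyperparameters above map roughly onto Hugging Face `TrainingArguments`; the output directory and any unlisted defaults are assumptions:
```python
from transformers import TrainingArguments

# Sketch of the arguments implied by the list above (Adam betas/epsilon are library defaults)
training_args = TrainingArguments(
    output_dir="model",                 # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```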
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.0528 | 0.44 | 1000 | 3.0265 | 0.2223 |
| 2.9836 | 0.89 | 2000 | 2.9263 | 0.2332 |
| 2.7409 | 1.33 | 3000 | 2.9041 | 0.2533 |
| 2.7905 | 1.77 | 4000 | 2.8763 | 0.2606 |
| 2.4359 | 2.22 | 5000 | 2.9072 | 0.2642 |
| 2.4507 | 2.66 | 6000 | 2.9230 | 0.2644 |
### Framework versions
- Transformers 4.7.0.dev0
- Pytorch 1.8.1+cu102
- Datasets 1.8.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated-from-trainer"], "datasets": ["julien-c/reactiongif"], "metrics": ["accuracy"]}
|
julien-c/reactiongif-roberta
| null |
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated-from-trainer",
"dataset:julien-c/reactiongif",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #safetensors #roberta #text-classification #generated-from-trainer #dataset-julien-c/reactiongif #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
model
=====
This model is a fine-tuned version of distilroberta-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.9150
* Accuracy: 0.2662
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.7.0.dev0
* Pytorch 1.8.1+cu102
* Datasets 1.8.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.7.0.dev0\n* Pytorch 1.8.1+cu102\n* Datasets 1.8.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #safetensors #roberta #text-classification #generated-from-trainer #dataset-julien-c/reactiongif #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.7.0.dev0\n* Pytorch 1.8.1+cu102\n* Datasets 1.8.0\n* Tokenizers 0.10.3"
] |
null | null |
<style>
@import url('https://fonts.googleapis.com/css2?family=Roboto+Slab:wght@900&family=Rokkitt:wght@900&display=swap');
.text1 {
position: absolute;
top: 3vh;
left: calc(50% - 50vh);
}
.text2 {
position: absolute;
bottom: 4vh;
left: 50%;
}
.retro {
font-family: "Roboto Slab";
font-size: 13vh;
display: block;
color: #000;
text-shadow: -0.5vh 0 #8800aa, 0 0.5vh #8800aa, 0.5vh 0 #aa0088, 0 -0.5vh #aa0088;
}
</style>
<div class="text1">
<span class="retro">RETRO</span>
</div>
<div class="text2">
<span class="retro">WAVE</span>
</div>
<script type="module">
import * as THREE from "https://cdn.jsdelivr.net/npm/three@0.123.0/build/three.module.js";
import { OrbitControls } from "https://cdn.jsdelivr.net/npm/three@0.123.0/examples/jsm/controls/OrbitControls.js";
import { TWEEN } from "https://cdn.jsdelivr.net/npm/three@0.123.0/examples/jsm/libs/tween.module.min.js";
let scene = new THREE.Scene();
let camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 1, 100);
camera.position.set(-5, 10, 20);
let renderer = new THREE.WebGLRenderer({antialias: true});
renderer.setSize(innerWidth, innerHeight);
document.querySelector("div.prose").appendChild(renderer.domElement);
const textureCube = generateCubeMap();
let controls = new OrbitControls(camera, renderer.domElement);
controls.enableZoom = false;
controls.enablePan = false;
controls.enableKeys = false;
let square = new THREE.GridHelper(20, 1, 0xaaaaff, 0xaaaaff);
square.position.y = 0.01;
scene.add(square);
let grid = new THREE.GridHelper(20, 10, "magenta", "magenta");
console.log(grid.geometry.attributes.position.count);
let moveable = [];
for(let i = 0; i < grid.geometry.attributes.position.count / 4; i++){
moveable.push(1, 1, 0, 0);
}
console.log(moveable.length)
grid.geometry.setAttribute("moveable", new THREE.Float32BufferAttribute(moveable, 1));
let uniforms = {
time: {value: 0},
speed: {value: 1},
size: {value: 20}
}
grid.material.onBeforeCompile = shader => {
shader.uniforms.time = uniforms.time;
shader.uniforms.speed = uniforms.speed;
shader.uniforms.size = uniforms.size;
shader.vertexShader = `
uniform float time;
uniform float speed;
uniform float size;
attribute float moveable;
${shader.vertexShader}
`.replace(
`#include <begin_vertex>`,
`#include <begin_vertex>
if (floor(moveable + 0.1) > 0.5){
float start = size * -0.5;
float zPos = mod( (position.z - start) + (time * speed), size) + start;
transformed.z = zPos;
}
`
);
console.log(shader.vertexShader)
}
scene.add(grid);
// palm
let base = new THREE.Object3D();
let baseSpline = new THREE.CatmullRomCurve3([
new THREE.Vector2(),
new THREE.Vector2(3, 0),
new THREE.Vector2(2.5, -7),
new THREE.Vector2(-4, -6),
new THREE.Vector2(-4.8, 0)
], true, "catmullrom", 0.1);
let baseG = new THREE.ExtrudeBufferGeometry(new THREE.Shape(baseSpline.getPoints(50)), {depth: 0.2, bevelEnabled: true, bevelThickness: 0.8, bevelSize: 0.2});
let baseObject = new THREE.Mesh(baseG, new THREE.MeshBasicMaterial({color: "magenta", wireframe: false, envMap: textureCube}));
base.add(baseObject);
scene.add(base);
let phalanxes = [];
let f1 = createFinger(new THREE.Object3D(), 0.8, false); // pinky
let f2 = createFinger(new THREE.Object3D(), 0.95, false); // ring
let f3 = createFinger(new THREE.Object3D(), 1, false); // middle
let f4 = createFinger(new THREE.Object3D(), 0.95, false); // index
let f5Base = new THREE.Object3D();
let f5 = createFinger(new THREE.Object3D(), 0.75, true); // thumb
f5Base.add(f5);
base.add(f1, f2, f3, f4, f5Base);
f1.position.set( -4, 0.2, 0);
f2.position.set( -2, 0.2, 0);
f3.position.set( 0, 0.2, 0);
f4.position.set( 2, 0.2, 0);
f5Base.position.set( 3, -3, 0);
f5Base.rotation.set( 0, 0, THREE.MathUtils.degToRad(-60));
f5Base.updateMatrixWorld();
let g = createPhalanxGeom(1, 3);
let m = new THREE.MeshBasicMaterial({color: "aqua", wireframe: false, envMap: textureCube});
let o = new THREE.InstancedMesh(g, m, phalanxes.length);
phalanxes.forEach( (ph, i) => {
ph.updateMatrixWorld();
o.setMatrixAt(i, ph.matrixWorld);
})
scene.add(o);
window.addEventListener( 'resize', onWindowResize, false );
let t = new TWEEN.Tween({value: Math.PI * 0.075})
.to({value: Math.PI * 0.45}, 4000)
.easing(TWEEN.Easing.Quadratic.InOut)
.repeat(Infinity)
.yoyo(true)
.onUpdate(val => {
phalanxes.forEach((ph, i) => {
ph.rotation.x = val.value;
ph.updateMatrixWorld();
o.setMatrixAt(i, ph.matrixWorld)
});
o.instanceMatrix.needsUpdate = true;
});
t.start();
let clock = new THREE.Clock();
renderer.setAnimationLoop(() => {
let t = clock.getElapsedTime();
TWEEN.update();
uniforms.time.value = t;
base.rotation.x = (Math.sin(t * 0.125) * 0.5 + 0.5) * -Math.PI * 0.5;
base.rotation.y = -t * 0.125;
renderer.render(scene, camera);
});
function onWindowResize() {
camera.aspect = innerWidth / innerHeight;
camera.updateProjectionMatrix();
renderer.setSize( innerWidth, innerHeight );
}
function createFinger(phalanx, scale, isThumb){
phalanxes.push(phalanx);
let current = phalanx;
for(let i = 0; i < (isThumb ? 1 : 2); i++){
let p = new THREE.Object3D();
p.position.y = 3;
p.scale.setScalar(0.85);
current.add(p);
phalanxes.push(p);
current = p;
}
phalanx.scale.setScalar(scale);
return phalanx;
}
function createPhalanxGeom(R, L){
let r = R * 0.85;
let R1 = R - r;
let a = Math.asin(R1 / L);
let path = new THREE.Path();
path.absarc(0, 0, R, Math.PI * 1.5, a);
path.absarc(0, L, r, a, Math.PI * 0.5);
let pts = path.getPoints(5);
let g = new THREE.LatheBufferGeometry(pts);
return g;
}
function generateCubeMap(){
let images = [];
let c = document.createElement("canvas");
c.width = 4;
c.height = c.width;
let ctx = c.getContext("2d");
for(let i= 0; i < 6;i++){
ctx.fillStyle = "#fff";
ctx.fillRect(0, 0, c.width, c.height);
for(let j = 0; j < (c.width * c.height) / 2; j++){
ctx.fillStyle = Math.random() < 0.5 ? "#f0f" : "#40f";
ctx.fillRect(
Math.floor(Math.random() * c.width),
Math.floor(Math.random() * c.height),
2,
1
);
}
images.push(c.toDataURL());
}
let cm = new THREE.CubeTextureLoader().load(images);
console.log(cm);
return cm;
}
</script>
|
{}
|
julien-c/roberta-threejs
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
<style>
@import url('URL
.text1 {
position: absolute;
top: 3vh;
left: calc(50% - 50vh);
}
.text2 {
position: absolute;
bottom: 4vh;
left: 50%;
}
.retro {
font-family: "Roboto Slab";
font-size: 13vh;
display: block;
color: #000;
text-shadow: -0.5vh 0 #8800aa, 0 0.5vh #8800aa, 0.5vh 0 #aa0088, 0 -0.5vh #aa0088;
}
</style>
<div class="text1">
<span class="retro">RETRO</span>
</div>
<div class="text2">
<span class="retro">WAVE</span>
</div>
<script type="module">
import * as THREE from "URL
import { OrbitControls } from "URL
import { TWEEN } from "URL
let scene = new THREE.Scene();
let camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 1, 100);
URL(-5, 10, 20);
let renderer = new THREE.WebGLRenderer({antialias: true});
renderer.setSize(innerWidth, innerHeight);
document.querySelector("URL").appendChild(renderer.domElement);
const textureCube = generateCubeMap();
let controls = new OrbitControls(camera, renderer.domElement);
controls.enableZoom = false;
controls.enablePan = false;
controls.enableKeys = false;
let square = new THREE.GridHelper(20, 1, 0xaaaaff, 0xaaaff);
square.position.y = 0.01;
URL(square);
let grid = new THREE.GridHelper(20, 10, "magenta", "magenta");
URL(URL);
let moveable = [];
for(let i = 0; i < URL / 4; i++){
URL(1, 1, 0, 0);
}
URL(URL)
grid.geometry.setAttribute("moveable", new THREE.Float32BufferAttribute(moveable, 1));
let uniforms = {
time: {value: 0},
speed: {value: 1},
size: {value: 20}
}
grid.material.onBeforeCompile = shader => {
URL = URL;
URL = URL;
URL = URL;
shader.vertexShader = '
uniform float time;
uniform float speed;
uniform float size;
attribute float moveable;
${shader.vertexShader}
'.replace(
'#include <begin_vertex>',
'#include <begin_vertex>
if (floor(moveable + 0.1) > 0.5){
float start = size * -0.5;
float zPos = mod( (position.z - start) + (time * speed), size) + start;
transformed.z = zPos;
}
'
);
URL(shader.vertexShader)
}
URL(grid);
// palm
let base = new THREE.Object3D();
let baseSpline = new THREE.CatmullRomCurve3([
new THREE.Vector2(),
new THREE.Vector2(3, 0),
new THREE.Vector2(2.5, -7),
new THREE.Vector2(-4, -6),
new THREE.Vector2(-4.8, 0)
], true, "catmullrom", 0.1);
let baseG = new THREE.ExtrudeBufferGeometry(new THREE.Shape(baseSpline.getPoints(50)), {depth: 0.2, bevelEnabled: true, bevelThickness: 0.8, bevelSize: 0.2});
let baseObject = new THREE.Mesh(baseG, new THREE.MeshBasicMaterial({color: "magenta", wireframe: false, envMap: textureCube}));
URL(baseObject);
URL(base);
let phalanxes = [];
let f1 = createFinger(new THREE.Object3D(), 0.8, false); // pinky
let f2 = createFinger(new THREE.Object3D(), 0.95, false); // ring
let f3 = createFinger(new THREE.Object3D(), 1, false); // middle
let f4 = createFinger(new THREE.Object3D(), 0.95, false); // index
let f5Base = new THREE.Object3D();
let f5 = createFinger(new THREE.Object3D(), 0.75, true); // thumb
URL(f5);
URL(f1, f2, f3, f4, f5Base);
URL( -4, 0.2, 0);
URL( -2, 0.2, 0);
URL( 0, 0.2, 0);
URL( 2, 0.2, 0);
URL( 3, -3, 0);
URL( 0, 0, THREE.MathUtils.degToRad(-60));
f5Base.updateMatrixWorld();
let g = createPhalanxGeom(1, 3);
let m = new THREE.MeshBasicMaterial({color: "aqua", wireframe: false, envMap: textureCube});
let o = new THREE.InstancedMesh(g, m, URL);
phalanxes.forEach( (ph, i) => {
ph.updateMatrixWorld();
o.setMatrixAt(i, ph.matrixWorld);
})
URL(o);
window.addEventListener( 'resize', onWindowResize, false );
let t = new TWEEN.Tween({value: Math.PI * 0.075})
.to({value: Math.PI * 0.45}, 4000)
.easing(TWEEN.Easing.Quadratic.InOut)
.repeat(Infinity)
.yoyo(true)
.onUpdate(val => {
phalanxes.forEach((ph, i) => {
ph.rotation.x = URL;
ph.updateMatrixWorld();
o.setMatrixAt(i, ph.matrixWorld)
});
o.instanceMatrix.needsUpdate = true;
});
t.start();
let clock = new THREE.Clock();
renderer.setAnimationLoop(() => {
let t = clock.getElapsedTime();
URL();
URL = t;
base.rotation.x = (URL(t * 0.125) * 0.5 + 0.5) * -Math.PI * 0.5;
base.rotation.y = -t * 0.125;
URL(scene, camera);
});
function onWindowResize() {
URL = innerWidth / innerHeight;
camera.updateProjectionMatrix();
renderer.setSize( innerWidth, innerHeight );
}
function createFinger(phalanx, scale, isThumb){
URL(phalanx);
let current = phalanx;
for(let i = 0; i < (isThumb ? 1 : 2); i++){
let p = new THREE.Object3D();
p.position.y = 3;
p.scale.setScalar(0.85);
URL(p);
URL(p);
current = p;
}
URL.setScalar(scale);
return phalanx;
}
function createPhalanxGeom(R, L){
let r = R * 0.85;
let R1 = R - r;
let a = URL(R1 / L);
let path = new THREE.Path();
URL(0, 0, R, Math.PI * 1.5, a);
URL(0, L, r, a, Math.PI * 0.5);
let pts = path.getPoints(5);
let g = new THREE.LatheBufferGeometry(pts);
return g;
}
function generateCubeMap(){
let images = [];
let c = document.createElement("canvas");
c.width = 4;
c.height = c.width;
let ctx = c.getContext("2d");
for(let i= 0; i < 6;i++){
ctx.fillStyle = "#fff";
ctx.fillRect(0, 0, c.width, c.height);
for(let j = 0; j < (c.width * c.height) / 2; j++){
ctx.fillStyle = URL() < 0.5 ? "#f0f" : "#40f";
ctx.fillRect(
URL(URL() * c.width),
URL(URL() * c.height),
2,
1
);
}
URL(c.toDataURL());
}
let cm = new THREE.CubeTextureLoader().load(images);
URL(cm);
return cm;
}
</script>
|
[] |
[
"TAGS\n#region-us \n"
] |
null | null |
## Dummy model containing only Tensorboard traces
from multiple different experiments
|
{}
|
julien-c/tensorboard-traces
| null |
[
"tensorboard",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#tensorboard #region-us
|
## Dummy model containing only Tensorboard traces
from multiple different experiments
|
[
"## Dummy model containing only Tensorboard traces\n\nfrom multiple different experiments"
] |
[
"TAGS\n#tensorboard #region-us \n",
"## Dummy model containing only Tensorboard traces\n\nfrom multiple different experiments"
] |
image-classification
|
timm
|
# `dpn92` from `rwightman/pytorch-image-models`
From [`rwightman/pytorch-image-models`](https://github.com/rwightman/pytorch-image-models):
```
""" PyTorch implementation of DualPathNetworks
Based on original MXNet implementation https://github.com/cypw/DPNs with
many ideas from another PyTorch implementation https://github.com/oyam/pytorch-DPNs.
This implementation is compatible with the pretrained weights from cypw's MXNet implementation.
Hacked together by / Copyright 2020 Ross Wightman
"""
```
## Model description
[Dual Path Networks](https://arxiv.org/abs/1707.01629)
## Intended uses & limitations
You can use the raw model to classify images along the 1,000 ImageNet labels, but you can also change its head
to fine-tune it on a downstream task (another classification task with different labels, image segmentation or
object detection, to name a few).
### How to use
You can use this model with the usual factory method in `timm`:
```python
import PIL
import timm
import torch
model = timm.create_model("julien-c/timm-dpn92")
img = PIL.Image.open(path_to_an_image)
img = img.convert("RGB")
config = model.default_cfg
if isinstance(config["input_size"], tuple):
img_size = config["input_size"][-2:]
else:
img_size = config["input_size"]
transform = timm.data.transforms_factory.transforms_imagenet_eval(
img_size=img_size,
interpolation=config["interpolation"],
mean=config["mean"],
std=config["std"],
)
input_tensor = transform(img)
input_tensor = input_tensor.unsqueeze(0)
# ^ batch size = 1
with torch.no_grad():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
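As a small follow-up that is not in the original card, the top predictions can be inspected with `torch.topk`; mapping the class indices to human-readable ImageNet names requires a label file that is not bundled here:
```python
# Continues the snippet above: `probs` is a 1000-element probability vector
top5_prob, top5_idx = torch.topk(probs, k=5)
for p, idx in zip(top5_prob, top5_idx):
    print(f"class index {idx.item()}: probability {p.item():.4f}")
```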
### Limitations and bias
The training images in the dataset are usually photos clearly representing one of the 1,000 labels. The model will
probably not generalize well on drawings or images containing multiple objects with different labels.
The training images in the dataset come mostly from the US (45.4%) and Great Britain (7.6%). As such the model or
models created by fine-tuning this model will work better on images picturing scenes from these countries (see
[this paper](https://arxiv.org/abs/1906.02659) for examples).
More generally, [recent research](https://arxiv.org/abs/2010.15052) has shown that even models trained in an
unsupervised fashion on ImageNet (i.e. without using the labels) will pick up racial and gender bias represented in
the training images.
## Training data
This model was pretrained on [ImageNet](http://www.image-net.org/), a dataset consisting of 14 million
hand-annotated images with 1,000 categories.
## Training procedure
To be completed
### Preprocessing
To be completed
## Evaluation results
To be completed
### BibTeX entry and citation info
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
and
```bibtex
@misc{chen2017dual,
title={Dual Path Networks},
author={Yunpeng Chen and Jianan Li and Huaxin Xiao and Xiaojie Jin and Shuicheng Yan and Jiashi Feng},
year={2017},
eprint={1707.01629},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
{"license": "apache-2.0", "tags": ["image-classification", "timm", "dpn"], "datasets": ["imagenet"]}
|
julien-c/timm-dpn92
| null |
[
"timm",
"pytorch",
"image-classification",
"dpn",
"dataset:imagenet",
"arxiv:1707.01629",
"arxiv:1906.02659",
"arxiv:2010.15052",
"license:apache-2.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1707.01629",
"1906.02659",
"2010.15052"
] |
[] |
TAGS
#timm #pytorch #image-classification #dpn #dataset-imagenet #arxiv-1707.01629 #arxiv-1906.02659 #arxiv-2010.15052 #license-apache-2.0 #region-us
|
# 'dpn92' from 'rwightman/pytorch-image-models'
From 'rwightman/pytorch-image-models':
## Model description
Dual Path Networks
## Intended uses & limitations
You can use the raw model to classify images along the 1,000 ImageNet labels, but you can also change its head
to fine-tune it on a downstream task (another classification task with different labels, image segmentation or
object detection, to name a few).
### How to use
You can use this model with the usual factory method in 'timm':
### Limitations and bias
The training images in the dataset are usually photos clearly representing one of the 1,000 labels. The model will
probably not generalize well on drawings or images containing multiple objects with different labels.
The training images in the dataset come mostly from the US (45.4%) and Great Britain (7.6%). As such the model or
models created by fine-tuning this model will work better on images picturing scenes from these countries (see
this paper for examples).
More generally, recent research has shown that even models trained in an
unsupervised fashion on ImageNet (i.e. without using the labels) will pick up racial and gender bias represented in
the training images.
## Training data
This model was pretrained on ImageNet, a dataset consisting of 14 million
hand-annotated images with 1,000 categories.
## Training procedure
To be completed
### Preprocessing
To be completed
## Evaluation results
To be completed
### BibTeX entry and citation info
and
|
[
"# 'dpn92' from 'rwightman/pytorch-image-models'\n\nFrom 'rwightman/pytorch-image-models':",
"## Model description\n\nDual Path Networks",
"## Intended uses & limitations\n\nYou can use the raw model to classify images along the 1,000 ImageNet labels, but you can also change its head\nto fine-tune it on a downstream task (another classification task with different labels, image segmentation or\nobject detection, to name a few).",
"### How to use\n\nYou can use this model with the usual factory method in 'timm':",
"### Limitations and bias\n\nThe training images in the dataset are usually photos clearly representing one of the 1,000 labels. The model will\nprobably not generalize well on drawings or images containing multiple objects with different labels.\n\nThe training images in the dataset come mostly from the US (45.4%) and Great Britain (7.6%). As such the model or\nmodels created by fine-tuning this model will work better on images picturing scenes from these countries (see \nthis paper for examples).\n\nMore generally, recent research has shown that even models trained in an\nunsupervised fashion on ImageNet (i.e. without using the labels) will pick up racial and gender bias represented in\nthe training images.",
"## Training data\n\nThis model was pretrained on ImageNet, a dataset consisting of 14 millions of\nhand-annotated images with 1,000 categories.",
"## Training procedure\n\nTo be completed",
"### Preprocessing\n\nTo be completed",
"## Evaluation results\n\nTo be completed",
"### BibTeX entry and citation info\n\n\n\nand"
] |
[
"TAGS\n#timm #pytorch #image-classification #dpn #dataset-imagenet #arxiv-1707.01629 #arxiv-1906.02659 #arxiv-2010.15052 #license-apache-2.0 #region-us \n",
"# 'dpn92' from 'rwightman/pytorch-image-models'\n\nFrom 'rwightman/pytorch-image-models':",
"## Model description\n\nDual Path Networks",
"## Intended uses & limitations\n\nYou can use the raw model to classify images along the 1,000 ImageNet labels, but you can also change its head\nto fine-tune it on a downstream task (another classification task with different labels, image segmentation or\nobject detection, to name a few).",
"### How to use\n\nYou can use this model with the usual factory method in 'timm':",
"### Limitations and bias\n\nThe training images in the dataset are usually photos clearly representing one of the 1,000 labels. The model will\nprobably not generalize well on drawings or images containing multiple objects with different labels.\n\nThe training images in the dataset come mostly from the US (45.4%) and Great Britain (7.6%). As such the model or\nmodels created by fine-tuning this model will work better on images picturing scenes from these countries (see \nthis paper for examples).\n\nMore generally, recent research has shown that even models trained in an\nunsupervised fashion on ImageNet (i.e. without using the labels) will pick up racial and gender bias represented in\nthe training images.",
"## Training data\n\nThis model was pretrained on ImageNet, a dataset consisting of 14 millions of\nhand-annotated images with 1,000 categories.",
"## Training procedure\n\nTo be completed",
"### Preprocessing\n\nTo be completed",
"## Evaluation results\n\nTo be completed",
"### BibTeX entry and citation info\n\n\n\nand"
] |
voice-activity-detection
| null |
## Example pyannote-audio Voice Activity Detection model
### `pyannote.audio.models.segmentation.PyanNet`
♻️ Imported from https://github.com/pyannote/pyannote-audio-hub
This model was trained by @hbredin.
### Demo: How to use in pyannote-audio
```python
from pyannote.audio.core.inference import Inference
model = Inference('julien-c/voice-activity-detection', device='cuda')
model({
"audio": "TheBigBangTheory.wav"
})
```
### Citing pyannote-audio
```BibTex
@inproceedings{Bredin2020,
Title = {{pyannote.audio: neural building blocks for speaker diarization}},
Author = {{Bredin}, Herv{\'e} and {Yin}, Ruiqing and {Coria}, Juan Manuel and {Gelly}, Gregory and {Korshunov}, Pavel and {Lavechin}, Marvin and {Fustes}, Diego and {Titeux}, Hadrien and {Bouaziz}, Wassim and {Gill}, Marie-Philippe},
Booktitle = {ICASSP 2020, IEEE International Conference on Acoustics, Speech, and Signal Processing},
Address = {Barcelona, Spain},
Month = {May},
Year = {2020},
}
```
or
```bibtex
@inproceedings{Lavechin2020,
author = {Marvin Lavechin and Marie-Philippe Gill and Ruben Bousbib and Herv\'{e} Bredin and Leibny Paola Garcia-Perera},
title = {{End-to-end Domain-Adversarial Voice Activity Detection}},
year = {2020},
url = {https://arxiv.org/abs/1910.10655},
}
```
|
{"license": "mit", "tags": ["pyannote", "audio", "voice-activity-detection"], "datasets": ["dihard"], "inference": false}
|
julien-c/voice-activity-detection
| null |
[
"pytorch",
"pyannote",
"audio",
"voice-activity-detection",
"dataset:dihard",
"arxiv:1910.10655",
"license:mit",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1910.10655"
] |
[] |
TAGS
#pytorch #pyannote #audio #voice-activity-detection #dataset-dihard #arxiv-1910.10655 #license-mit #region-us
|
## Example pyannote-audio Voice Activity Detection model
### 'URL.segmentation.PyanNet'
️ Imported from URL
This model was trained by @hbredin.
### Demo: How to use in pyannote-audio
### Citing pyannote-audio
or
|
[
"## Example pyannote-audio Voice Activity Detection model",
"### 'URL.segmentation.PyanNet'\n\n️ Imported from URL\n\nThis model was trained by @hbredin.",
"### Demo: How to use in pyannote-audio",
"### Citing pyannote-audio\n\n\n\nor"
] |
[
"TAGS\n#pytorch #pyannote #audio #voice-activity-detection #dataset-dihard #arxiv-1910.10655 #license-mit #region-us \n",
"## Example pyannote-audio Voice Activity Detection model",
"### 'URL.segmentation.PyanNet'\n\n️ Imported from URL\n\nThis model was trained by @hbredin.",
"### Demo: How to use in pyannote-audio",
"### Citing pyannote-audio\n\n\n\nor"
] |
tabular-classification
|
sklearn
|
## Wine Quality classification
### A Simple Example of Scikit-learn Pipeline
> Inspired by https://towardsdatascience.com/a-simple-example-of-pipeline-in-machine-learning-with-scikit-learn-e726ffbb6976 by Saptashwa Bhattacharyya
### How to use
```python
from huggingface_hub import hf_hub_url, cached_download
import joblib
import pandas as pd
REPO_ID = "julien-c/wine-quality"
FILENAME = "sklearn_model.joblib"
model = joblib.load(cached_download(
hf_hub_url(REPO_ID, FILENAME)
))
# model is a `sklearn.pipeline.Pipeline`
```
#### Get sample data from this repo
```python
data_file = cached_download(
hf_hub_url(REPO_ID, "winequality-red.csv")
)
winedf = pd.read_csv(data_file, sep=";")
X = winedf.drop(["quality"], axis=1)
Y = winedf["quality"]
print(X[:3])
```
| | fixed acidity | volatile acidity | citric acid | residual sugar | chlorides | free sulfur dioxide | total sulfur dioxide | density | pH | sulphates | alcohol |
|---:|----------------:|-------------------:|--------------:|-----------------:|------------:|----------------------:|-----------------------:|----------:|-----:|------------:|----------:|
| 0 | 7.4 | 0.7 | 0 | 1.9 | 0.076 | 11 | 34 | 0.9978 | 3.51 | 0.56 | 9.4 |
| 1 | 7.8 | 0.88 | 0 | 2.6 | 0.098 | 25 | 67 | 0.9968 | 3.2 | 0.68 | 9.8 |
| 2 | 7.8 | 0.76 | 0.04 | 2.3 | 0.092 | 15 | 54 | 0.997 | 3.26 | 0.65 | 9.8 |
#### Get your prediction
```python
labels = model.predict(X[:3])
# [5, 5, 5]
```
#### Eval
```python
model.score(X, Y)
# 0.6616635397123202
```
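#### Rebuilding a similar pipeline (illustrative)
The card does not say which estimators the published pipeline contains. Purely as a sketch, a comparable scikit-learn pipeline could be built and serialized like this; the `StandardScaler` + `RandomForestClassifier` combination is an assumption, not the actual model:
```python
import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Reuse the sample CSV from this repo (see "Get sample data from this repo" above)
winedf = pd.read_csv("winequality-red.csv", sep=";")
X = winedf.drop(["quality"], axis=1)
Y = winedf["quality"]

pipeline = Pipeline([
    ("scaler", StandardScaler()),                      # standardize the physico-chemical features
    ("clf", RandomForestClassifier(random_state=42)),  # assumed classifier, for illustration only
])
pipeline.fit(X, Y)
joblib.dump(pipeline, "sklearn_model.joblib")
```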
### 🍷 Disclaimer
No red wine was drunk (unfortunately) while training this model 🍷
|
{"tags": ["tabular-classification", "sklearn"], "datasets": ["wine-quality", "lvwerra/red-wine"], "widget": [{"structuredData": {"fixed_acidity": [7.4, 7.8, 10.3], "volatile_acidity": [0.7, 0.88, 0.32], "citric_acid": [0, 0, 0.45], "residual_sugar": [1.9, 2.6, 6.4], "chlorides": [0.076, 0.098, 0.073], "free_sulfur_dioxide": [11, 25, 5], "total_sulfur_dioxide": [34, 67, 13], "density": [0.9978, 0.9968, 0.9976], "pH": [3.51, 3.2, 3.23], "sulphates": [0.56, 0.68, 0.82], "alcohol": [9.4, 9.8, 12.6]}}]}
|
julien-c/wine-quality
| null |
[
"sklearn",
"joblib",
"tabular-classification",
"dataset:wine-quality",
"dataset:lvwerra/red-wine",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sklearn #joblib #tabular-classification #dataset-wine-quality #dataset-lvwerra/red-wine #has_space #region-us
|
Wine Quality classification
---------------------------
### A Simple Example of Scikit-learn Pipeline
>
> Inspired by URL by Saptashwa Bhattacharyya
>
>
>
### How to use
#### Get sample data from this repo
#### Get your prediction
#### Eval
### Disclaimer
No red wine was drunk (unfortunately) while training this model
|
[
"### A Simple Example of Scikit-learn Pipeline\n\n\n\n> \n> Inspired by URL by Saptashwa Bhattacharyya\n> \n> \n>",
"### How to use",
"#### Get sample data from this repo",
"#### Get your prediction",
"#### Eval",
"### Disclaimer\n\n\nNo red wine was drunk (unfortunately) while training this model"
] |
[
"TAGS\n#sklearn #joblib #tabular-classification #dataset-wine-quality #dataset-lvwerra/red-wine #has_space #region-us \n",
"### A Simple Example of Scikit-learn Pipeline\n\n\n\n> \n> Inspired by URL by Saptashwa Bhattacharyya\n> \n> \n>",
"### How to use",
"#### Get sample data from this repo",
"#### Get your prediction",
"#### Eval",
"### Disclaimer\n\n\nNo red wine was drunk (unfortunately) while training this model"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 16622767
## Validation Metrics
- Loss: 0.20029613375663757
- Accuracy: 0.9256
- Precision: 0.9090909090909091
- Recall: 0.9466984884645983
- AUC: 0.979257749523025
- F1: 0.9275136399064692
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/juliensimon/autonlp-imdb-demo-hf-16622767
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("juliensimon/autonlp-imdb-demo-hf-16622767", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("juliensimon/autonlp-imdb-demo-hf-16622767", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
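As an optional follow-up to the snippet above (not part of the original card), the raw logits can be mapped to a class label with the model config:
```python
# Continues the Python API example: `model` and `outputs` come from the snippet above
predicted_class_id = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class_id])
```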
|
{"language": "en", "tags": "autonlp", "datasets": ["juliensimon/autonlp-data-imdb-demo-hf"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]}
|
juliensimon/autonlp-imdb-demo-hf-16622767
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autonlp",
"en",
"dataset:juliensimon/autonlp-data-imdb-demo-hf",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #distilbert #text-classification #autonlp #en #dataset-juliensimon/autonlp-data-imdb-demo-hf #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 16622767
## Validation Metrics
- Loss: 0.20029613375663757
- Accuracy: 0.9256
- Precision: 0.9090909090909091
- Recall: 0.9466984884645983
- AUC: 0.979257749523025
- F1: 0.9275136399064692
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 16622767",
"## Validation Metrics\n\n- Loss: 0.20029613375663757\n- Accuracy: 0.9256\n- Precision: 0.9090909090909091\n- Recall: 0.9466984884645983\n- AUC: 0.979257749523025\n- F1: 0.9275136399064692",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #autonlp #en #dataset-juliensimon/autonlp-data-imdb-demo-hf #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 16622767",
"## Validation Metrics\n\n- Loss: 0.20029613375663757\n- Accuracy: 0.9256\n- Precision: 0.9090909090909091\n- Recall: 0.9466984884645983\n- AUC: 0.979257749523025\n- F1: 0.9275136399064692",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 16622775
## Validation Metrics
- Loss: 0.18653589487075806
- Accuracy: 0.9408
- Precision: 0.9537643207855974
- Recall: 0.9272076372315036
- AUC: 0.985847396174344
- F1: 0.9402985074626865
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/juliensimon/autonlp-imdb-demo-hf-16622775
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("juliensimon/autonlp-imdb-demo-hf-16622775", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("juliensimon/autonlp-imdb-demo-hf-16622775", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "en", "tags": "autonlp", "datasets": ["juliensimon/autonlp-data-imdb-demo-hf"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]}
|
juliensimon/autonlp-imdb-demo-hf-16622775
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"en",
"dataset:juliensimon/autonlp-data-imdb-demo-hf",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #roberta #text-classification #autonlp #en #dataset-juliensimon/autonlp-data-imdb-demo-hf #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 16622775
## Validation Metrics
- Loss: 0.18653589487075806
- Accuracy: 0.9408
- Precision: 0.9537643207855974
- Recall: 0.9272076372315036
- AUC: 0.985847396174344
- F1: 0.9402985074626865
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 16622775",
"## Validation Metrics\n\n- Loss: 0.18653589487075806\n- Accuracy: 0.9408\n- Precision: 0.9537643207855974\n- Recall: 0.9272076372315036\n- AUC: 0.985847396174344\n- F1: 0.9402985074626865",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #roberta #text-classification #autonlp #en #dataset-juliensimon/autonlp-data-imdb-demo-hf #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 16622775",
"## Validation Metrics\n\n- Loss: 0.18653589487075806\n- Accuracy: 0.9408\n- Precision: 0.9537643207855974\n- Recall: 0.9272076372315036\n- AUC: 0.985847396174344\n- F1: 0.9402985074626865",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
text2text-generation
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 31447312
- CO2 Emissions (in grams): 206.46626351359515
## Validation Metrics
- Loss: 1.1907752752304077
- Rouge1: 55.9215
- Rouge2: 30.7724
- RougeL: 53.185
- RougeLsum: 53.3353
- Gen Len: 15.1236
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/juliensimon/autonlp-reuters-summarization-31447312
```
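The card only shows the hosted inference endpoint. A local sketch using the standard `transformers` pipeline (not part of the original card) would look roughly like this:
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="juliensimon/autonlp-reuters-summarization-31447312",
)
article = "Replace this placeholder with a Reuters-style news article to summarize."
print(summarizer(article)[0]["summary_text"])
```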
|
{"language": "en", "tags": "autonlp", "datasets": ["juliensimon/autonlp-data-reuters-summarization"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 206.46626351359515}
|
juliensimon/autonlp-reuters-summarization-31447312
| null |
[
"transformers",
"pytorch",
"safetensors",
"pegasus",
"text2text-generation",
"autonlp",
"en",
"dataset:juliensimon/autonlp-data-reuters-summarization",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #pegasus #text2text-generation #autonlp #en #dataset-juliensimon/autonlp-data-reuters-summarization #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 31447312
- CO2 Emissions (in grams): 206.46626351359515
## Validation Metrics
- Loss: 1.1907752752304077
- Rouge1: 55.9215
- Rouge2: 30.7724
- RougeL: 53.185
- RougeLsum: 53.3353
- Gen Len: 15.1236
## Usage
You can use cURL to access this model:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 31447312\n- CO2 Emissions (in grams): 206.46626351359515",
"## Validation Metrics\n\n- Loss: 1.1907752752304077\n- Rouge1: 55.9215\n- Rouge2: 30.7724\n- RougeL: 53.185\n- RougeLsum: 53.3353\n- Gen Len: 15.1236",
"## Usage\n\nYou can use cURL to access this model:"
] |
[
"TAGS\n#transformers #pytorch #safetensors #pegasus #text2text-generation #autonlp #en #dataset-juliensimon/autonlp-data-reuters-summarization #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 31447312\n- CO2 Emissions (in grams): 206.46626351359515",
"## Validation Metrics\n\n- Loss: 1.1907752752304077\n- Rouge1: 55.9215\n- Rouge2: 30.7724\n- RougeL: 53.185\n- RougeLsum: 53.3353\n- Gen Len: 15.1236",
"## Usage\n\nYou can use cURL to access this model:"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 18753417
- CO2 Emissions (in grams): 112.75546781635975
## Validation Metrics
- Loss: 0.9065971970558167
- Accuracy: 0.6680274633512711
- Macro F1: 0.5384854358272774
- Micro F1: 0.6680274633512711
- Weighted F1: 0.6414749238882866
- Macro Precision: 0.6744495173269196
- Micro Precision: 0.6680274633512711
- Weighted Precision: 0.6634090047492259
- Macro Recall: 0.5078466493896978
- Micro Recall: 0.6680274633512711
- Weighted Recall: 0.6680274633512711
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/juliensimon/autonlp-song-lyrics-18753417
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("juliensimon/autonlp-song-lyrics-18753417", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("juliensimon/autonlp-song-lyrics-18753417", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "en", "tags": ["autonlp"], "datasets": ["juliensimon/autonlp-data-song-lyrics"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 112.75546781635975}
|
juliensimon/autonlp-song-lyrics-18753417
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:juliensimon/autonlp-data-song-lyrics",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #bert #text-classification #autonlp #en #dataset-juliensimon/autonlp-data-song-lyrics #co2_eq_emissions #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 18753417
- CO2 Emissions (in grams): 112.75546781635975
## Validation Metrics
- Loss: 0.9065971970558167
- Accuracy: 0.6680274633512711
- Macro F1: 0.5384854358272774
- Micro F1: 0.6680274633512711
- Weighted F1: 0.6414749238882866
- Macro Precision: 0.6744495173269196
- Micro Precision: 0.6680274633512711
- Weighted Precision: 0.6634090047492259
- Macro Recall: 0.5078466493896978
- Micro Recall: 0.6680274633512711
- Weighted Recall: 0.6680274633512711
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 18753417\n- CO2 Emissions (in grams): 112.75546781635975",
"## Validation Metrics\n\n- Loss: 0.9065971970558167\n- Accuracy: 0.6680274633512711\n- Macro F1: 0.5384854358272774\n- Micro F1: 0.6680274633512711\n- Weighted F1: 0.6414749238882866\n- Macro Precision: 0.6744495173269196\n- Micro Precision: 0.6680274633512711\n- Weighted Precision: 0.6634090047492259\n- Macro Recall: 0.5078466493896978\n- Micro Recall: 0.6680274633512711\n- Weighted Recall: 0.6680274633512711",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #safetensors #bert #text-classification #autonlp #en #dataset-juliensimon/autonlp-data-song-lyrics #co2_eq_emissions #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 18753417\n- CO2 Emissions (in grams): 112.75546781635975",
"## Validation Metrics\n\n- Loss: 0.9065971970558167\n- Accuracy: 0.6680274633512711\n- Macro F1: 0.5384854358272774\n- Micro F1: 0.6680274633512711\n- Weighted F1: 0.6414749238882866\n- Macro Precision: 0.6744495173269196\n- Micro Precision: 0.6680274633512711\n- Weighted Precision: 0.6634090047492259\n- Macro Recall: 0.5078466493896978\n- Micro Recall: 0.6680274633512711\n- Weighted Recall: 0.6680274633512711",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 18753423
- CO2 Emissions (in grams): 55.552987716859484
## Validation Metrics
- Loss: 0.913820743560791
- Accuracy: 0.654110224531453
- Macro F1: 0.5327761649415296
- Micro F1: 0.654110224531453
- Weighted F1: 0.6339481529454227
- Macro Precision: 0.6799297267808116
- Micro Precision: 0.654110224531453
- Weighted Precision: 0.6533459269990771
- Macro Recall: 0.49907494605289154
- Micro Recall: 0.654110224531453
- Weighted Recall: 0.654110224531453
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/juliensimon/autonlp-song-lyrics-18753423
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("juliensimon/autonlp-song-lyrics-18753423", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("juliensimon/autonlp-song-lyrics-18753423", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "en", "tags": "autonlp", "datasets": ["juliensimon/autonlp-data-song-lyrics"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 55.552987716859484}
|
juliensimon/autonlp-song-lyrics-18753423
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autonlp",
"en",
"dataset:juliensimon/autonlp-data-song-lyrics",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #distilbert #text-classification #autonlp #en #dataset-juliensimon/autonlp-data-song-lyrics #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 18753423
- CO2 Emissions (in grams): 55.552987716859484
## Validation Metrics
- Loss: 0.913820743560791
- Accuracy: 0.654110224531453
- Macro F1: 0.5327761649415296
- Micro F1: 0.654110224531453
- Weighted F1: 0.6339481529454227
- Macro Precision: 0.6799297267808116
- Micro Precision: 0.654110224531453
- Weighted Precision: 0.6533459269990771
- Macro Recall: 0.49907494605289154
- Micro Recall: 0.654110224531453
- Weighted Recall: 0.654110224531453
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 18753423\n- CO2 Emissions (in grams): 55.552987716859484",
"## Validation Metrics\n\n- Loss: 0.913820743560791\n- Accuracy: 0.654110224531453\n- Macro F1: 0.5327761649415296\n- Micro F1: 0.654110224531453\n- Weighted F1: 0.6339481529454227\n- Macro Precision: 0.6799297267808116\n- Micro Precision: 0.654110224531453\n- Weighted Precision: 0.6533459269990771\n- Macro Recall: 0.49907494605289154\n- Micro Recall: 0.654110224531453\n- Weighted Recall: 0.654110224531453",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #autonlp #en #dataset-juliensimon/autonlp-data-song-lyrics #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 18753423\n- CO2 Emissions (in grams): 55.552987716859484",
"## Validation Metrics\n\n- Loss: 0.913820743560791\n- Accuracy: 0.654110224531453\n- Macro F1: 0.5327761649415296\n- Micro F1: 0.654110224531453\n- Weighted F1: 0.6339481529454227\n- Macro Precision: 0.6799297267808116\n- Micro Precision: 0.654110224531453\n- Weighted Precision: 0.6533459269990771\n- Macro Recall: 0.49907494605289154\n- Micro Recall: 0.654110224531453\n- Weighted Recall: 0.654110224531453",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
text-classification
|
transformers
|
DistilBERT model fine-tuned on English-language product reviews.
A notebook for Amazon SageMaker is available in the 'code' subfolder.
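A minimal usage sketch (not from the original card), assuming the standard `transformers` pipeline API:
```python
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="juliensimon/reviews-sentiment-analysis",
)
print(classifier("Great battery life, and it arrived a day early."))
```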
|
{"language": ["en"], "tags": ["distilbert", "sentiment-analysis"], "datasets": ["generated_reviews_enth"]}
|
juliensimon/reviews-sentiment-analysis
| null |
[
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"sentiment-analysis",
"en",
"dataset:generated_reviews_enth",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #distilbert #text-classification #sentiment-analysis #en #dataset-generated_reviews_enth #autotrain_compatible #endpoints_compatible #has_space #region-us
|
DistilBERT model fine-tuned on English-language product reviews.
A notebook for Amazon SageMaker is available in the 'code' subfolder.
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #distilbert #text-classification #sentiment-analysis #en #dataset-generated_reviews_enth #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.1-squad-finetuned-covbiobert
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.1-squad](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1-squad) on the covid_qa_deepset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3959
## Model description
More information needed
## Intended uses & limitations
More information needed
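The generated card leaves usage unspecified. As a hedged sketch, the fine-tuned checkpoint can presumably be queried through the standard question-answering pipeline; the question and context below are placeholders:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="juliusco/biobert-base-cased-v1.1-squad-finetuned-covbiobert",
)
result = qa(
    question="Which virus causes COVID-19?",
    context="COVID-19 is caused by the SARS-CoV-2 coronavirus, first identified in late 2019.",
)
print(result["answer"], result["score"])
```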
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 486 | 0.3787 |
| 0.161 | 2.0 | 972 | 0.3959 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["covid_qa_deepset"], "model-index": [{"name": "biobert-base-cased-v1.1-squad-finetuned-covbiobert", "results": []}]}
|
juliusco/biobert-base-cased-v1.1-squad-finetuned-covbiobert
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-covid_qa_deepset #endpoints_compatible #region-us
|
biobert-base-cased-v1.1-squad-finetuned-covbiobert
==================================================
This model is a fine-tuned version of dmis-lab/biobert-base-cased-v1.1-squad on the covid\_qa\_deepset dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3959
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.13.0
* Pytorch 1.10.0+cu102
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu102\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-covid_qa_deepset #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu102\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.1-squad-finetuned-covdrobert
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.1-squad](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1-squad) on the covid_qa_deepset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3959
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 486 | 0.3787 |
| 0.161 | 2.0 | 972 | 0.3959 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["covid_qa_deepset"], "model-index": [{"name": "biobert-base-cased-v1.1-squad-finetuned-covdrobert", "results": []}]}
|
juliusco/biobert-base-cased-v1.1-squad-finetuned-covdrobert
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-covid_qa_deepset #endpoints_compatible #region-us
|
biobert-base-cased-v1.1-squad-finetuned-covdrobert
==================================================
This model is a fine-tuned version of dmis-lab/biobert-base-cased-v1.1-squad on the covid\_qa\_deepset dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3959
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.13.0
* Pytorch 1.10.0+cu102
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu102\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-covid_qa_deepset #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu102\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-covdistilbert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the covid_qa_deepset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4844
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 457 | 0.5125 |
| 0.5146 | 2.0 | 914 | 0.4843 |
| 0.2158 | 3.0 | 1371 | 0.4492 |
| 0.1639 | 4.0 | 1828 | 0.4760 |
| 0.1371 | 5.0 | 2285 | 0.4844 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["covid_qa_deepset"], "model-index": [{"name": "distilbert-base-uncased-finetuned-covdistilbert", "results": []}]}
|
juliusco/distilbert-base-uncased-finetuned-covdistilbert
| null |
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #question-answering #generated_from_trainer #dataset-covid_qa_deepset #license-apache-2.0 #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-covdistilbert
===============================================
This model is a fine-tuned version of distilbert-base-uncased on the covid\_qa\_deepset dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4844
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.13.0
* Pytorch 1.10.0+cu102
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu102\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #distilbert #question-answering #generated_from_trainer #dataset-covid_qa_deepset #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu102\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3672
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
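The hyperparameters above map roughly onto the `Trainer` API; a hedged sketch of the corresponding `TrainingArguments` (an assumption, not the original training script):

```python
# Rough mapping of the listed hyperparameters onto TrainingArguments (assumption,
# not the original script; dataset preprocessing and Trainer setup are omitted).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-squad",
    learning_rate=2e-5,              # learning_rate: 2e-05
    per_device_train_batch_size=8,   # train_batch_size: 8
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    num_train_epochs=4,              # num_epochs: 4
    lr_scheduler_type="linear",      # lr_scheduler_type: linear
    seed=42,                         # seed: 42
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer's default optimizer settings.
)
```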
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1755 | 1.0 | 11066 | 1.1177 |
| 0.9004 | 2.0 | 22132 | 1.1589 |
| 0.6592 | 3.0 | 33198 | 1.2326 |
| 0.4823 | 4.0 | 44264 | 1.3672 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]}
|
juliusco/distilbert-base-uncased-finetuned-squad
| null |
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-squad
=======================================
This model is a fine-tuned version of distilbert-base-uncased on the squad dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3672
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 4
### Training results
### Framework versions
* Transformers 4.19.4
* Pytorch 1.11.0+cu113
* Datasets 2.2.2
* Tokenizers 0.12.1
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.19.4\n* Pytorch 1.11.0+cu113\n* Datasets 2.2.2\n* Tokenizers 0.12.1"
] |
[
"TAGS\n#transformers #pytorch #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.19.4\n* Pytorch 1.11.0+cu113\n* Datasets 2.2.2\n* Tokenizers 0.12.1"
] |
fill-mask
|
transformers
|
# https://github.com/JunnYu/ChineseBert_pytorch
# ChineseBert_pytorch
本项目主要自定义了tokenization_chinesebert_fast.py文件中的ChineseBertTokenizerFast代码。从而可以从huggingface.co调用。
```python
pretrained_tokenizer_name = "junnyu/ChineseBERT-base"
tokenizer = ChineseBertTokenizerFast.from_pretrained(pretrained_tokenizer_name)
```
# Paper
**[ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information](https://arxiv.org/pdf/2106.16038.pdf)**
*Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu and Jiwei Li*
# Install
```bash
pip install chinesebert
# or
pip install git+https://github.com/JunnYu/ChineseBert_pytorch.git
```
# Usage
```python
import torch
from chinesebert import ChineseBertForMaskedLM, ChineseBertTokenizerFast, ChineseBertConfig
pretrained_model_name = "junnyu/ChineseBERT-base"
tokenizer = ChineseBertTokenizerFast.from_pretrained(pretrained_model_name)
chinese_bert = ChineseBertForMaskedLM.from_pretrained(pretrained_model_name)
text = "北京是[MASK]国的首都。"
inputs = tokenizer(text, return_tensors="pt")
print(inputs)
maskpos = 4
with torch.no_grad():
o = chinese_bert(**inputs)
value, index = o.logits.softmax(-1)[0, maskpos].topk(10)
pred_tokens = tokenizer.convert_ids_to_tokens(index.tolist())
pred_values = value.tolist()
outputs = []
for t, p in zip(pred_tokens, pred_values):
outputs.append(f"{t}|{round(p,4)}")
print(outputs)
# base ['中|0.711', '我|0.2488', '祖|0.016', '法|0.0057', '美|0.0048', '全|0.0042', '韩|0.0015', '英|0.0011', '两|0.0008', '王|0.0006']
# large ['中|0.8341', '我|0.1479', '祖|0.0157', '全|0.0007', '国|0.0005', '帝|0.0001', '该|0.0001', '法|0.0001', '一|0.0001', '咱|0.0001']
```
# Reference
https://github.com/ShannonAI/ChineseBert
|
{"language": "zh", "tags": ["glycebert"], "inference": false}
|
junnyu/ChineseBERT-base
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"glycebert",
"zh",
"arxiv:2106.16038",
"autotrain_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2106.16038"
] |
[
"zh"
] |
TAGS
#transformers #pytorch #bert #fill-mask #glycebert #zh #arxiv-2106.16038 #autotrain_compatible #region-us
|
# URL
# ChineseBert_pytorch
本项目主要自定义了tokenization_chinesebert_fast.py文件中的ChineseBertTokenizerFast代码。从而可以从huggingface.co调用。
# Paper
ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information
*Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu and Jiwei Li*
# Install
# Usage
# Reference
URL
|
[
"# URL",
"# ChineseBert_pytorch\n本项目主要自定义了tokenization_chinesebert_fast.py文件中的ChineseBertTokenizerFast代码。从而可以从huggingface.co调用。",
"# Paper\nChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information \n*Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu and Jiwei Li*",
"# Install",
"# Usage",
"# Reference\nURL"
] |
[
"TAGS\n#transformers #pytorch #bert #fill-mask #glycebert #zh #arxiv-2106.16038 #autotrain_compatible #region-us \n",
"# URL",
"# ChineseBert_pytorch\n本项目主要自定义了tokenization_chinesebert_fast.py文件中的ChineseBertTokenizerFast代码。从而可以从huggingface.co调用。",
"# Paper\nChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information \n*Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu and Jiwei Li*",
"# Install",
"# Usage",
"# Reference\nURL"
] |
fill-mask
|
transformers
|
# https://github.com/JunnYu/ChineseBert_pytorch
# ChineseBert_pytorch
本项目主要自定义了tokenization_chinesebert_fast.py文件中的ChineseBertTokenizerFast代码。从而可以从huggingface.co调用。
```python
pretrained_tokenizer_name = "junnyu/ChineseBERT-large"
tokenizer = ChineseBertTokenizerFast.from_pretrained(pretrained_tokenizer_name)
```
# Paper
**[ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information](https://arxiv.org/pdf/2106.16038.pdf)**
*Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu and Jiwei Li*
# Install
```bash
pip install chinesebert
# or
pip install git+https://github.com/JunnYu/ChineseBert_pytorch.git
```
# Usage
```python
import torch
from chinesebert import ChineseBertForMaskedLM, ChineseBertTokenizerFast, ChineseBertConfig
pretrained_model_name = "junnyu/ChineseBERT-large"
tokenizer = ChineseBertTokenizerFast.from_pretrained(pretrained_model_name)
chinese_bert = ChineseBertForMaskedLM.from_pretrained(pretrained_model_name)
text = "北京是[MASK]国的首都。"
inputs = tokenizer(text, return_tensors="pt")
print(inputs)
maskpos = 4
with torch.no_grad():
o = chinese_bert(**inputs)
value, index = o.logits.softmax(-1)[0, maskpos].topk(10)
pred_tokens = tokenizer.convert_ids_to_tokens(index.tolist())
pred_values = value.tolist()
outputs = []
for t, p in zip(pred_tokens, pred_values):
outputs.append(f"{t}|{round(p,4)}")
print(outputs)
# base ['中|0.711', '我|0.2488', '祖|0.016', '法|0.0057', '美|0.0048', '全|0.0042', '韩|0.0015', '英|0.0011', '两|0.0008', '王|0.0006']
# large ['中|0.8341', '我|0.1479', '祖|0.0157', '全|0.0007', '国|0.0005', '帝|0.0001', '该|0.0001', '法|0.0001', '一|0.0001', '咱|0.0001']
```
# Reference
https://github.com/ShannonAI/ChineseBert
|
{"language": "zh", "tags": ["glycebert"], "inference": false}
|
junnyu/ChineseBERT-large
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"glycebert",
"zh",
"arxiv:2106.16038",
"autotrain_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2106.16038"
] |
[
"zh"
] |
TAGS
#transformers #pytorch #bert #fill-mask #glycebert #zh #arxiv-2106.16038 #autotrain_compatible #region-us
|
# URL
# ChineseBert_pytorch
本项目主要自定义了tokenization_chinesebert_fast.py文件中的ChineseBertTokenizerFast代码。从而可以从huggingface.co调用。
# Paper
ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information
*Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu and Jiwei Li*
# Install
# Usage
# Reference
URL
|
[
"# URL",
"# ChineseBert_pytorch\n本项目主要自定义了tokenization_chinesebert_fast.py文件中的ChineseBertTokenizerFast代码。从而可以从huggingface.co调用。",
"# Paper\nChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information \n*Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu and Jiwei Li*",
"# Install",
"# Usage",
"# Reference\nURL"
] |
[
"TAGS\n#transformers #pytorch #bert #fill-mask #glycebert #zh #arxiv-2106.16038 #autotrain_compatible #region-us \n",
"# URL",
"# ChineseBert_pytorch\n本项目主要自定义了tokenization_chinesebert_fast.py文件中的ChineseBertTokenizerFast代码。从而可以从huggingface.co调用。",
"# Paper\nChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information \n*Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu and Jiwei Li*",
"# Install",
"# Usage",
"# Reference\nURL"
] |
fill-mask
|
transformers
|
https://github.com/alibaba-research/ChineseBLUE
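The card itself only links to the ChineseBLUE repository; absent official usage notes, a minimal hedged sketch of loading the checkpoint for masked-token prediction (the input sentence is illustrative, not from the card):

```python
# Hedged sketch: generic fill-mask usage of this checkpoint; the example input is illustrative.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="junnyu/bert_chinese_mc_base")
print(fill_mask("今天天气很[MASK]。"))
```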
|
{}
|
junnyu/bert_chinese_mc_base
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
URL
|
[] |
[
"TAGS\n#transformers #pytorch #jax #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null |
transformers
|
https://github.com/PaddlePaddle/Research/tree/master/KG/eHealth
|
{}
|
junnyu/eHealth_pytorch
| null |
[
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #endpoints_compatible #region-us
|
URL
|
[] |
[
"TAGS\n#transformers #pytorch #bert #endpoints_compatible #region-us \n"
] |
null |
transformers
|
# 一、 个人在openwebtext数据集上训练得到的electra-small模型
# 二、 复现结果(dev dataset)
|Model|CoLA|SST|MRPC|STS|QQP|MNLI|QNLI|RTE|Avg.|
|---|---|---|---|---|---|---|---|---|---|
|Metrics|MCC|Acc|Acc|Spearman|Acc|Acc|Acc|Acc||
|ELECTRA-Small-OWT(original)|56.8|88.3|87.4|86.8|88.3|78.9|87.9|68.5|80.36|
|**ELECTRA-Small-OWT (this)**| 55.82 |89.67|87.0|86.96|89.28|80.08|87.50|66.07|80.30|
# 三、 训练细节
- 数据集 openwebtext
- 训练batch_size 256
- 学习率lr 5e-4
- 最大句子长度max_seqlen 128
- 训练total step 62.5W
- GPU RTX3090
- 训练时间总共耗费2.5天
# 四、 使用
```python
import torch
from transformers.models.electra import ElectraModel, ElectraTokenizer
tokenizer = ElectraTokenizer.from_pretrained("junnyu/electra_small_discriminator")
model = ElectraModel.from_pretrained("junnyu/electra_small_discriminator")
inputs = tokenizer("Beijing is the capital of China.", return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
print(outputs[0].shape)
```
|
{"language": "en", "license": "mit", "tags": ["pytorch", "electra"], "datasets": ["openwebtext"], "thumbnail": "https://github.com/junnyu"}
|
junnyu/electra_small_discriminator
| null |
[
"transformers",
"pytorch",
"electra",
"pretraining",
"en",
"dataset:openwebtext",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #electra #pretraining #en #dataset-openwebtext #license-mit #endpoints_compatible #region-us
|
一、 个人在openwebtext数据集上训练得到的electra-small模型
=========================================
二、 复现结果(dev dataset)
====================
三、 训练细节
=======
* 数据集 openwebtext
* 训练batch\_size 256
* 学习率lr 5e-4
* 最大句子长度max\_seqlen 128
* 训练total step 62.5W
* GPU RTX3090
* 训练时间总共耗费2.5天
四、 使用
=====
|
[] |
[
"TAGS\n#transformers #pytorch #electra #pretraining #en #dataset-openwebtext #license-mit #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
# 一、 个人在openwebtext数据集上训练得到的electra-small模型
# 二、 复现结果(dev dataset)
|Model|CoLA|SST|MRPC|STS|QQP|MNLI|QNLI|RTE|Avg.|
|---|---|---|---|---|---|---|---|---|---|
|ELECTRA-Small-OWT(original)|56.8|88.3|87.4|86.8|88.3|78.9|87.9|68.5|80.36|
|**ELECTRA-Small-OWT (this)**| 55.82 |89.67|87.0|86.96|89.28|80.08|87.50|66.07|80.30|
# 三、 训练细节
- 数据集 openwebtext
- 训练batch_size 256
- 学习率lr 5e-4
- 最大句子长度max_seqlen 128
- 训练total step 62.5W
- GPU RTX3090
- 训练时间总共耗费2.5天
# 四、 使用
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="junnyu/electra_small_generator",
tokenizer="junnyu/electra_small_generator"
)
print(
fill_mask("HuggingFace is creating a [MASK] that the community uses to solve NLP tasks.")
)
```
|
{"language": "en", "license": "mit", "tags": ["pytorch", "electra", "masked-lm"], "datasets": ["openwebtext"], "thumbnail": "https://github.com/junnyu"}
|
junnyu/electra_small_generator
| null |
[
"transformers",
"pytorch",
"electra",
"fill-mask",
"masked-lm",
"en",
"dataset:openwebtext",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #electra #fill-mask #masked-lm #en #dataset-openwebtext #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
一、 个人在openwebtext数据集上训练得到的electra-small模型
=========================================
二、 复现结果(dev dataset)
====================
三、 训练细节
=======
* 数据集 openwebtext
* 训练batch\_size 256
* 学习率lr 5e-4
* 最大句子长度max\_seqlen 128
* 训练total step 62.5W
* GPU RTX3090
* 训练时间总共耗费2.5天
四、 使用
=====
|
[] |
[
"TAGS\n#transformers #pytorch #electra #fill-mask #masked-lm #en #dataset-openwebtext #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
## 介绍
Pretrained model on 13G Chinese corpus(clue corpus small). Masked language modeling(MLM) and sentence order prediction(SOP) are used as training task.
在13g的clue corpus small数据集上进行的预训练,使用了`Whole Mask LM` 和 `SOP` 任务
训练逻辑参考了这里。https://github.com/PaddlePaddle/PaddleNLP/tree/develop/examples/language_model/ernie-1.0
## 训练细节:
- paddlepaddle+paddlenlp
- V100 x 4
- batch size 256
- max_seq_len 512
- max_lr 0.0001
- min_lr 0.00001
- weight_decay 0.01
- grad_clip 1.0
- 总共训练的句子```128*30w + 256*15w + 256*14.5w + 256*46.5w + 256*17w = 27648w```
- 约等于512 batch size, 100w步条件下的54%
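A quick arithmetic check of the two figures above (not part of the original card):

```python
# Sanity check of the sentence-count arithmetic above; numbers are in units of 10k sentences ("w").
total_w = 128 * 30 + 256 * 15 + 256 * 14.5 + 256 * 46.5 + 256 * 17
print(total_w)                 # 27648.0 -> 27648w sentences in total
print(total_w / (512 * 100))   # 0.54    -> ~54% of a 512-batch-size, 100w-step schedule
```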
最终loss:
```python
[2022-02-05 16:05:59,067] [ INFO] - global step 170100, loss: 2.651634932, lm_loss: 2.603405, sop_loss: 0.048229, speed: 1.06 steps/s, ips: 271.68 seqs/s, learning rate: 6.66465e-05, loss_scaling: 137438.96875, num_good_steps: 356, num_bad_steps: 0
[2022-02-05 16:07:28,227] [ INFO] - global step 170200, loss: 2.822231531, lm_loss: 2.662831, sop_loss: 0.159401, speed: 1.12 steps/s, ips: 287.13 seqs/s, learning rate: 6.66263e-05, loss_scaling: 137438.96875, num_good_steps: 59, num_bad_steps: 0
[2022-02-05 16:08:57,346] [ INFO] - global step 170300, loss: 2.710968971, lm_loss: 2.673646, sop_loss: 0.037323, speed: 1.12 steps/s, ips: 287.26 seqs/s, learning rate: 6.66061e-05, loss_scaling: 137438.96875, num_good_steps: 159, num_bad_steps: 0
[2022-02-05 16:10:26,698] [ INFO] - global step 170400, loss: 2.867662907, lm_loss: 2.619032, sop_loss: 0.248631, speed: 1.12 steps/s, ips: 286.51 seqs/s, learning rate: 6.65859e-05, loss_scaling: 137438.96875, num_good_steps: 259, num_bad_steps: 0
[2022-02-05 16:11:55,714] [ INFO] - global step 170500, loss: 3.158756495, lm_loss: 2.953678, sop_loss: 0.205079, speed: 1.12 steps/s, ips: 287.59 seqs/s, learning rate: 6.65657e-05, loss_scaling: 137438.96875, num_good_steps: 359, num_bad_steps: 0
[2022-02-05 16:13:24,869] [ INFO] - global step 170600, loss: 2.860815048, lm_loss: 2.754750, sop_loss: 0.106064, speed: 1.12 steps/s, ips: 287.14 seqs/s, learning rate: 6.65455e-05, loss_scaling: 137438.96875, num_good_steps: 33, num_bad_steps: 0
```
### tf版本
https://github.com/ZhuiyiTechnology/roformer
### pytorch版本+tf2.0版本
https://github.com/JunnYu/RoFormer_pytorch
## pytorch使用
```python
import torch
from transformers import RoFormerForMaskedLM, BertTokenizer
text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = BertTokenizer.from_pretrained("junnyu/roformer_base_wwm_cluecorpussmall")
pt_model = RoFormerForMaskedLM.from_pretrained("junnyu/roformer_base_wwm_cluecorpussmall")
pt_inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
pt_outputs = pt_model(**pt_inputs).logits[0]
pt_outputs_sentence = "pytorch: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(pt_outputs[i].topk(k=5)[1])
pt_outputs_sentence += "[" + "||".join(tokens) + "]"
else:
pt_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(pt_outputs_sentence)
# pytorch: 今天[天||人||气||阳||雨]很好,我[想||就||要||也||还]去公园玩。
```
## 引用
Bibtex:
```tex
@misc{su2021roformer,
title={RoFormer: Enhanced Transformer with Rotary Position Embedding},
author={Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu},
year={2021},
eprint={2104.09864},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "zh", "tags": ["roformer", "pytorch", "tf2.0", "paddlepaddle"], "widget": [{"text": "\u4eca\u5929[MASK]\u5f88\u597d\uff0c\u6211\u60f3\u53bb\u516c\u56ed\u73a9\uff01"}]}
|
junnyu/roformer_base_wwm_cluecorpussmall
| null |
[
"transformers",
"pytorch",
"roformer",
"fill-mask",
"tf2.0",
"paddlepaddle",
"zh",
"arxiv:2104.09864",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.09864"
] |
[
"zh"
] |
TAGS
#transformers #pytorch #roformer #fill-mask #tf2.0 #paddlepaddle #zh #arxiv-2104.09864 #autotrain_compatible #endpoints_compatible #region-us
|
## 介绍
Pretrained model on 13G Chinese corpus(clue corpus small). Masked language modeling(MLM) and sentence order prediction(SOP) are used as training task.
在13g的clue corpus small数据集上进行的预训练,使用了'Whole Mask LM' 和 'SOP' 任务
训练逻辑参考了这里。URL
## 训练细节:
- paddlepaddle+paddlenlp
- V100 x 4
- batch size 256
- max_seq_len 512
- max_lr 0.0001
- min_lr 0.00001
- weight_decay 0.01
- grad_clip 1.0
- 总共训练的句子
- 约等于512 batch size, 100w步条件下的54%
最终loss:
### tf版本
URL
### pytorch版本+tf2.0版本
URL
## pytorch使用
## 引用
Bibtex:
|
[
"## 介绍\nPretrained model on 13G Chinese corpus(clue corpus small). Masked language modeling(MLM) and sentence order prediction(SOP) are used as training task.\n在13g的clue corpus small数据集上进行的预训练,使用了'Whole Mask LM' 和 'SOP' 任务\n\n训练逻辑参考了这里。URL",
"## 训练细节:\n- paddlepaddle+paddlenlp\n- V100 x 4\n- batch size 256\n- max_seq_len 512 \n- max_lr 0.0001\n- min_lr 0.00001\n- weight_decay 0.01\n- grad_clip 1.0\n- 总共训练的句子\n- 约等于512 batch size, 100w步条件下的54%\n\n最终loss:",
"### tf版本 \nURL",
"### pytorch版本+tf2.0版本\nURL",
"## pytorch使用",
"## 引用\n\nBibtex:"
] |
[
"TAGS\n#transformers #pytorch #roformer #fill-mask #tf2.0 #paddlepaddle #zh #arxiv-2104.09864 #autotrain_compatible #endpoints_compatible #region-us \n",
"## 介绍\nPretrained model on 13G Chinese corpus(clue corpus small). Masked language modeling(MLM) and sentence order prediction(SOP) are used as training task.\n在13g的clue corpus small数据集上进行的预训练,使用了'Whole Mask LM' 和 'SOP' 任务\n\n训练逻辑参考了这里。URL",
"## 训练细节:\n- paddlepaddle+paddlenlp\n- V100 x 4\n- batch size 256\n- max_seq_len 512 \n- max_lr 0.0001\n- min_lr 0.00001\n- weight_decay 0.01\n- grad_clip 1.0\n- 总共训练的句子\n- 约等于512 batch size, 100w步条件下的54%\n\n最终loss:",
"### tf版本 \nURL",
"### pytorch版本+tf2.0版本\nURL",
"## pytorch使用",
"## 引用\n\nBibtex:"
] |
null |
paddlenlp
|
## 介绍
### tf版本
https://github.com/ZhuiyiTechnology/roformer
### pytorch版本+tf2.0版本
https://github.com/JunnYu/RoFormer_pytorch
## pytorch使用
```python
import torch
from transformers import RoFormerForMaskedLM, RoFormerTokenizer
text = "今天[MASK]很好,我想去公园玩!"
tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_base")
pt_model = RoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_base")
pt_inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
pt_outputs = pt_model(**pt_inputs).logits[0]
pt_outputs_sentence = "pytorch: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(pt_outputs[i].topk(k=5)[1])
pt_outputs_sentence += "[" + "||".join(tokens) + "]"
else:
pt_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(pt_outputs_sentence)
# pytorch: 今天[天气||天||阳光||太阳||空气]很好,我想去公园玩!
```
## tensorflow2.0使用
```python
import tensorflow as tf
from transformers import RoFormerTokenizer, TFRoFormerForMaskedLM
text = "今天[MASK]很好,我想去公园玩!"
tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_base")
tf_model = TFRoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_base")
tf_inputs = tokenizer(text, return_tensors="tf")
tf_outputs = tf_model(**tf_inputs, training=False).logits[0]
tf_outputs_sentence = "tf2.0: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(
tf.math.top_k(tf_outputs[i], k=5)[1])
tf_outputs_sentence += "[" + "||".join(tokens) + "]"
else:
tf_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(tf_outputs_sentence)
# tf2.0: 今天[天气||天||阳光||太阳||空气]很好,我想去公园玩!
```
## 引用
Bibtex:
```tex
@misc{su2021roformer,
title={RoFormer: Enhanced Transformer with Rotary Position Embedding},
author={Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu},
year={2021},
eprint={2104.09864},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "zh", "tags": ["roformer", "pytorch", "tf2.0"], "widget": [{"text": "\u4eca\u5929[MASK]\u5f88\u597d\uff0c\u6211\u60f3\u53bb\u516c\u56ed\u73a9\uff01"}]}
|
junnyu/roformer_chinese_base
| null |
[
"paddlenlp",
"pytorch",
"tf",
"jax",
"paddlepaddle",
"roformer",
"tf2.0",
"zh",
"arxiv:2104.09864",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.09864"
] |
[
"zh"
] |
TAGS
#paddlenlp #pytorch #tf #jax #paddlepaddle #roformer #tf2.0 #zh #arxiv-2104.09864 #has_space #region-us
|
## 介绍
### tf版本
URL
### pytorch版本+tf2.0版本
URL
## pytorch使用
## tensorflow2.0使用
## 引用
Bibtex:
|
[
"## 介绍",
"### tf版本 \nURL",
"### pytorch版本+tf2.0版本\nURL",
"## pytorch使用",
"## tensorflow2.0使用",
"## 引用\n\nBibtex:"
] |
[
"TAGS\n#paddlenlp #pytorch #tf #jax #paddlepaddle #roformer #tf2.0 #zh #arxiv-2104.09864 #has_space #region-us \n",
"## 介绍",
"### tf版本 \nURL",
"### pytorch版本+tf2.0版本\nURL",
"## pytorch使用",
"## tensorflow2.0使用",
"## 引用\n\nBibtex:"
] |
null |
paddlenlp
|
## 介绍
### tf版本
https://github.com/ZhuiyiTechnology/roformer
### pytorch版本+tf2.0版本
https://github.com/JunnYu/RoFormer_pytorch
## pytorch使用
```python
import torch
from transformers import RoFormerForMaskedLM, RoFormerTokenizer
text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_char_base")
pt_model = RoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_char_base")
pt_inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
pt_outputs = pt_model(**pt_inputs).logits[0]
pt_outputs_sentence = "pytorch: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(pt_outputs[i].topk(k=5)[1])
pt_outputs_sentence += "[" + "||".join(tokens) + "]"
else:
pt_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(pt_outputs_sentence)
# pytorch: 今天[天||气||都||风||人]很好,我[想||要||就||也||还]去公园玩。
```
## tensorflow2.0使用
```python
import tensorflow as tf
from transformers import RoFormerTokenizer, TFRoFormerForMaskedLM
text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_char_base")
tf_model = TFRoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_char_base")
tf_inputs = tokenizer(text, return_tensors="tf")
tf_outputs = tf_model(**tf_inputs, training=False).logits[0]
tf_outputs_sentence = "tf2.0: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(
tf.math.top_k(tf_outputs[i], k=5)[1])
tf_outputs_sentence += "[" + "||".join(tokens) + "]"
else:
tf_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(tf_outputs_sentence)
# tf2.0: 今天[天||气||都||风||人]很好,我[想||要||就||也||还]去公园玩。
```
## 引用
Bibtex:
```tex
@misc{su2021roformer,
title={RoFormer: Enhanced Transformer with Rotary Position Embedding},
author={Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu},
year={2021},
eprint={2104.09864},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "zh", "tags": ["roformer", "pytorch", "tf2.0"], "widget": [{"text": "\u4eca\u5929[MASK]\u5f88\u597d\uff0c\u6211\u60f3\u53bb\u516c\u56ed\u73a9\uff01"}]}
|
junnyu/roformer_chinese_char_base
| null |
[
"paddlenlp",
"pytorch",
"tf",
"jax",
"paddlepaddle",
"roformer",
"tf2.0",
"zh",
"arxiv:2104.09864",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.09864"
] |
[
"zh"
] |
TAGS
#paddlenlp #pytorch #tf #jax #paddlepaddle #roformer #tf2.0 #zh #arxiv-2104.09864 #has_space #region-us
|
## 介绍
### tf版本
URL
### pytorch版本+tf2.0版本
URL
## pytorch使用
## tensorflow2.0使用
## 引用
Bibtex:
|
[
"## 介绍",
"### tf版本 \nURL",
"### pytorch版本+tf2.0版本\nURL",
"## pytorch使用",
"## tensorflow2.0使用",
"## 引用\n\nBibtex:"
] |
[
"TAGS\n#paddlenlp #pytorch #tf #jax #paddlepaddle #roformer #tf2.0 #zh #arxiv-2104.09864 #has_space #region-us \n",
"## 介绍",
"### tf版本 \nURL",
"### pytorch版本+tf2.0版本\nURL",
"## pytorch使用",
"## tensorflow2.0使用",
"## 引用\n\nBibtex:"
] |
fill-mask
|
transformers
|
## 介绍
### tf版本
https://github.com/ZhuiyiTechnology/roformer
### pytorch版本+tf2.0版本
https://github.com/JunnYu/RoFormer_pytorch
## pytorch使用
```python
import torch
from transformers import RoFormerForMaskedLM, RoFormerTokenizer
text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_char_small")
pt_model = RoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_char_small")
pt_inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
pt_outputs = pt_model(**pt_inputs).logits[0]
pt_outputs_sentence = "pytorch: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(pt_outputs[i].topk(k=5)[1])
pt_outputs_sentence += "[" + "||".join(tokens) + "]"
else:
pt_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(pt_outputs_sentence)
# pytorch: 今天[也||都||又||还||我]很好,我[就||想||去||也||又]去公园玩。
```
## tensorflow2.0使用
```python
import tensorflow as tf
from transformers import RoFormerTokenizer, TFRoFormerForMaskedLM
text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_char_small")
tf_model = TFRoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_char_small")
tf_inputs = tokenizer(text, return_tensors="tf")
tf_outputs = tf_model(**tf_inputs, training=False).logits[0]
tf_outputs_sentence = "tf2.0: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(
tf.math.top_k(tf_outputs[i], k=5)[1])
tf_outputs_sentence += "[" + "||".join(tokens) + "]"
else:
tf_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(tf_outputs_sentence)
# tf2.0: 今天[也||都||又||还||我]很好,我[就||想||去||也||又]去公园玩。
```
## 引用
Bibtex:
```tex
@misc{su2021roformer,
title={RoFormer: Enhanced Transformer with Rotary Position Embedding},
author={Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu},
year={2021},
eprint={2104.09864},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "zh", "tags": ["roformer", "pytorch", "tf2.0"], "widget": [{"text": "\u4eca\u5929[MASK]\u5f88\u597d\uff0c\u6211\u60f3\u53bb\u516c\u56ed\u73a9\uff01"}]}
|
junnyu/roformer_chinese_char_small
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"roformer",
"fill-mask",
"tf2.0",
"zh",
"arxiv:2104.09864",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.09864"
] |
[
"zh"
] |
TAGS
#transformers #pytorch #tf #jax #roformer #fill-mask #tf2.0 #zh #arxiv-2104.09864 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
## 介绍
### tf版本
URL
### pytorch版本+tf2.0版本
URL
## pytorch使用
## tensorflow2.0使用
## 引用
Bibtex:
|
[
"## 介绍",
"### tf版本 \nURL",
"### pytorch版本+tf2.0版本\nURL",
"## pytorch使用",
"## tensorflow2.0使用",
"## 引用\n\nBibtex:"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #roformer #fill-mask #tf2.0 #zh #arxiv-2104.09864 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"## 介绍",
"### tf版本 \nURL",
"### pytorch版本+tf2.0版本\nURL",
"## pytorch使用",
"## tensorflow2.0使用",
"## 引用\n\nBibtex:"
] |
text-generation
|
transformers
|
# 安装
- pip install roformer==0.4.3
# 使用
```python
import torch
import numpy as np
from roformer import RoFormerForCausalLM, RoFormerConfig
from transformers import BertTokenizer
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
pretrained_model = "junnyu/roformer_chinese_sim_char_base"
tokenizer = BertTokenizer.from_pretrained(pretrained_model)
config = RoFormerConfig.from_pretrained(pretrained_model)
config.is_decoder = True
config.eos_token_id = tokenizer.sep_token_id
config.pooler_activation = "linear"
model = RoFormerForCausalLM.from_pretrained(pretrained_model, config=config)
model.to(device)
model.eval()
def gen_synonyms(text, n=100, k=20):
''''含义: 产生sent的n个相似句,然后返回最相似的k个。
做法:用seq2seq生成,并用encoder算相似度并排序。
'''
# 寻找所有相似的句子
r = []
inputs1 = tokenizer(text, return_tensors="pt")
for _ in range(n):
inputs1.to(device)
output = tokenizer.batch_decode(model.generate(**inputs1, top_p=0.95, do_sample=True, max_length=128), skip_special_tokens=True)[0].replace(" ","").replace(text, "") # 去除空格,去除原始text文本。
r.append(output)
# 对相似的句子进行排序
r = [i for i in set(r) if i != text and len(i) > 0]
r = [text] + r
inputs2 = tokenizer(r, padding=True, return_tensors="pt")
with torch.no_grad():
inputs2.to(device)
outputs = model(**inputs2)
Z = outputs.pooler_output.cpu().numpy()
Z /= (Z**2).sum(axis=1, keepdims=True)**0.5
argsort = np.dot(Z[1:], -Z[0]).argsort()
return [r[i + 1] for i in argsort[:k]]
out = gen_synonyms("广州和深圳哪个好?")
print(out)
# ['深圳和广州哪个好?',
# '广州和深圳哪个好',
# '深圳和广州哪个好',
# '深圳和广州哪个比较好。',
# '深圳和广州哪个最好?',
# '深圳和广州哪个比较好',
# '广州和深圳那个比较好',
# '深圳和广州哪个更好?',
# '深圳与广州哪个好',
# '深圳和广州,哪个比较好',
# '广州与深圳比较哪个好',
# '深圳和广州哪里比较好',
# '深圳还是广州比较好?',
# '广州和深圳哪个地方好一些?',
# '广州好还是深圳好?',
# '广州好还是深圳好呢?',
# '广州与深圳哪个地方好点?',
# '深圳好还是广州好',
# '广州好还是深圳好',
# '广州和深圳哪个城市好?']
```
|
{"language": "zh", "tags": ["roformer", "pytorch", "tf2.0"], "inference": false}
|
junnyu/roformer_chinese_sim_char_base
| null |
[
"transformers",
"pytorch",
"roformer",
"text-generation",
"tf2.0",
"zh",
"autotrain_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #roformer #text-generation #tf2.0 #zh #autotrain_compatible #region-us
|
# 安装
- pip install roformer==0.4.3
# 使用
|
[
"# 安装\n- pip install roformer==0.4.3",
"# 使用"
] |
[
"TAGS\n#transformers #pytorch #roformer #text-generation #tf2.0 #zh #autotrain_compatible #region-us \n",
"# 安装\n- pip install roformer==0.4.3",
"# 使用"
] |
text-generation
|
transformers
|
# 安装
- pip install roformer==0.4.3
# 使用
```python
import torch
import numpy as np
from roformer import RoFormerForCausalLM, RoFormerConfig
from transformers import BertTokenizer
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
pretrained_model = "junnyu/roformer_chinese_sim_char_base"
tokenizer = BertTokenizer.from_pretrained(pretrained_model)
config = RoFormerConfig.from_pretrained(pretrained_model)
config.is_decoder = True
config.eos_token_id = tokenizer.sep_token_id
config.pooler_activation = "linear"
model = RoFormerForCausalLM.from_pretrained(pretrained_model, config=config)
model.to(device)
model.eval()
def gen_synonyms(text, n=100, k=20):
''''含义: 产生sent的n个相似句,然后返回最相似的k个。
做法:用seq2seq生成,并用encoder算相似度并排序。
'''
# 寻找所有相似的句子
r = []
inputs1 = tokenizer(text, return_tensors="pt")
for _ in range(n):
inputs1.to(device)
output = tokenizer.batch_decode(model.generate(**inputs1, top_p=0.95, do_sample=True, max_length=128), skip_special_tokens=True)[0].replace(" ","").replace(text, "") # 去除空格,去除原始text文本。
r.append(output)
# 对相似的句子进行排序
r = [i for i in set(r) if i != text and len(i) > 0]
r = [text] + r
inputs2 = tokenizer(r, padding=True, return_tensors="pt")
with torch.no_grad():
inputs2.to(device)
outputs = model(**inputs2)
Z = outputs.pooler_output.cpu().numpy()
Z /= (Z**2).sum(axis=1, keepdims=True)**0.5
argsort = np.dot(Z[1:], -Z[0]).argsort()
return [r[i + 1] for i in argsort[:k]]
out = gen_synonyms("广州和深圳哪个好?")
print(out)
# ['深圳和广州哪个好?',
# '广州和深圳哪个好',
# '深圳和广州哪个好',
# '深圳和广州哪个比较好。',
# '深圳和广州哪个最好?',
# '深圳和广州哪个比较好',
# '广州和深圳那个比较好',
# '深圳和广州哪个更好?',
# '深圳与广州哪个好',
# '深圳和广州,哪个比较好',
# '广州与深圳比较哪个好',
# '深圳和广州哪里比较好',
# '深圳还是广州比较好?',
# '广州和深圳哪个地方好一些?',
# '广州好还是深圳好?',
# '广州好还是深圳好呢?',
# '广州与深圳哪个地方好点?',
# '深圳好还是广州好',
# '广州好还是深圳好',
# '广州和深圳哪个城市好?']
```
|
{"language": "zh", "tags": ["roformer", "pytorch", "tf2.0"], "inference": false}
|
junnyu/roformer_chinese_sim_char_ft_base
| null |
[
"transformers",
"pytorch",
"roformer",
"text-generation",
"tf2.0",
"zh",
"autotrain_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #roformer #text-generation #tf2.0 #zh #autotrain_compatible #has_space #region-us
|
# 安装
- pip install roformer==0.4.3
# 使用
|
[
"# 安装\n- pip install roformer==0.4.3",
"# 使用"
] |
[
"TAGS\n#transformers #pytorch #roformer #text-generation #tf2.0 #zh #autotrain_compatible #has_space #region-us \n",
"# 安装\n- pip install roformer==0.4.3",
"# 使用"
] |
text-generation
|
transformers
|
# 安装
- pip install roformer==0.4.3
# 使用
```python
import torch
import numpy as np
from roformer import RoFormerForCausalLM, RoFormerConfig
from transformers import BertTokenizer
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
pretrained_model = "junnyu/roformer_chinese_sim_char_base"
tokenizer = BertTokenizer.from_pretrained(pretrained_model)
config = RoFormerConfig.from_pretrained(pretrained_model)
config.is_decoder = True
config.eos_token_id = tokenizer.sep_token_id
config.pooler_activation = "linear"
model = RoFormerForCausalLM.from_pretrained(pretrained_model, config=config)
model.to(device)
model.eval()
def gen_synonyms(text, n=100, k=20):
''''含义: 产生sent的n个相似句,然后返回最相似的k个。
做法:用seq2seq生成,并用encoder算相似度并排序。
'''
# 寻找所有相似的句子
r = []
inputs1 = tokenizer(text, return_tensors="pt")
for _ in range(n):
inputs1.to(device)
output = tokenizer.batch_decode(model.generate(**inputs1, top_p=0.95, do_sample=True, max_length=128), skip_special_tokens=True)[0].replace(" ","").replace(text, "") # 去除空格,去除原始text文本。
r.append(output)
# 对相似的句子进行排序
r = [i for i in set(r) if i != text and len(i) > 0]
r = [text] + r
inputs2 = tokenizer(r, padding=True, return_tensors="pt")
with torch.no_grad():
inputs2.to(device)
outputs = model(**inputs2)
Z = outputs.pooler_output.cpu().numpy()
Z /= (Z**2).sum(axis=1, keepdims=True)**0.5
argsort = np.dot(Z[1:], -Z[0]).argsort()
return [r[i + 1] for i in argsort[:k]]
out = gen_synonyms("广州和深圳哪个好?")
print(out)
# ['深圳和广州哪个好?',
# '广州和深圳哪个好',
# '深圳和广州哪个好',
# '深圳和广州哪个比较好。',
# '深圳和广州哪个最好?',
# '深圳和广州哪个比较好',
# '广州和深圳那个比较好',
# '深圳和广州哪个更好?',
# '深圳与广州哪个好',
# '深圳和广州,哪个比较好',
# '广州与深圳比较哪个好',
# '深圳和广州哪里比较好',
# '深圳还是广州比较好?',
# '广州和深圳哪个地方好一些?',
# '广州好还是深圳好?',
# '广州好还是深圳好呢?',
# '广州与深圳哪个地方好点?',
# '深圳好还是广州好',
# '广州好还是深圳好',
# '广州和深圳哪个城市好?']
```
|
{"language": "zh", "tags": ["roformer", "pytorch", "tf2.0"], "inference": false}
|
junnyu/roformer_chinese_sim_char_ft_small
| null |
[
"transformers",
"pytorch",
"roformer",
"text-generation",
"tf2.0",
"zh",
"autotrain_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #roformer #text-generation #tf2.0 #zh #autotrain_compatible #region-us
|
# 安装
- pip install roformer==0.4.3
# 使用
|
[
"# 安装\n- pip install roformer==0.4.3",
"# 使用"
] |
[
"TAGS\n#transformers #pytorch #roformer #text-generation #tf2.0 #zh #autotrain_compatible #region-us \n",
"# 安装\n- pip install roformer==0.4.3",
"# 使用"
] |
text-generation
|
transformers
|
# 安装
- pip install roformer==0.4.3
# 使用
```python
import torch
import numpy as np
from roformer import RoFormerForCausalLM, RoFormerConfig
from transformers import BertTokenizer
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
pretrained_model = "junnyu/roformer_chinese_sim_char_base"
tokenizer = BertTokenizer.from_pretrained(pretrained_model)
config = RoFormerConfig.from_pretrained(pretrained_model)
config.is_decoder = True
config.eos_token_id = tokenizer.sep_token_id
config.pooler_activation = "linear"
model = RoFormerForCausalLM.from_pretrained(pretrained_model, config=config)
model.to(device)
model.eval()
def gen_synonyms(text, n=100, k=20):
''''含义: 产生sent的n个相似句,然后返回最相似的k个。
做法:用seq2seq生成,并用encoder算相似度并排序。
'''
# 寻找所有相似的句子
r = []
inputs1 = tokenizer(text, return_tensors="pt")
for _ in range(n):
inputs1.to(device)
output = tokenizer.batch_decode(model.generate(**inputs1, top_p=0.95, do_sample=True, max_length=128), skip_special_tokens=True)[0].replace(" ","").replace(text, "") # 去除空格,去除原始text文本。
r.append(output)
# 对相似的句子进行排序
r = [i for i in set(r) if i != text and len(i) > 0]
r = [text] + r
inputs2 = tokenizer(r, padding=True, return_tensors="pt")
with torch.no_grad():
inputs2.to(device)
outputs = model(**inputs2)
Z = outputs.pooler_output.cpu().numpy()
Z /= (Z**2).sum(axis=1, keepdims=True)**0.5
argsort = np.dot(Z[1:], -Z[0]).argsort()
return [r[i + 1] for i in argsort[:k]]
out = gen_synonyms("广州和深圳哪个好?")
print(out)
# ['深圳和广州哪个好?',
# '广州和深圳哪个好',
# '深圳和广州哪个好',
# '深圳和广州哪个比较好。',
# '深圳和广州哪个最好?',
# '深圳和广州哪个比较好',
# '广州和深圳那个比较好',
# '深圳和广州哪个更好?',
# '深圳与广州哪个好',
# '深圳和广州,哪个比较好',
# '广州与深圳比较哪个好',
# '深圳和广州哪里比较好',
# '深圳还是广州比较好?',
# '广州和深圳哪个地方好一些?',
# '广州好还是深圳好?',
# '广州好还是深圳好呢?',
# '广州与深圳哪个地方好点?',
# '深圳好还是广州好',
# '广州好还是深圳好',
# '广州和深圳哪个城市好?']
```
|
{"language": "zh", "tags": ["roformer", "pytorch", "tf2.0"], "inference": false}
|
junnyu/roformer_chinese_sim_char_small
| null |
[
"transformers",
"pytorch",
"roformer",
"text-generation",
"tf2.0",
"zh",
"autotrain_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #roformer #text-generation #tf2.0 #zh #autotrain_compatible #region-us
|
# 安装
- pip install roformer==0.4.3
# 使用
|
[
"# 安装\n- pip install roformer==0.4.3",
"# 使用"
] |
[
"TAGS\n#transformers #pytorch #roformer #text-generation #tf2.0 #zh #autotrain_compatible #region-us \n",
"# 安装\n- pip install roformer==0.4.3",
"# 使用"
] |
fill-mask
|
transformers
|
## 介绍
### tf版本
https://github.com/ZhuiyiTechnology/roformer
### pytorch版本+tf2.0版本
https://github.com/JunnYu/RoFormer_pytorch
## pytorch使用
```python
import torch
from transformers import RoFormerForMaskedLM, RoFormerTokenizer
text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_small")
pt_model = RoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_small")
pt_inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
pt_outputs = pt_model(**pt_inputs).logits[0]
pt_outputs_sentence = "pytorch: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(pt_outputs[i].topk(k=5)[1])
pt_outputs_sentence += "[" + "||".join(tokens) + "]"
else:
pt_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(pt_outputs_sentence)
# pytorch: 今天[天气||心情||感觉||环境||下午]很好,我[要||想||就||可以||去]去公园玩。
```
## tensorflow2.0使用
```python
import tensorflow as tf
from transformers import RoFormerTokenizer, TFRoFormerForMaskedLM
text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_small")
tf_model = TFRoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_small")
tf_inputs = tokenizer(text, return_tensors="tf")
tf_outputs = tf_model(**tf_inputs, training=False).logits[0]
tf_outputs_sentence = "tf2.0: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(
tf.math.top_k(tf_outputs[i], k=5)[1])
tf_outputs_sentence += "[" + "||".join(tokens) + "]"
else:
tf_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(tf_outputs_sentence)
# tf2.0: 今天[天气||心情||感觉||环境||下午]很好,我[要||想||就||可以||去]去公园玩。
```
## 引用
Bibtex:
```tex
@misc{su2021roformer,
title={RoFormer: Enhanced Transformer with Rotary Position Embedding},
author={Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu},
year={2021},
eprint={2104.09864},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "zh", "tags": ["roformer", "pytorch", "tf2.0"], "widget": [{"text": "\u4eca\u5929[MASK]\u5f88\u597d\uff0c\u6211\u60f3\u53bb\u516c\u56ed\u73a9\uff01"}]}
|
junnyu/roformer_chinese_small
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"roformer",
"fill-mask",
"tf2.0",
"zh",
"arxiv:2104.09864",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.09864"
] |
[
"zh"
] |
TAGS
#transformers #pytorch #tf #jax #roformer #fill-mask #tf2.0 #zh #arxiv-2104.09864 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
## 介绍
### tf版本
URL
### pytorch版本+tf2.0版本
URL
## pytorch使用
## tensorflow2.0使用
## 引用
Bibtex:
|
[
"## 介绍",
"### tf版本 \nURL",
"### pytorch版本+tf2.0版本\nURL",
"## pytorch使用",
"## tensorflow2.0使用",
"## 引用\n\nBibtex:"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #roformer #fill-mask #tf2.0 #zh #arxiv-2104.09864 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"## 介绍",
"### tf版本 \nURL",
"### pytorch版本+tf2.0版本\nURL",
"## pytorch使用",
"## tensorflow2.0使用",
"## 引用\n\nBibtex:"
] |
null | null |
# paddle paddle版本的RoFormer
# 需要安装最新的paddlenlp
`pip install git+https://github.com/PaddlePaddle/PaddleNLP.git`
## 预训练模型转换
预训练模型可以从 huggingface/transformers 转换而来,方法如下(适用于roformer模型,其他模型按情况调整):
1. 从huggingface.co获取roformer模型权重
2. 设置参数运行convert.py代码
3. 例子:
假设我想转换https://huggingface.co/junnyu/roformer_chinese_base 权重
- (1)首先下载 https://huggingface.co/junnyu/roformer_chinese_base/tree/main 中的pytorch_model.bin文件,假设我们存入了`./roformer_chinese_base/pytorch_model.bin`
- (2)运行convert.py
```bash
python convert.py \
--pytorch_checkpoint_path ./roformer_chinese_base/pytorch_model.bin \
--paddle_dump_path ./roformer_chinese_base/model_state.pdparams
```
- (3)最终我们得到了转化好的权重`./roformer_chinese_base/model_state.pdparams`
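After conversion, the weights can presumably be loaded from the local directory with PaddleNLP's `from_pretrained`; a hedged sketch (it assumes the matching config and vocab files have also been placed in that folder):

```python
# Hedged sketch: loading the converted weights from the local directory produced above.
# Assumes model_config.json and the vocab file are also present in ./roformer_chinese_base.
from paddlenlp.transformers import RoFormerModel, RoFormerTokenizer

tokenizer = RoFormerTokenizer.from_pretrained("./roformer_chinese_base")
model = RoFormerModel.from_pretrained("./roformer_chinese_base")
```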
## 预训练MLM测试
### test_mlm.py
```python
import paddle
import argparse
from paddlenlp.transformers import RoFormerForPretraining, RoFormerTokenizer
def test_mlm(text, model_name):
model = RoFormerForPretraining.from_pretrained(model_name)
model.eval()
tokenizer = RoFormerTokenizer.from_pretrained(model_name)
tokens = ["[CLS]"]
text_list = text.split("[MASK]")
for i,t in enumerate(text_list):
tokens.extend(tokenizer.tokenize(t))
if i==len(text_list)-1:
tokens.extend(["[SEP]"])
else:
tokens.extend(["[MASK]"])
input_ids_list = tokenizer.convert_tokens_to_ids(tokens)
input_ids = paddle.to_tensor([input_ids_list])
with paddle.no_grad():
pd_outputs = model(input_ids)[0][0]
pd_outputs_sentence = "paddle: "
for i, id in enumerate(input_ids_list):
if id == tokenizer.convert_tokens_to_ids(["[MASK]"])[0]:
tokens = tokenizer.convert_ids_to_tokens(pd_outputs[i].topk(5)[1].tolist())
pd_outputs_sentence += "[" + "||".join(tokens) + "]"
else:
pd_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True)
)
print(pd_outputs_sentence)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--model_name", default="roformer-chinese-base", type=str, help="Pretrained roformer name or path."
)
parser.add_argument(
"--text", default="今天[MASK]很好,我想去公园玩!", type=str, help="MLM text."
)
args = parser.parse_args()
test_mlm(text=args.text, model_name=args.model_name)
```
### 输出
```bash
python test_mlm.py --model_name roformer-chinese-base --text 今天[MASK]很好,我想去公园玩!
# paddle: 今天[天气||天||阳光||太阳||空气]很好,我想去公园玩!
python test_mlm.py --model_name roformer-chinese-base --text 北京是[MASK]的首都!
# paddle: 北京是[中国||谁||中华人民共和国||我们||中华民族]的首都!
python test_mlm.py --model_name roformer-chinese-char-base --text 今天[MASK]很好,我想去公园玩!
# paddle: 今天[天||气||都||风||人]很好,我想去公园玩!
python test_mlm.py --model_name roformer-chinese-char-base --text 北京是[MASK]的首都!
# paddle: 北京是[谁||我||你||他||国]的首都!
```
|
{}
|
junnyu/roformer_paddle
| null |
[
"paddlepaddle",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#paddlepaddle #region-us
|
# paddle paddle版本的RoFormer
# 需要安装最新的paddlenlp
'pip install git+URL
## 预训练模型转换
预训练模型可以从 huggingface/transformers 转换而来,方法如下(适用于roformer模型,其他模型按情况调整):
1. 从huggingface.co获取roformer模型权重
2. 设置参数运行convert.py代码
3. 例子:
假设我想转换https://URL 权重
- (1)首先下载 URL 中的pytorch_model.bin文件,假设我们存入了'./roformer_chinese_base/pytorch_model.bin'
- (2)运行convert.py
- (3)最终我们得到了转化好的权重'./roformer_chinese_base/model_state.pdparams'
## 预训练MLM测试
### test_mlm.py
### 输出
|
[
"# paddle paddle版本的RoFormer",
"# 需要安装最新的paddlenlp\n'pip install git+URL",
"## 预训练模型转换\n\n预训练模型可以从 huggingface/transformers 转换而来,方法如下(适用于roformer模型,其他模型按情况调整):\n\n1. 从huggingface.co获取roformer模型权重\n2. 设置参数运行convert.py代码\n3. 例子:\n 假设我想转换https://URL 权重\n - (1)首先下载 URL 中的pytorch_model.bin文件,假设我们存入了'./roformer_chinese_base/pytorch_model.bin'\n - (2)运行convert.py\n \n - (3)最终我们得到了转化好的权重'./roformer_chinese_base/model_state.pdparams'",
"## 预训练MLM测试",
"### test_mlm.py",
"### 输出"
] |
[
"TAGS\n#paddlepaddle #region-us \n",
"# paddle paddle版本的RoFormer",
"# 需要安装最新的paddlenlp\n'pip install git+URL",
"## 预训练模型转换\n\n预训练模型可以从 huggingface/transformers 转换而来,方法如下(适用于roformer模型,其他模型按情况调整):\n\n1. 从huggingface.co获取roformer模型权重\n2. 设置参数运行convert.py代码\n3. 例子:\n 假设我想转换https://URL 权重\n - (1)首先下载 URL 中的pytorch_model.bin文件,假设我们存入了'./roformer_chinese_base/pytorch_model.bin'\n - (2)运行convert.py\n \n - (3)最终我们得到了转化好的权重'./roformer_chinese_base/model_state.pdparams'",
"## 预训练MLM测试",
"### test_mlm.py",
"### 输出"
] |
feature-extraction
|
transformers
|
# 一、 个人在openwebtext数据集上添加rotary-position-embedding,训练得到的electra-small模型
# 二、 复现结果(dev dataset)
|Model|CoLA|SST|MRPC|STS|QQP|MNLI|QNLI|RTE|Avg.|
|---|---|---|---|---|---|---|---|---|---|
|ELECTRA-Small-OWT(original)|56.8|88.3|87.4|86.8|88.3|78.9|87.9|68.5|80.36|
|**ELECTRA-RoFormer-Small-OWT (this)**|55.76|90.45|87.3|86.64|89.61|81.17|88.85|62.71|80.31|
# 三、 训练细节
- 数据集 openwebtext
- 训练batch_size 256
- 学习率lr 5e-4
- 最大句子长度max_seqlen 128
- 训练total step 50W
- GPU RTX3090
- 训练时间总共耗费55h
# 四、wandb日志
- [**预训练日志**](https://wandb.ai/junyu/electra_rotary_small_pretrain?workspace=user-junyu)
- [**GLUE微调日志**](https://wandb.ai/junyu/electra_rotary_glue_100?workspace=user-junyu)
# 五、 使用
```python
import torch
from transformers import ElectraTokenizer,RoFormerModel
tokenizer = ElectraTokenizer.from_pretrained("junnyu/roformer_small_discriminator")
model = RoFormerModel.from_pretrained("junnyu/roformer_small_discriminator")
inputs = tokenizer("Beijing is the capital of China.", return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
print(outputs[0].shape)
```
|
{"language": "en", "license": "mit", "tags": ["pytorch", "electra", "roformer", "rotary position embedding"], "datasets": ["openwebtext"], "thumbnail": "https://github.com/junnyu"}
|
junnyu/roformer_small_discriminator
| null |
[
"transformers",
"pytorch",
"roformer",
"feature-extraction",
"electra",
"rotary position embedding",
"en",
"dataset:openwebtext",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #roformer #feature-extraction #electra #rotary position embedding #en #dataset-openwebtext #license-mit #endpoints_compatible #has_space #region-us
|
一、 个人在openwebtext数据集上添加rotary-position-embedding,训练得到的electra-small模型
=====================================================================
二、 复现结果(dev dataset)
====================
三、 训练细节
=======
* 数据集 openwebtext
* 训练batch\_size 256
* 学习率lr 5e-4
* 最大句子长度max\_seqlen 128
* 训练total step 50W
* GPU RTX3090
* 训练时间总共耗费55h
四、wandb日志
=========
* 预训练日志
* GLUE微调日志
五、 使用
=====
|
[] |
[
"TAGS\n#transformers #pytorch #roformer #feature-extraction #electra #rotary position embedding #en #dataset-openwebtext #license-mit #endpoints_compatible #has_space #region-us \n"
] |
fill-mask
|
transformers
|
# 一、 个人在openwebtext数据集上添加rotary-position-embedding,训练得到的electra-small模型
# 二、 复现结果(dev dataset)
|Model|CoLA|SST|MRPC|STS|QQP|MNLI|QNLI|RTE|Avg.|
|---|---|---|---|---|---|---|---|---|---|
|ELECTRA-Small-OWT(original)|56.8|88.3|87.4|86.8|88.3|78.9|87.9|68.5|80.36|
|**ELECTRA-RoFormer-Small-OWT (this)**|55.76|90.45|87.3|86.64|89.61|81.17|88.85|62.71|80.31|
# 三、 训练细节
- 数据集 openwebtext
- 训练batch_size 256
- 学习率lr 5e-4
- 最大句子长度max_seqlen 128
- 训练total step 50W
- GPU RTX3090
- 训练时间总共耗费55h
# 四、wandb日志
- [**预训练日志**](https://wandb.ai/junyu/electra_rotary_small_pretrain?workspace=user-junyu)
- [**GLUE微调日志**](https://wandb.ai/junyu/electra_rotary_glue_100?workspace=user-junyu)
# 五、 使用
```python
import torch
from transformers import ElectraTokenizer,RoFormerForMaskedLM
text = "Beijing is the capital of [MASK]."
tokenizer = ElectraTokenizer.from_pretrained("junnyu/roformer_small_generator")
pt_model = RoFormerForMaskedLM.from_pretrained(
"junnyu/roformer_small_generator")
pt_inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
pt_outputs = pt_model(**pt_inputs).logits[0]
pt_outputs_sentence = "pytorch: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(pt_outputs[i].topk(k=5)[1])
pt_outputs_sentence += "[" + "||".join(tokens) + "]"
else:
pt_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))+" "
print(pt_outputs_sentence)
# pytorch: beijing is the capital of [china||beijing||taiwan||india||shanghai].
```
|
{"language": "en", "license": "mit", "tags": ["pytorch", "electra", "masked-lm", "rotary position embedding"], "datasets": ["openwebtext"], "thumbnail": "https://github.com/junnyu", "widget": [{"text": "Paris is the [MASK] of France."}]}
|
junnyu/roformer_small_generator
| null |
[
"transformers",
"pytorch",
"roformer",
"fill-mask",
"electra",
"masked-lm",
"rotary position embedding",
"en",
"dataset:openwebtext",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #roformer #fill-mask #electra #masked-lm #rotary position embedding #en #dataset-openwebtext #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
一、 个人在openwebtext数据集上添加rotary-position-embedding,训练得到的electra-small模型
=====================================================================
二、 复现结果(dev dataset)
====================
三、 训练细节
=======
* 数据集 openwebtext
* 训练batch\_size 256
* 学习率lr 5e-4
* 最大句子长度max\_seqlen 128
* 训练total step 50W
* GPU RTX3090
* 训练时间总共耗费55h
四、wandb日志
=========
* 预训练日志
* GLUE微调日志
五、 使用
=====
|
[] |
[
"TAGS\n#transformers #pytorch #roformer #fill-mask #electra #masked-lm #rotary position embedding #en #dataset-openwebtext #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
fill-mask
|
transformers
|
This is the `MixedCorpus+BertEncoder(large)+MlmTarget` checkpoint from https://github.com/dbiir/UER-py/wiki/Modelzoo
(original weights: https://share.weiyun.com/5G90sMJ).
Pre-trained on a mixed large Chinese corpus. The configuration file is `bert_large_config.json`.
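The card does not include a usage snippet; a minimal hedged sketch using the widget sentence with the standard fill-mask pipeline:

```python
# Hedged sketch (not from the original card): masked-token prediction with the
# fill-mask pipeline; the input is the card's widget example.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="junnyu/uer_large")
print(fill_mask("巴黎是[MASK]国的首都。"))
```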
## 引用
```tex
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
|
{"language": "zh", "tags": ["bert", "pytorch"], "widget": [{"text": "\u5df4\u9ece\u662f[MASK]\u56fd\u7684\u9996\u90fd\u3002"}]}
|
junnyu/uer_large
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"zh",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #bert #fill-mask #zh #autotrain_compatible #endpoints_compatible #region-us
|
URL 中的
MixedCorpus+BertEncoder(large)+MlmTarget
URL
Pre-trained on mixed large Chinese corpus. The configuration file is bert_large_config.json
## 引用
|
[
"## 引用"
] |
[
"TAGS\n#transformers #pytorch #bert #fill-mask #zh #autotrain_compatible #endpoints_compatible #region-us \n",
"## 引用"
] |
fill-mask
|
transformers
|
## Introduction
### TF version
https://github.com/ZhuiyiTechnology/WoBERT
### PyTorch version
https://github.com/JunnYu/WoBERT_pytorch
## Installation (mainly for installing WoBertTokenizer)
Note: transformers version >= 4.7.0 is required.
WoBertTokenizer is implemented identically to RoFormerTokenizer, so you can simply use RoFormerTokenizer.
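A minimal install sketch for that requirement (the pin below is just the stated minimum version):
```bash
pip install "transformers>=4.7.0"
```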
## Usage
```python
import torch
from transformers import BertForMaskedLM as WoBertForMaskedLM
from transformers import RoFormerTokenizer as WoBertTokenizer
pretrained_model_or_path_list = [
"junnyu/wobert_chinese_plus_base", "junnyu/wobert_chinese_base"
]
for path in pretrained_model_or_path_list:
text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = WoBertTokenizer.from_pretrained(path)
model = WoBertForMaskedLM.from_pretrained(path)
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs).logits[0]
outputs_sentence = ""
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(outputs[i].topk(k=5)[1])
outputs_sentence += "[" + "||".join(tokens) + "]"
else:
outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id],
skip_special_tokens=True))
print(outputs_sentence)
# RoFormer 今天[天气||天||心情||阳光||空气]很好,我[想||要||打算||准备||喜欢]去公园玩。
# PLUS WoBERT 今天[天气||阳光||天||心情||空气]很好,我[想||要||打算||准备||就]去公园玩。
# WoBERT 今天[天气||阳光||天||心情||空气]很好,我[想||要||就||准备||也]去公园玩。
```
## Citation
Bibtex:
```tex
@techreport{zhuiyiwobert,
title={WoBERT: Word-based Chinese BERT model - ZhuiyiAI},
author={Jianlin Su},
year={2020},
url="https://github.com/ZhuiyiTechnology/WoBERT",
}
```
|
{"language": "zh", "tags": ["wobert"]}
|
junnyu/wobert_chinese_base
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"wobert",
"zh",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #jax #bert #fill-mask #wobert #zh #autotrain_compatible #endpoints_compatible #region-us
|
## Introduction
### TF version
URL
### PyTorch version
URL
## Installation (mainly for installing WoBertTokenizer)
Note: transformers version >= 4.7.0 is required.
WoBertTokenizer is implemented identically to RoFormerTokenizer, so you can simply use RoFormerTokenizer.
## Usage
## Citation
Bibtex:
|
[
"## 介绍",
"### tf版本 \nURL",
"### pytorch版本 \nURL",
"## 安装(主要为了安装WoBertTokenizer)\n注意:transformers版本需要>=4.7.0\nWoBertTokenizer的实现与RoFormerTokenizer是一样的,因此使用RoFormerTokenizer就可以了",
"## 使用",
"## 引用\n\nBibtex:"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #fill-mask #wobert #zh #autotrain_compatible #endpoints_compatible #region-us \n",
"## 介绍",
"### tf版本 \nURL",
"### pytorch版本 \nURL",
"## 安装(主要为了安装WoBertTokenizer)\n注意:transformers版本需要>=4.7.0\nWoBertTokenizer的实现与RoFormerTokenizer是一样的,因此使用RoFormerTokenizer就可以了",
"## 使用",
"## 引用\n\nBibtex:"
] |
fill-mask
|
transformers
|
## Introduction
### TF version
https://github.com/ZhuiyiTechnology/WoBERT
### PyTorch version
https://github.com/JunnYu/WoBERT_pytorch
## Installation (mainly for installing WoBertTokenizer)
```bash
pip install git+https://github.com/JunnYu/WoBERT_pytorch.git
```
## Usage
```python
import torch
from transformers import BertForMaskedLM as WoBertForMaskedLM
from wobert import WoBertTokenizer
pretrained_model_or_path_list = [
"junnyu/wobert_chinese_plus_base", "junnyu/wobert_chinese_base"
]
for path in pretrained_model_or_path_list:
text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = WoBertTokenizer.from_pretrained(path)
model = WoBertForMaskedLM.from_pretrained(path)
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs).logits[0]
outputs_sentence = ""
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(outputs[i].topk(k=5)[1])
outputs_sentence += "[" + "||".join(tokens) + "]"
else:
outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id],
skip_special_tokens=True))
print(outputs_sentence)
# RoFormer 今天[天气||天||心情||阳光||空气]很好,我[想||要||打算||准备||喜欢]去公园玩。
# PLUS WoBERT 今天[天气||阳光||天||心情||空气]很好,我[想||要||打算||准备||就]去公园玩。
# WoBERT 今天[天气||阳光||天||心情||空气]很好,我[想||要||就||准备||也]去公园玩。
```
## Citation
Bibtex:
```tex
@techreport{zhuiyiwobert,
title={WoBERT: Word-based Chinese BERT model - ZhuiyiAI},
author={Jianlin Su},
year={2020},
url="https://github.com/ZhuiyiTechnology/WoBERT",
}
```
|
{"language": "zh", "tags": ["wobert"], "inference": false}
|
junnyu/wobert_chinese_plus_base
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"wobert",
"zh",
"autotrain_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #jax #bert #fill-mask #wobert #zh #autotrain_compatible #has_space #region-us
|
## Introduction
### TF version
URL
### PyTorch version
URL
## Installation (mainly for installing WoBertTokenizer)
## Usage
## Citation
Bibtex:
|
[
"## 介绍",
"### tf版本 \nURL",
"### pytorch版本 \nURL",
"## 安装(主要为了安装WoBertTokenizer)",
"## 使用",
"## 引用\n\nBibtex:"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #fill-mask #wobert #zh #autotrain_compatible #has_space #region-us \n",
"## 介绍",
"### tf版本 \nURL",
"### pytorch版本 \nURL",
"## 安装(主要为了安装WoBertTokenizer)",
"## 使用",
"## 引用\n\nBibtex:"
] |
null | null |
Text Emotion Recognition using RoBERTa-base
|
{}
|
junxtjx/roberta-base_TER
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
Text Emotion Recognition using RoBERTa-base
|
[] |
[
"TAGS\n#region-us \n"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_finetuning_test
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4023
- Accuracy: 0.8284
- F1: 0.8818
- Combined Score: 0.8551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.11.0
|
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "bert_finetuning_test", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE MRPC", "type": "glue", "args": "mrpc"}, "metrics": [{"type": "accuracy", "value": 0.8284313725490197, "name": "Accuracy"}, {"type": "f1", "value": 0.8817567567567567, "name": "F1"}]}]}]}
|
junzai/demo
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# bert_finetuning_test
This model is a fine-tuned version of bert-base-uncased on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4023
- Accuracy: 0.8284
- F1: 0.8818
- Combined Score: 0.8551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.11.0
|
[
"# bert_finetuning_test\n\nThis model is a fine-tuned version of bert-base-uncased on the GLUE MRPC dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.4023\n- Accuracy: 0.8284\n- F1: 0.8818\n- Combined Score: 0.8551",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.17.0\n- Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert_finetuning_test\n\nThis model is a fine-tuned version of bert-base-uncased on the GLUE MRPC dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.4023\n- Accuracy: 0.8284\n- F1: 0.8818\n- Combined Score: 0.8551",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.17.0\n- Tokenizers 0.11.0"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_finetuning_test
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4023
- Accuracy: 0.8284
- F1: 0.8818
- Combined Score: 0.8551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.11.0
|
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "bert_finetuning_test", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE MRPC", "type": "glue", "args": "mrpc"}, "metrics": [{"type": "accuracy", "value": 0.8284313725490197, "name": "Accuracy"}, {"type": "f1", "value": 0.8817567567567567, "name": "F1"}]}]}]}
|
junzai/demotest
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# bert_finetuning_test
This model is a fine-tuned version of bert-base-uncased on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4023
- Accuracy: 0.8284
- F1: 0.8818
- Combined Score: 0.8551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.11.0
|
[
"# bert_finetuning_test\n\nThis model is a fine-tuned version of bert-base-uncased on the GLUE MRPC dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.4023\n- Accuracy: 0.8284\n- F1: 0.8818\n- Combined Score: 0.8551",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.17.0\n- Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert_finetuning_test\n\nThis model is a fine-tuned version of bert-base-uncased on the GLUE MRPC dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.4023\n- Accuracy: 0.8284\n- F1: 0.8818\n- Combined Score: 0.8551",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.17.0\n- Tokenizers 0.11.0"
] |
text-classification
|
transformers
|
## Model Description
1. Based on the uncased BERT pretrained model with a linear output layer.
2. Added several commonly-used emoji and tokens to the special token list of the tokenizer.
3. Did label smoothing while training.
4. Used weighted loss and focal loss to help the cases which trained badly.
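For quick inference outside the hosted widget, a minimal sketch (assuming the checkpoint loads with the standard text-classification pipeline; the example sentence is this card's widget text):
```python
from transformers import pipeline

# Load the fine-tuned GoEmotions (Ekman grouping) classifier
classifier = pipeline("text-classification", model="justin871030/bert-base-uncased-goemotions-ekman-finetuned")
print(classifier("Thanks for giving advice to the people who need it! 👌🙏"))
```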
|
{"language": "en", "license": "mit", "tags": ["go-emotion", "text-classification", "pytorch"], "datasets": ["go_emotions"], "metrics": ["f1"], "widget": [{"text": "Thanks for giving advice to the people who need it! \ud83d\udc4c\ud83d\ude4f"}]}
|
justin871030/bert-base-uncased-goemotions-ekman-finetuned
| null |
[
"transformers",
"pytorch",
"bert",
"go-emotion",
"text-classification",
"en",
"dataset:go_emotions",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #go-emotion #text-classification #en #dataset-go_emotions #license-mit #endpoints_compatible #region-us
|
## Model Description
1. Based on the uncased BERT pretrained model with a linear output layer.
2. Added several commonly-used emoji and tokens to the special token list of the tokenizer.
3. Did label smoothing while training.
4. Used weighted loss and focal loss to help the cases which trained badly.
|
[
"## Model Description\n1. Based on the uncased BERT pretrained model with a linear output layer.\n2. Added several commonly-used emoji and tokens to the special token list of the tokenizer.\n3. Did label smoothing while training.\n4. Used weighted loss and focal loss to help the cases which trained badly."
] |
[
"TAGS\n#transformers #pytorch #bert #go-emotion #text-classification #en #dataset-go_emotions #license-mit #endpoints_compatible #region-us \n",
"## Model Description\n1. Based on the uncased BERT pretrained model with a linear output layer.\n2. Added several commonly-used emoji and tokens to the special token list of the tokenizer.\n3. Did label smoothing while training.\n4. Used weighted loss and focal loss to help the cases which trained badly."
] |
text-classification
|
transformers
|
## Model Description
1. Based on the uncased BERT pretrained model with a linear output layer.
2. Added several commonly-used emoji and tokens to the special token list of the tokenizer.
3. Did label smoothing while training.
4. Used weighted loss and focal loss to help the cases which trained badly.
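A minimal PyTorch sketch of the weighted focal-loss idea from point 4 above (a single-label formulation with illustrative hyperparameters; the actual training code linked in the tutorial below may differ, e.g. use a multi-label variant):
```python
import torch.nn.functional as F

def weighted_focal_loss(logits, targets, alpha=None, gamma=2.0):
    # Focal loss down-weights well-classified examples so the badly trained
    # cases contribute more to the gradient; alpha adds per-class weights.
    log_probs = F.log_softmax(logits, dim=-1)
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # log-prob of the true class
    pt = log_pt.exp()
    loss = -((1.0 - pt) ** gamma) * log_pt
    if alpha is not None:                 # optional per-class weights (the "weighted loss" part)
        loss = loss * alpha[targets]
    return loss.mean()
```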
## Results
Best Result of `Macro F1` - 70%
## Tutorial Link
- [GitHub](https://github.com/justin871030/GoEmotions)
|
{"language": "en", "license": "mit", "tags": ["go-emotion", "text-classification", "pytorch"], "datasets": ["go_emotions"], "metrics": ["f1"], "widget": [{"text": "Thanks for giving advice to the people who need it! \ud83d\udc4c\ud83d\ude4f"}]}
|
justin871030/bert-base-uncased-goemotions-group-finetuned
| null |
[
"transformers",
"pytorch",
"bert",
"go-emotion",
"text-classification",
"en",
"dataset:go_emotions",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #go-emotion #text-classification #en #dataset-go_emotions #license-mit #endpoints_compatible #region-us
|
## Model Description
1. Based on the uncased BERT pretrained model with a linear output layer.
2. Added several commonly-used emoji and tokens to the special token list of the tokenizer.
3. Did label smoothing while training.
4. Used weighted loss and focal loss to help the cases which trained badly.
## Results
Best Result of 'Macro F1' - 70%
## Tutorial Link
- GitHub
|
[
"## Model Description\n1. Based on the uncased BERT pretrained model with a linear output layer.\n2. Added several commonly-used emoji and tokens to the special token list of the tokenizer.\n3. Did label smoothing while training.\n4. Used weighted loss and focal loss to help the cases which trained badly.",
"## Results\nBest Result of 'Macro F1' - 70%",
"## Tutorial Link\n- GitHub"
] |
[
"TAGS\n#transformers #pytorch #bert #go-emotion #text-classification #en #dataset-go_emotions #license-mit #endpoints_compatible #region-us \n",
"## Model Description\n1. Based on the uncased BERT pretrained model with a linear output layer.\n2. Added several commonly-used emoji and tokens to the special token list of the tokenizer.\n3. Did label smoothing while training.\n4. Used weighted loss and focal loss to help the cases which trained badly.",
"## Results\nBest Result of 'Macro F1' - 70%",
"## Tutorial Link\n- GitHub"
] |
text-classification
|
transformers
|
## Model Description
1. Based on the uncased BERT pretrained model with a linear output layer.
2. Added several commonly-used emoji and tokens to the special token list of the tokenizer.
3. Did label smoothing while training.
4. Used weighted loss and focal loss to help the cases which trained badly.
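A minimal PyTorch sketch of the label smoothing mentioned in point 3 above (the smoothing value 0.1 is illustrative, not taken from this repo):
```python
import torch.nn.functional as F

def smoothed_cross_entropy(logits, targets, epsilon=0.1):
    # Mix the one-hot target with a uniform distribution over classes,
    # which keeps the model from becoming over-confident on noisy labels.
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # standard cross-entropy term
    uniform = -log_probs.mean(dim=-1)                            # loss against the uniform target
    return ((1.0 - epsilon) * nll + epsilon * uniform).mean()
```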
## Results
Best Result of `Macro F1` - 53%
## Tutorial Link
- [GitHub](https://github.com/justin871030/GoEmotions)
|
{"language": "en", "license": "mit", "tags": ["go-emotion", "text-classification", "pytorch"], "datasets": ["go_emotions"], "metrics": ["f1"], "widget": [{"text": "Thanks for giving advice to the people who need it! \ud83d\udc4c\ud83d\ude4f"}]}
|
justin871030/bert-base-uncased-goemotions-original-finetuned
| null |
[
"transformers",
"pytorch",
"bert",
"go-emotion",
"text-classification",
"en",
"dataset:go_emotions",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #go-emotion #text-classification #en #dataset-go_emotions #license-mit #endpoints_compatible #region-us
|
## Model Description
1. Based on the uncased BERT pretrained model with a linear output layer.
2. Added several commonly-used emoji and tokens to the special token list of the tokenizer.
3. Did label smoothing while training.
4. Used weighted loss and focal loss to help the cases which trained badly.
## Results
Best Result of 'Macro F1' - 53%
## Tutorial Link
- GitHub
|
[
"## Model Description\n1. Based on the uncased BERT pretrained model with a linear output layer.\n2. Added several commonly-used emoji and tokens to the special token list of the tokenizer.\n3. Did label smoothing while training.\n4. Used weighted loss and focal loss to help the cases which trained badly.",
"## Results\nBest Result of 'Macro F1' - 53%",
"## Tutorial Link\n- GitHub"
] |
[
"TAGS\n#transformers #pytorch #bert #go-emotion #text-classification #en #dataset-go_emotions #license-mit #endpoints_compatible #region-us \n",
"## Model Description\n1. Based on the uncased BERT pretrained model with a linear output layer.\n2. Added several commonly-used emoji and tokens to the special token list of the tokenizer.\n3. Did label smoothing while training.\n4. Used weighted loss and focal loss to help the cases which trained badly.",
"## Results\nBest Result of 'Macro F1' - 53%",
"## Tutorial Link\n- GitHub"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets
This model is a fine-tuned version of [justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets](https://huggingface.co/justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets), fine-tuned on [this google fact check dataset](https://huggingface.co/datasets/justinqbui/covid_fact_checked_google_api) (~3k examples), web-scraped data from [polifact covid info](https://huggingface.co/datasets/justinqbui/covid_fact_checked_polifact) (~1,200 examples), and ~1,200 tweets pulled from the CDC containing the words covid or vaccine.
It achieves the following results on the evaluation set (20% from the dataset randomly shuffled and selected to serve as a test set):
- Validation Loss: 0.267367
- Accuracy: 91.1370%
To use the model, use the inference API.
Alternatively, to run locally
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("justinqbui/bertweet-covid-vaccine-tweets-finetuned")
model = AutoModelForSequenceClassification.from_pretrained("justinqbui/bertweet-covid-vaccine-tweets-finetuned")
```
## Model description
This model is a fine-tuned version of pretrained version [justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets](https://huggingface.co/justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets). Click on [this](https://huggingface.co/justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets) to see how the pre-training was done.
This model was fine-tuned with a dataset of ~5,500 examples. A web scraper was used to scrape polifact and a script was used to pull from the google fact check API. Because ~80% of both these datasets were either false or misleading, I pulled about ~1,200 tweets from the CDC related to covid and labelled them as true. ~30% of this dataset is considered true and the rest false or misleading. Please see the published datasets above for more detailed information.
The tokenizer requires the emoji library to be installed.
```
!pip install nltk emoji
```
## Intended uses & limitations
The intended use of this model is to detect whether the contents of a covid tweet are potentially false or misleading. This model is not an end-all-be-all. It has many limitations. For example, if someone makes a post containing an image, but the attached image is satirical, this model would not be able to distinguish this. If a user links a website, the tokenizer allocates a special token for links, meaning the contents of the linked website are completely lost. If someone tweets a reply, this model can't look at the parent tweets and will lack context.
This model's dataset relies on the crowd-sourced annotations being accurate. The data is only accurate up until early December 2021. For example, it probably wouldn't do very well with tweets regarding the new omicron variant.
Example true inputs:
```
Covid vaccines are safe and effective. -> 97% true
Vaccinations are safe and help prevent covid. -> 97% true
```
Example false inputs:
```
Covid vaccines will kill you. -> 97% false
covid vaccines make you infertile. -> 97% false
```
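A minimal sketch of how scores like the ones above can be reproduced with the model and tokenizer loaded earlier (the label names come from the model config, so check `model.config.id2label` rather than assuming an order):
```python
import torch

text = "Covid vaccines are safe and effective."
inputs = tokenizer(text, return_tensors="pt")          # tokenizer/model from the snippet above
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
print({model.config.id2label[i]: round(p.item(), 4) for i, p in enumerate(probs)})
```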
## Training and evaluation data
This model was finetuned using [this google fact check dataset](https://huggingface.co/datasets/justinqbui/covid_fact_checked_google_api) (~3k examples), web-scraped data from [polifact covid info](https://huggingface.co/datasets/justinqbui/covid_fact_checked_polifact) (~1,200 examples), and ~1,200 tweets pulled from the CDC containing the words covid or vaccine.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-5
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Validation Loss | Accuracy |
|:-------------:|:-----:|:---------------:|:--------:|
| 0.435500 | 1.0 | 0.401900 | 0.906893 |
| 0.309700 | 2.0 | 0.265500 | 0.907789 |
| 0.266200 | 3.0 | 0.216500 | 0.911370 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"model-index": [{"name": "bertweet-covid--vaccine-tweets-finetuned", "results": []}]}
|
justinqbui/bertweet-covid-vaccine-tweets-finetuned
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #roberta #text-classification #autotrain_compatible #endpoints_compatible #has_space #region-us
|
bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets
==============================================================
This model is a fine-tuned version of justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets, fine-tuned on this google fact check dataset (~3k examples), web-scraped data from polifact covid info (~1,200 examples), and ~1,200 tweets pulled from the CDC containing the words covid or vaccine.
It achieves the following results on the evaluation set (20% from the dataset randomly shuffled and selected to serve as a test set):
* Validation Loss: 0.267367
* Accuracy: 91.1370%
To use the model, use the inference API.
Alternatively, to run locally
Model description
-----------------
This model is a fine-tuned version of pretrained version justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets. Click on this to see how the pre-training was done.
This model was fine-tuned with a dataset of ~5,500 examples. A web scraper was used to scrape polifact and a script was used to pull from the google fact check API. Because ~80% of both these datasets were either false or misleading, I pulled about ~1,200 tweets from the CDC related to covid and labelled them as true. ~30% of this dataset is considered true and the rest false or misleading. Please see the published datasets above for more detailed information.
The tokenizer requires the emoji library to be installed.
Intended uses & limitations
---------------------------
The intended use of this model is to detect whether the contents of a covid tweet are potentially false or misleading. This model is not an end-all-be-all. It has many limitations. For example, if someone makes a post containing an image, but the attached image is satirical, this model would not be able to distinguish this. If a user links a website, the tokenizer allocates a special token for links, meaning the contents of the linked website are completely lost. If someone tweets a reply, this model can't look at the parent tweets and will lack context.
This model's dataset relies on the crowd-sourced annotations being accurate. The data is only accurate up until early December 2021. For example, it probably wouldn't do very well with tweets regarding the new omicron variant.
Example true inputs:
Example false inputs:
Training and evaluation data
----------------------------
This model was finetuned using this google fact check dataset (~3k examples), web-scraped data from polifact covid info (~1,200 examples), and ~1,200 tweets pulled from the CDC containing the words covid or vaccine.
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-5
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.13.0
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-5\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0\n*",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #roberta #text-classification #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-5\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0\n*",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets
This model is a further pre-trained version of [vinai/bertweet-covid19-base-uncased](https://huggingface.co/vinai/bertweet-covid19-base-uncased) on masked language modeling using [a kaggle dataset](https://www.kaggle.com/kaushiksuresh147/covidvaccine-tweets) with tweets up until early December.
It achieves the following results on the evaluation set (15% from the dataset randomly selected to serve as a test set):
- Loss: 1.5089
- Perplexity: 4.64
To use the model, use the inference API.
Alternatively, to run locally
```
from transformers import pipeline
model = "justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets"
pipe = pipeline("fill-mask", model = model)
seq = "covid vaccines are <mask> and effective"
pipe(seq)
```
## Model description
This model is a further pretrained version of bertweet; both follow the training objectives described in the [RoBERTa paper](https://arxiv.org/pdf/1907.11692.pdf). While bertweet was only trained with 23M tweets up until September 2020, this model was further pre-trained using 300k tweets with #CovidVaccine.
The tokenizer requires the emoji library to be installed.
```
!pip install nltk emoji
```
## Intended uses & limitations
The intended use of this model is fine-tuning on downstream tasks that are closely related to covid and covid vaccines. This model has many potential biases and limitations; since it is trained on public tweets, it is bound to reproduce the biases that people express in their tweets.
In order to load the model and tokenizer, run
```
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets")
model = AutoModelForMaskedLM.from_pretrained("justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets")
```
## Training and evaluation data
This model was further pre-trained on 300k tweets containing #covidvaccines from this [kaggle dataset](https://www.kaggle.com/kaushiksuresh147/covidvaccine-tweets). The evaluation set was 15% of the tweets that were held out from the training data.
## Training procedure
See the training notebook found [here]().
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.5775 | 1.0 | 8931 | 1.5852 |
| 1.5715 | 2.0 | 17862 | 1.5701 |
| 1.5394 | 3.0 | 26793 | 1.5089 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "model-index": [{"name": "bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets", "results": []}]}
|
justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets
| null |
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"arxiv:1907.11692",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.11692"
] |
[] |
TAGS
#transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #region-us
|
bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets
==============================================================
This model is a further pre-trained version of vinai/bertweet-covid19-base-uncased on masked language modeling using a kaggle dataset with tweets up until early December.
It achieves the following results on the evaluation set (15% from the dataset randomly selected to serve as a test set):
* Loss: 1.5089
* Perplexity: 4.64
To use the model, use the inference API.
Alternatively, to run locally
Model description
-----------------
This model is a further pretrained version of bertweet; both follow the training objectives described in the RoBERTa paper. While bertweet was only trained with 23M tweets up until September 2020, this model was further pre-trained using 300k tweets with #CovidVaccine.
The tokenizer requires the emoji library to be installed.
Intended uses & limitations
---------------------------
The intended use of this model is fine-tuning on downstream tasks that are closely related to covid and covid vaccines. This model has many potential biases and limitations; since it is trained on public tweets, it is bound to reproduce the biases that people express in their tweets.
In order to load the model and tokenizer, run
Training and evaluation data
----------------------------
This model was further pre-trained on 300k tweets containing #covidvaccines from this kaggle dataset. The evaluation set was 15% of the tweets that were held out from the training data.
Training procedure
------------------
See the training notebook found here.
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.13.0
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 27366103
- CO2 Emissions (in grams): 32.912881644048
## Validation Metrics
- Loss: 0.18175844848155975
- Accuracy: 0.9437683592110785
- Precision: 0.9416809605488851
- Recall: 0.8459167950693375
- AUC: 0.9815242330050846
- F1: 0.8912337662337663
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/jwuthri/autonlp-shipping_status_2-27366103
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("jwuthri/autonlp-shipping_status_2-27366103", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("jwuthri/autonlp-shipping_status_2-27366103", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "unk", "tags": "autonlp", "datasets": ["jwuthri/autonlp-data-shipping_status_2"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 32.912881644048}
|
jwuthri/autonlp-shipping_status_2-27366103
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autonlp",
"unk",
"dataset:jwuthri/autonlp-data-shipping_status_2",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"unk"
] |
TAGS
#transformers #pytorch #distilbert #text-classification #autonlp #unk #dataset-jwuthri/autonlp-data-shipping_status_2 #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 27366103
- CO2 Emissions (in grams): 32.912881644048
## Validation Metrics
- Loss: 0.18175844848155975
- Accuracy: 0.9437683592110785
- Precision: 0.9416809605488851
- Recall: 0.8459167950693375
- AUC: 0.9815242330050846
- F1: 0.8912337662337663
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 27366103\n- CO2 Emissions (in grams): 32.912881644048",
"## Validation Metrics\n\n- Loss: 0.18175844848155975\n- Accuracy: 0.9437683592110785\n- Precision: 0.9416809605488851\n- Recall: 0.8459167950693375\n- AUC: 0.9815242330050846\n- F1: 0.8912337662337663",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #autonlp #unk #dataset-jwuthri/autonlp-data-shipping_status_2 #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 27366103\n- CO2 Emissions (in grams): 32.912881644048",
"## Validation Metrics\n\n- Loss: 0.18175844848155975\n- Accuracy: 0.9437683592110785\n- Precision: 0.9416809605488851\n- Recall: 0.8459167950693375\n- AUC: 0.9815242330050846\n- F1: 0.8912337662337663",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en-j-run
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9189
- Mae: 0.4634
## Model description
Trained following the MLT Tokyo Transformers workshop run by huggingface.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
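A sketch of `TrainingArguments` matching the hyperparameters above (the `output_dir` name is a placeholder; the Adam betas and epsilon listed are the library defaults):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-marc-en-j-run",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```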
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.2327 | 1.0 | 235 | 1.0526 | 0.6341 |
| 0.9943 | 2.0 | 470 | 0.9189 | 0.4634 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "model-index": [{"name": "xlm-roberta-base-finetuned-marc-en-j-run", "results": []}]}
|
jx88/xlm-roberta-base-finetuned-marc-en-j-run
| null |
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
xlm-roberta-base-finetuned-marc-en-j-run
========================================
This model is a fine-tuned version of xlm-roberta-base on the amazon\_reviews\_multi dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9189
* Mae: 0.4634
Model description
-----------------
Trained following the MLT Tokyo Transformers workshop run by huggingface.
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.14.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-cola
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4716
- Matthews Correlation: 0.5579
## Model description
More information needed
## Intended uses & limitations
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("jxuhf/roberta-base-finetuned-cola")
model = AutoModelForSequenceClassification.from_pretrained("jxuhf/roberta-base-finetuned-cola")
```
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4981 | 1.0 | 535 | 0.5162 | 0.5081 |
| 0.314 | 2.0 | 1070 | 0.4716 | 0.5579 |
### Framework versions
- Transformers 4.9.0
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model_index": [{"name": "roberta-base-finetuned-cola", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metric": {"name": "Matthews Correlation", "type": "matthews_correlation", "value": 0.557882735147727}}]}]}
|
jxuhf/roberta-base-finetuned-cola
| null |
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #dataset-glue #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
roberta-base-finetuned-cola
===========================
This model is a fine-tuned version of roberta-base on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4716
* Matthews Correlation: 0.5579
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.9.0
* Pytorch 1.9.0+cu102
* Datasets 1.10.2
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.0\n* Pytorch 1.9.0+cu102\n* Datasets 1.10.2\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #dataset-glue #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.0\n* Pytorch 1.9.0+cu102\n* Datasets 1.10.2\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
Labels Twitter biographies on [Openness](https://en.wikipedia.org/wiki/Openness_to_experience), strongly related to intellectual curiosity.
Intuitive: Associated with higher intellectual curiosity
Sensing: Associated with lower intellectual curiosity
Go to your Twitter profile, copy your biography and paste in the inference widget, remove any URLs and press hit!
Trained on self-described personality labels. Interpret as a continuous score, not as a discrete label. Have fun!
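For use outside the inference widget, a minimal sketch (the biography string is a made-up example input; the returned label names come from the model config):
```python
from transformers import pipeline

scorer = pipeline("text-classification", model="k-partha/curiosity_bert_bio")
print(scorer("PhD student. I read everything I can get my hands on."))  # returns a label and a score
```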
Note: Performance on inputs other than Twitter biographies [the training data source] is not verified.
For further details and expected performance, read the [paper](https://arxiv.org/abs/2109.06402).
|
{}
|
k-partha/curiosity_bert_bio
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2109.06402",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.06402"
] |
[] |
TAGS
#transformers #pytorch #bert #text-classification #arxiv-2109.06402 #autotrain_compatible #endpoints_compatible #region-us
|
Labels Twitter biographies on Openness, strongly related to intellectual curiosity.
Intuitive: Associated with higher intellectual curiosity
Sensing: Associated with lower intellectual curiosity
Go to your Twitter profile, copy your biography and paste in the inference widget, remove any URLs and press hit!
Trained on self-described personality labels. Interpret as a continuous score, not as a discrete label. Have fun!
Note: Performance on inputs other than Twitter biographies [the training data source] is not verified.
For further details and expected performance, read the paper.
|
[] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #arxiv-2109.06402 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
Rates Twitter biographies on decision-making preference: Thinking or Feeling. Roughly corresponds to [agreeableness.](https://en.wikipedia.org/wiki/Agreeableness)
Go to your Twitter profile, copy your biography and paste in the inference widget, remove any URLs and press hit!
Trained on self-described personality labels. Interpret as a continuous score, not as a discrete label. Remember that models employ pure statistical reasoning (and may consequently make no sense sometimes.)
Have fun!
Note: Performance on inputs other than Twitter biographies [the training data source] is not verified.
For further details and expected performance, read the [paper](https://arxiv.org/abs/2109.06402).
|
{}
|
k-partha/decision_bert_bio
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2109.06402",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.06402"
] |
[] |
TAGS
#transformers #pytorch #bert #text-classification #arxiv-2109.06402 #autotrain_compatible #endpoints_compatible #region-us
|
Rates Twitter biographies on decision-making preference: Thinking or Feeling. Roughly corresponds to agreeableness.
Go to your Twitter profile, copy your biography and paste in the inference widget, remove any URLs and press hit!
Trained on self-described personality labels. Interpret as a continuous score, not as a discrete label. Remember that models employ pure statistical reasoning (and may consequently make no sense sometimes.)
Have fun!
Note: Performance on inputs other than Twitter biographies [the training data source] is not verified.
For further details and expected performance, read the paper.
|
[] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #arxiv-2109.06402 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
Rates Twitter biographies on decision-making preference: Judging (focused, goal-oriented decision strategy) or Prospecting (open-ended, explorative strategy). Roughly corresponds to [conscientiousness](https://en.wikipedia.org/wiki/Conscientiousness)
Go to your Twitter profile, copy your biography and paste in the inference widget, remove any URLs and press hit!
Trained on self-described personality labels. Interpret as a continuous score, not as a discrete label.
Have fun!
Note: Performance on inputs other than Twitter biographies [the training data source] is not verified.
For further details and expected performance, read the [paper](https://arxiv.org/abs/2109.06402).
|
{}
|
k-partha/decision_style_bert_bio
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2109.06402",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.06402"
] |
[] |
TAGS
#transformers #pytorch #bert #text-classification #arxiv-2109.06402 #autotrain_compatible #endpoints_compatible #region-us
|
Rates Twitter biographies on decision-making preference: Judging (focused, goal-oriented decision strategy) or Prospecting (open-ended, explorative strategy). Roughly corresponds to conscientiousness
Go to your Twitter profile, copy your biography and paste in the inference widget, remove any URLs and press hit!
Trained on self-described personality labels. Interpret as a continuous score, not as a discrete label.
Have fun!
Note: Performance on inputs other than Twitter biographies [the training data source] is not verified.
For further details and expected performance, read the paper.
|
[] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #arxiv-2109.06402 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
Classifies Twitter biographies as either introverts or extroverts.
Go to your Twitter profile, copy your biography and paste in the inference widget, remove any URLs and press hit!
Trained on self-described personality labels. Interpret as a continuous score, not as a discrete label. Have fun!
Barack Obama: Extrovert; Ellen DeGeneres: Extrovert; Naomi Osaka: Introvert
Note: Performance on inputs other than Twitter biographies [the training data source] is not verified.
For further details and expected performance, read the [paper](https://arxiv.org/abs/2109.06402).
|
{}
|
k-partha/extrabert_bio
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2109.06402",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.06402"
] |
[] |
TAGS
#transformers #pytorch #bert #text-classification #arxiv-2109.06402 #autotrain_compatible #endpoints_compatible #region-us
|
Classifies Twitter biographies as either introverts or extroverts.
Go to your Twitter profile, copy your biography and paste in the inference widget, remove any URLs and press hit!
Trained on self-described personality labels. Interpret as a continuous score, not as a discrete label. Have fun!
Barack Obama: Extrovert; Ellen DeGeneres: Extrovert; Naomi Osaka: Introvert
Note: Performance on inputs other than Twitter biographies [the training data source] is not verified.
For further details and expected performance, read the paper.
|
[] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #arxiv-2109.06402 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
A copy of the model https://huggingface.co/cointegrated/rubert-tiny. Purely for testing!
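A minimal fill-mask sketch, assuming this copy loads the same way as the original rubert-tiny checkpoint (the example sentence is this card's widget text):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="k0t1k/test")
print(fill_mask("Миниатюрная модель для [MASK] разных задач."))
```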
|
{"language": ["ru", "en"], "license": "mit", "tags": ["russian", "fill-mask", "pretraining", "embeddings", "masked-lm", "tiny"], "widget": [{"text": "\u041c\u0438\u043d\u0438\u0430\u0442\u044e\u0440\u043d\u0430\u044f \u043c\u043e\u0434\u0435\u043b\u044c \u0434\u043b\u044f [MASK] \u0440\u0430\u0437\u043d\u044b\u0445 \u0437\u0430\u0434\u0430\u0447."}]}
|
k0t1k/test
| null |
[
"transformers",
"pytorch",
"bert",
"pretraining",
"russian",
"fill-mask",
"embeddings",
"masked-lm",
"tiny",
"ru",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ru",
"en"
] |
TAGS
#transformers #pytorch #bert #pretraining #russian #fill-mask #embeddings #masked-lm #tiny #ru #en #license-mit #endpoints_compatible #region-us
|
A copy of the URL model. Purely for testing!
|
[] |
[
"TAGS\n#transformers #pytorch #bert #pretraining #russian #fill-mask #embeddings #masked-lm #tiny #ru #en #license-mit #endpoints_compatible #region-us \n"
] |
null | null |
>tr|Q8ZR27|Q8ZR27_SALTY Putative glycerol dehydrogenase OS=Salmonella typhimurium (strain LT2 / SGSC1412 / ATCC 700720) OX=99287 GN=ybdH PE=3 SV=1
MNHTEIRVVTGPANYFSHAGSLERLTDFFTPEQLSHAVWVYGERAIAAARPYLPEAFERA
GAKHLPFTGHCSERHVAQLAHACNDDRQVVIGVGGGALLDTAKALARRLALPFVAIPTIA
ATCAAWTPLSVWYNDAGQALQFEIFDDANFLVLVEPRIILQAPDDYLLAGIGDTLAKWYE
AVVLAPQPETLPLTVRLGINSACAIRDLLLDSSEQALADKQQRRLTQAFCDVVDAIIAGG
GMVGGLGERYTRVAAAHAVHNGLTVLPQTEKFLHGTKVAYGILVQSALLGQDDVLAQLIT
AYRRFHLPARLSELDVDIHNTAEIDRVIAHTLRPVESIHYLPVTLTPDTLRAAFEKVEFF
RI
|
{}
|
k948181/ybdH-1
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
>tr|Q8ZR27|Q8ZR27_SALTY Putative glycerol dehydrogenase OS=Salmonella typhimurium (strain LT2 / SGSC1412 / ATCC 700720) OX=99287 GN=ybdH PE=3 SV=1
MNHTEIRVVTGPANYFSHAGSLERLTDFFTPEQLSHAVWVYGERAIAAARPYLPEAFERA
GAKHLPFTGHCSERHVAQLAHACNDDRQVVIGVGGGALLDTAKALARRLALPFVAIPTIA
ATCAAWTPLSVWYNDAGQALQFEIFDDANFLVLVEPRIILQAPDDYLLAGIGDTLAKWYE
AVVLAPQPETLPLTVRLGINSACAIRDLLLDSSEQALADKQQRRLTQAFCDVVDAIIAGG
GMVGGLGERYTRVAAAHAVHNGLTVLPQTEKFLHGTKVAYGILVQSALLGQDDVLAQLIT
AYRRFHLPARLSELDVDIHNTAEIDRVIAHTLRPVESIHYLPVTLTPDTLRAAFEKVEFF
RI
|
[] |
[
"TAGS\n#region-us \n"
] |
token-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Entity Extraction
- Model ID: 557515810
- CO2 Emissions (in grams): 2.96638567287195
## Validation Metrics
- Loss: 0.12897901237010956
- Accuracy: 0.9713212700580403
- Precision: 0.9475614228089475
- Recall: 0.96274217585693
- F1: 0.9550914803178709
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/kSaluja/autonlp-tele_new_5k-557515810
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("kSaluja/autonlp-tele_new_5k-557515810", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("kSaluja/autonlp-tele_new_5k-557515810", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "en", "tags": "autonlp", "datasets": ["kSaluja/autonlp-data-tele_new_5k"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 2.96638567287195}
|
kSaluja/autonlp-tele_new_5k-557515810
| null |
[
"transformers",
"pytorch",
"bert",
"token-classification",
"autonlp",
"en",
"dataset:kSaluja/autonlp-data-tele_new_5k",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #token-classification #autonlp #en #dataset-kSaluja/autonlp-data-tele_new_5k #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Entity Extraction
- Model ID: 557515810
- CO2 Emissions (in grams): 2.96638567287195
## Validation Metrics
- Loss: 0.12897901237010956
- Accuracy: 0.9713212700580403
- Precision: 0.9475614228089475
- Recall: 0.96274217585693
- F1: 0.9550914803178709
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Entity Extraction\n- Model ID: 557515810\n- CO2 Emissions (in grams): 2.96638567287195",
"## Validation Metrics\n\n- Loss: 0.12897901237010956\n- Accuracy: 0.9713212700580403\n- Precision: 0.9475614228089475\n- Recall: 0.96274217585693\n- F1: 0.9550914803178709",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #bert #token-classification #autonlp #en #dataset-kSaluja/autonlp-data-tele_new_5k #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Entity Extraction\n- Model ID: 557515810\n- CO2 Emissions (in grams): 2.96638567287195",
"## Validation Metrics\n\n- Loss: 0.12897901237010956\n- Accuracy: 0.9713212700580403\n- Precision: 0.9475614228089475\n- Recall: 0.96274217585693\n- F1: 0.9550914803178709",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
token-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Entity Extraction
- Model ID: 585716433
- CO2 Emissions (in grams): 2.379476355147211
## Validation Metrics
- Loss: 0.15210922062397003
- Accuracy: 0.9724770642201835
- Precision: 0.950836820083682
- Recall: 0.9625838333921638
- F1: 0.9566742676723382
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/kSaluja/autonlp-tele_red_data_model-585716433
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("kSaluja/autonlp-tele_red_data_model-585716433", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("kSaluja/autonlp-tele_red_data_model-585716433", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "en", "tags": "autonlp", "datasets": ["kSaluja/autonlp-data-tele_red_data_model"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 2.379476355147211}
|
kSaluja/autonlp-tele_red_data_model-585716433
| null |
[
"transformers",
"pytorch",
"bert",
"token-classification",
"autonlp",
"en",
"dataset:kSaluja/autonlp-data-tele_red_data_model",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #token-classification #autonlp #en #dataset-kSaluja/autonlp-data-tele_red_data_model #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Entity Extraction
- Model ID: 585716433
- CO2 Emissions (in grams): 2.379476355147211
## Validation Metrics
- Loss: 0.15210922062397003
- Accuracy: 0.9724770642201835
- Precision: 0.950836820083682
- Recall: 0.9625838333921638
- F1: 0.9566742676723382
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Entity Extraction\n- Model ID: 585716433\n- CO2 Emissions (in grams): 2.379476355147211",
"## Validation Metrics\n\n- Loss: 0.15210922062397003\n- Accuracy: 0.9724770642201835\n- Precision: 0.950836820083682\n- Recall: 0.9625838333921638\n- F1: 0.9566742676723382",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #bert #token-classification #autonlp #en #dataset-kSaluja/autonlp-data-tele_red_data_model #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Entity Extraction\n- Model ID: 585716433\n- CO2 Emissions (in grams): 2.379476355147211",
"## Validation Metrics\n\n- Loss: 0.15210922062397003\n- Accuracy: 0.9724770642201835\n- Precision: 0.950836820083682\n- Recall: 0.9625838333921638\n- F1: 0.9566742676723382",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
text-generation
|
transformers
|
#wanda bot go reeeeeeeeeeeeeeeeeeeeee
|
{"tags": ["conversational"]}
|
kaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaot1k/DialoGPT-small-Wanda
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#wanda bot go reeeeeeeeeeeeeeeeeeeeee
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
fill-mask
|
transformers
|
# Reference extraction in patents
This repository contains a finetuned BERT model that can extract references to scientific literature from patents.
See https://github.com/kaesve/patent-citation-extraction and https://arxiv.org/abs/2101.01039 for more information.
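A minimal loading sketch (assumptions: the hub checkpoint provides the pretrained encoder and tokenizer, and the reference-extraction head described in the paper may need to be attached and fine-tuned separately; the example sentence is invented):
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("kaesve/BERT_patent_reference_extraction")
# Loading with a token-classification head adds a freshly initialized classifier
# on top of the released encoder; it still needs task-specific fine-tuning.
model = AutoModelForTokenClassification.from_pretrained("kaesve/BERT_patent_reference_extraction")

text = "As disclosed in Smith et al., Journal of Examples 12 (2014) 345, the compound binds selectively."
with torch.no_grad():
    logits = model(**tokenizer(text, return_tensors="pt")).logits
print(logits.shape)  # (batch, num_tokens, num_labels)
```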
|
{}
|
kaesve/BERT_patent_reference_extraction
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"arxiv:2101.01039",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.01039"
] |
[] |
TAGS
#transformers #pytorch #jax #bert #fill-mask #arxiv-2101.01039 #autotrain_compatible #endpoints_compatible #region-us
|
# Reference extraction in patents
This repository contains a finetuned BERT model that can extract references to scientific literature from patents.
See URL and URL for more information.
|
[
"# Reference extraction in patents\r\n\r\nThis repository contains a finetuned BERT model that can extract references to scientific literature from patents.\r\n\r\nSee URL and URL for more information."
] |
[
"TAGS\n#transformers #pytorch #jax #bert #fill-mask #arxiv-2101.01039 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Reference extraction in patents\r\n\r\nThis repository contains a finetuned BERT model that can extract references to scientific literature from patents.\r\n\r\nSee URL and URL for more information."
] |
fill-mask
|
transformers
|
# Reference extraction in patents
This repository contains a finetuned BioBERT model that can extract references to scientific literature from patents.
See https://github.com/kaesve/patent-citation-extraction and https://arxiv.org/abs/2101.01039 for more information.
|
{}
|
kaesve/BioBERT_patent_reference_extraction
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"arxiv:2101.01039",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.01039"
] |
[] |
TAGS
#transformers #pytorch #jax #bert #fill-mask #arxiv-2101.01039 #autotrain_compatible #endpoints_compatible #region-us
|
# Reference extraction in patents
This repository contains a finetuned BioBERT model that can extract references to scientific literature from patents.
See URL and URL for more information.
|
[
"# Reference extraction in patents\r\n\r\nThis repository contains a finetuned BioBERT model that can extract references to scientific literature from patents.\r\n\r\nSee URL and URL for more information."
] |
[
"TAGS\n#transformers #pytorch #jax #bert #fill-mask #arxiv-2101.01039 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Reference extraction in patents\r\n\r\nThis repository contains a finetuned BioBERT model that can extract references to scientific literature from patents.\r\n\r\nSee URL and URL for more information."
] |
null |
transformers
|
# Reference extraction in patents
This repository contains a finetuned SciBERT model that can extract references to scientific literature from patents.
See https://github.com/kaesve/patent-citation-extraction and https://arxiv.org/abs/2101.01039 for more information.
|
{}
|
kaesve/SciBERT_patent_reference_extraction
| null |
[
"transformers",
"pytorch",
"arxiv:2101.01039",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2101.01039"
] |
[] |
TAGS
#transformers #pytorch #arxiv-2101.01039 #endpoints_compatible #region-us
|
# Reference extraction in patents
This repository contains a finetuned SciBERT model that can extract references to scientific literature from patents.
See URL and URL for more information.
|
[
"# Reference extraction in patents\r\n\r\nThis repository contains a finetuned SciBERT model that can extract references to scientific literature from patents.\r\n\r\nSee URL and URL for more information."
] |
[
"TAGS\n#transformers #pytorch #arxiv-2101.01039 #endpoints_compatible #region-us \n",
"# Reference extraction in patents\r\n\r\nThis repository contains a finetuned SciBERT model that can extract references to scientific literature from patents.\r\n\r\nSee URL and URL for more information."
] |
text-generation
|
transformers
|
# Radion DialoGPT Model
|
{"tags": ["conversational"]}
|
kagennotsuki/DialoGPT-medium-radion
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Radion DialoGPT Model
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1639
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2291 | 1.0 | 5533 | 1.1581 |
| 0.9553 | 2.0 | 11066 | 1.1249 |
| 0.7767 | 3.0 | 16599 | 1.1639 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
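Not part of the auto-generated card: a minimal question-answering sketch using this checkpoint (the question/context pair is illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="kaggleodin/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the squad dataset.",
)
print(result["answer"], round(result["score"], 3))
```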
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]}
|
kaggleodin/distilbert-base-uncased-finetuned-squad
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-squad
=======================================
This model is a fine-tuned version of distilbert-base-uncased on the squad dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1639
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu111
* Datasets 1.15.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
Welcome! This is the model built for sentiment analysis of STEM course reviews at UCLA.
- Author: Kaixin Wang
- Email: kaixinwang@g.ucla.edu
- Time Updated: March 2022
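A minimal usage sketch (assumptions: the checkpoint is a TensorFlow DistilBERT text-classification model, as the repository tags suggest; the review text and label interpretation are illustrative):
```python
from transformers import pipeline

# framework="tf" because only TensorFlow weights are advertised for this repository.
sentiment = pipeline("text-classification", model="kaixinwang/NLP", framework="tf")
print(sentiment("The lectures were clear and the problem sets were actually fun."))
```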
|
{"language": ["Python"], "tags": ["sentiment analysis", "STEM", "text classification"], "thumbnail": "url to a thumbnail used in social sharing"}
|
kaixinwang/NLP
| null |
[
"transformers",
"tf",
"distilbert",
"text-classification",
"sentiment analysis",
"STEM",
"text classification",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"Python"
] |
TAGS
#transformers #tf #distilbert #text-classification #sentiment analysis #STEM #text classification #autotrain_compatible #endpoints_compatible #has_space #region-us
|
Welcome! This is the model built for sentiment analysis of STEM course reviews at UCLA.
- Author: Kaixin Wang
- Email: kaixinwang@g.URL
- Time Updated: March 2022
|
[] |
[
"TAGS\n#transformers #tf #distilbert #text-classification #sentiment analysis #STEM #text classification #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
null | null |
# KakaoBrain project KoGPT
KakaoBrain's Pre-Trained Language Models.
* KakaoBrain project KoGPT (Korean Generative Pre-trained Transformer)
* [https://github.com/kakaobrain/kogpt](https://github.com/kakaobrain/kogpt)
* [https://huggingface.co/kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt)
## Model Descriptions
### KoGPT6B-ryan1.5b
* [\[huggingface\]\[kakaobrain/kogpt\]\[KoGPT6B-ryan1.5b\]](https://huggingface.co/kakaobrain/kogpt/tree/KoGPT6B-ryan1.5b)
* [\[huggingface\]\[kakaobrain/kogpt\]\[KoGPT6B-ryan1.5b-float16\]](https://huggingface.co/kakaobrain/kogpt/tree/KoGPT6B-ryan1.5b-float16)
| Hyperparameter | Value |
|:---------------------|--------------:|
| \\(n_{parameters}\\) | 6,166,502,400 |
| \\(n_{layers}\\) | 28 |
| \\(d_{model}\\) | 4,096 |
| \\(d_{ff}\\) | 16,384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2,048 |
| \\(n_{vocab}\\) | 64,512 |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | 64 |
## Hardware requirements
### KoGPT6B-ryan1.5b
#### GPU
The following is the recommended minimum GPU hardware guidance for a handful of example KoGPT.
* `32GB GPU RAM` in the required minimum memory size
### KoGPT6B-ryan1.5b-float16
#### GPU
The following is the recommended minimum GPU hardware guidance for a handful of example KoGPT.
* half-precision requires NVIDIA GPUS based on Volta, Turing or Ampere
* `16GB GPU RAM` in the required minimum memory size
## Usage
### prompt
```bash
python -m kogpt --help
usage: KoGPT inference [-h] [--model MODEL] [--revision {KoGPT6B-ryan1.5b}]
[--device {cpu,cuda}] [-d]
KakaoBrain Korean(hangul) Generative Pre-Training Model
optional arguments:
-h, --help show this help message and exit
--model MODEL huggingface repo (default:kakaobrain/kogpt)
--revision {KoGPT6B-ryan1.5b}
--device {cpu,cuda} (default:cuda)
-d, --debug
```
```bash
python -m kogpt
prompt> 인간처럼 생각하고, 행동하는 '지능'을 통해 인류가 이제까지 풀지 못했던
temperature(0.8)>
max_length(128)> 64
인간처럼 생각하고, 행동하는 '지능'을 통해 인류가 이제까지 풀지 못했던 문제의 해답을 찾을 수 있을 것이다. 과학기술이 고도로 발달한 21세기를 살아갈 우리 아이들에게 가장 필요한 것은 사고력 훈련이다. 사고력 훈련을 통해, 세상
prompt>
...
```
### python
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained(
'kakaobrain/kogpt', revision='KoGPT6B-ryan1.5b-float16', # or float32 version: revision=KoGPT6B-ryan1.5b
bos_token='[BOS]', eos_token='[EOS]', unk_token='[UNK]', pad_token='[PAD]', mask_token='[MASK]'
)
model = AutoModelForCausalLM.from_pretrained(
'kakaobrain/kogpt', revision='KoGPT6B-ryan1.5b-float16', # or float32 version: revision=KoGPT6B-ryan1.5b
pad_token_id=tokenizer.eos_token_id,
torch_dtype='auto', low_cpu_mem_usage=True
).to(device='cuda', non_blocking=True)
_ = model.eval()
prompt = '인간처럼 생각하고, 행동하는 \'지능\'을 통해 인류가 이제까지 풀지 못했던'
with torch.no_grad():
tokens = tokenizer.encode(prompt, return_tensors='pt').to(device='cuda', non_blocking=True)
gen_tokens = model.generate(tokens, do_sample=True, temperature=0.8, max_length=64)
generated = tokenizer.batch_decode(gen_tokens)[0]
print(generated) # print: 인간처럼 생각하고, 행동하는 '지능'을 통해 인류가 이제까지 풀지 못했던 문제의 해답을 찾을 수 있을 것이다. 과학기술이 고도로 발달한 21세기를 살아갈 우리 아이들에게 가장 필요한 것은 사고력 훈련이다. 사고력 훈련을 통해, 세상
```
## Experiments
### In-context Few-Shots
| Models | #params | NSMC (Acc.) | YNAT (F1) | KLUE-STS (F1) |
|:--------------|--------:|------------:|----------:|--------------:|
| HyperCLOVA[1] | 1.3B | 83.9 | 58.7 | 60.9 |
| HyperCLOVA[1] | 6.9B | 83.8 | 67.5 | 59.3 |
| HyperCLOVA[1] | 13.0B | 87.9 | 67.9 | 60.0 |
| HyperCLOVA[1] | 39.0B | 88.0 | 71.4 | 61.6 |
| HyperCLOVA[1] | 82.0B | **88.2** | 72.7 | **65.1** |
| **Ours** | 6.0B | 87.8 | **78.0** | 64.3 |
### Finetuning / P-Tuning
Issues with our downstream evaluation have been reported (https://github.com/kakaobrain/kogpt/issues/17).
The previously published performance evaluation table has been removed: it could not be considered a fair comparison, because the comparison target algorithms differed and the performance measurement method could not be confirmed.
You can refer to the above issue link for the existing performance evaluation table and troubleshooting results.
## Limitations
KakaoBrain `KoGPT` was trained on `ryan dataset`, a dataset known to contain profanity, lewd, politically charged, and other harsh language.
Therefore, `KoGPT` can generate socially unacceptable texts. As with all language models, it is difficult to predict in advance how `KoGPT` will respond to particular prompts, and it may produce offensive content without warning.
Primarily Korean: `KoGPT` is primarily trained on Korean texts, and is best for classifying, searching, summarizing or generating such texts.
`KoGPT` by default performs worse on inputs that are different from the data distribution it is trained on, including non-Korean text as well as specific dialects of Korean that are not well represented in the training data.
[comment]: <> (If abnormal or socially unacceptable text is generated during testing, please send a "prompt" and the "generated text" to [kogpt-report@kakaobrain.com](mailto:kogpt-report@kakaobrain.com). )
카카오브레인 `KoGPT`는 욕설, 음란, 정치적 내용 및 기타 거친 언어에 대한 처리를 하지 않은 `ryan dataset`으로 학습하였습니다.
따라서 `KoGPT`는 사회적으로 용인되지 않은 텍스트를 생성할 수 있습니다. 다른 언어 모델과 마찬가지로 특정 프롬프트와 공격적인 콘텐츠에 어떠한 결과를 생성할지 사전에 파악하기 어렵습니다.
`KoGPT`는 주로 한국어 텍스트로 학습을 하였으며 이러한 텍스트를 분류, 검색, 요약 또는 생성하는데 가장 적합합니다.
기본적으로 `KoGPT`는 학습 데이터에 잘 나타나지 않는 방언뿐만아니라 한국어가 아닌 경우와 같이 학습 데이터에서 발견하기 어려운 입력에서 좋지 않은 성능을 보입니다.
[comment]: <> (테스트중에 발생한 비정상적인 혹은 사회적으로 용인되지 않는 텍스트가 생성된 경우 [kogpt-report@kakaobrain.com](mailto:kogpt-report@kakaobrain.com)로 "prompt"와 "생성된 문장"을 함께 보내주시기 바랍니다.)
## Citation
If you apply this library or model to any project and research, please cite our code:
```
@misc{kakaobrain2021kogpt,
title = {KoGPT: KakaoBrain Korean(hangul) Generative Pre-trained Transformer},
author = {Ildoo Kim and Gunsoo Han and Jiyeon Ham and Woonhyuk Baek},
year = {2021},
howpublished = {\url{https://github.com/kakaobrain/kogpt}},
}
```
## Contact
This is released as open source in the hope that it will be helpful to many research institutes and startups for research purposes. We look forward to hearing from anyone who wishes to cooperate with us.
[contact@kakaobrain.com](mailto:contact@kakaobrain.com)
## License
The `source code` of KakaoBrain `KoGPT` is licensed under the [Apache 2.0](LICENSE.apache-2.0) License.
The `pretrained weights` of KakaoBrain `KoGPT` are licensed under the [CC-BY-NC-ND 4.0 License](https://creativecommons.org/licenses/by-nc-nd/4.0/).
카카오브레인 `KoGPT`의 `소스코드(source code)`는 [Apache 2.0](LICENSE.apache-2.0) 라이선스 하에 공개되어 있습니다.
카카오브레인 `KoGPT`의 `사전학습된 가중치(pretrained weights)`는 [CC-BY-NC-ND 4.0 라이선스](https://creativecommons.org/licenses/by-nc-nd/4.0/) 라이선스 하에 공개되어 있습니다.
모델 및 코드, 사전학습된 가중치를 사용할 경우 라이선스 내용을 준수해 주십시오. 라이선스 전문은 [Apache 2.0](LICENSE.apache-2.0), [LICENSE.cc-by-nc-nd-4.0](LICENSE.cc-by-nc-nd-4.0) 파일에서 확인하실 수 있습니다.
## References
[1] [HyperCLOVA](https://arxiv.org/abs/2109.04650): Kim, Boseop, et al. "What changes can large-scale language models bring? intensive study on hyperclova: Billions-scale korean generative pretrained transformers." arXiv preprint arXiv:2109.04650 (2021).
|
{"language": "ko", "license": "cc-by-nc-nd-4.0", "tags": ["KakaoBrain", "KoGPT", "GPT", "GPT3"]}
|
kakaobrain/kogpt
| null |
[
"KakaoBrain",
"KoGPT",
"GPT",
"GPT3",
"ko",
"arxiv:2104.09864",
"arxiv:2109.04650",
"license:cc-by-nc-nd-4.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.09864",
"2109.04650"
] |
[
"ko"
] |
TAGS
#KakaoBrain #KoGPT #GPT #GPT3 #ko #arxiv-2104.09864 #arxiv-2109.04650 #license-cc-by-nc-nd-4.0 #has_space #region-us
|
KakaoBrain project KoGPT
========================
KakaoBrain's Pre-Trained Language Models.
* KakaoBrain project KoGPT (Korean Generative Pre-trained Transformer)
+ URL
+ URL
Model Descriptions
------------------
### KoGPT6B-ryan1.5b
* [[huggingface][kakaobrain/kogpt][KoGPT6B-ryan1.5b]](URL
* [[huggingface][kakaobrain/kogpt][KoGPT6B-ryan1.5b-float16]](URL
Hardware requirements
---------------------
### KoGPT6B-ryan1.5b
#### GPU
The following is the recommended minimum GPU hardware guidance for a handful of example KoGPT.
* '32GB GPU RAM' in the required minimum memory size
### KoGPT6B-ryan1.5b-float16
#### GPU
The following is the recommended minimum GPU hardware guidance for a handful of example KoGPT.
* half-precision requires NVIDIA GPUS based on Volta, Turing or Ampere
* '16GB GPU RAM' in the required minimum memory size
Usage
-----
### prompt
### python
Experiments
-----------
### In-context Few-Shots
### Finetuning / P-Tuning
Issues with our downstream evaluation have been reported (URL).
The previously published performance evaluation table has been removed: it could not be considered a fair comparison, because the comparison target algorithms differed and the performance measurement method could not be confirmed.
You can refer to the above issue link for the existing performance evaluation table and troubleshooting results.
Limitations
-----------
KakaoBrain 'KoGPT' was trained on 'ryan dataset', a dataset known to contain profanity, lewd, politically charged, and other harsh language.
Therefore, 'KoGPT' can generate socially unacceptable texts. As with all language models, it is difficult to predict in advance how 'KoGPT' will respond to particular prompts, and it may produce offensive content without warning.
Primarily Korean: 'KoGPT' is primarily trained on Korean texts, and is best for classifying, searching, summarizing or generating such texts.
'KoGPT' by default performs worse on inputs that are different from the data distribution it is trained on, including non-Korean text as well as specific dialects of Korean that are not well represented in the training data.
카카오브레인 'KoGPT'는 욕설, 음란, 정치적 내용 및 기타 거친 언어에 대한 처리를 하지 않은 'ryan dataset'으로 학습하였습니다.
따라서 'KoGPT'는 사회적으로 용인되지 않은 텍스트를 생성할 수 있습니다. 다른 언어 모델과 마찬가지로 특정 프롬프트와 공격적인 콘텐츠에 어떠한 결과를 생성할지 사전에 파악하기 어렵습니다.
'KoGPT'는 주로 한국어 텍스트로 학습을 하였으며 이러한 텍스트를 분류, 검색, 요약 또는 생성하는데 가장 적합합니다.
기본적으로 'KoGPT'는 학습 데이터에 잘 나타나지 않는 방언뿐만아니라 한국어가 아닌 경우와 같이 학습 데이터에서 발견하기 어려운 입력에서 좋지 않은 성능을 보입니다.
If you apply this library or model to any project and research, please cite our code:
Contact
-------
This is released as open source in the hope that it will be helpful to many research institutes and startups for research purposes. We look forward to hearing from anyone who wishes to cooperate with us.
contact@URL
License
-------
The 'source code' of KakaoBrain 'KoGPT' is licensed under the Apache 2.0 License.
The 'pretrained weights' of KakaoBrain 'KoGPT' are licensed under the CC-BY-NC-ND 4.0 License.
카카오브레인 'KoGPT'의 '소스코드(source code)'는 Apache 2.0 라이선스 하에 공개되어 있습니다.
카카오브레인 'KoGPT'의 '사전학습된 가중치(pretrained weights)'는 CC-BY-NC-ND 4.0 라이선스 라이선스 하에 공개되어 있습니다.
모델 및 코드, 사전학습된 가중치를 사용할 경우 라이선스 내용을 준수해 주십시오. 라이선스 전문은 Apache 2.0, URL-by-nc-nd-4.0 파일에서 확인하실 수 있습니다.
References
----------
[1] HyperCLOVA: Kim, Boseop, et al. "What changes can large-scale language models bring? intensive study on hyperclova: Billions-scale korean generative pretrained transformers." arXiv preprint arXiv:2109.04650 (2021).
|
[
"### KoGPT6B-ryan1.5b\n\n\n* [[huggingface][kakaobrain/kogpt][KoGPT6B-ryan1.5b]](URL\n* [[huggingface][kakaobrain/kogpt][KoGPT6B-ryan1.5b-float16]](URL\n\n\n\nHardware requirements\n---------------------",
"### KoGPT6B-ryan1.5b",
"#### GPU\n\n\nThe following is the recommended minimum GPU hardware guidance for a handful of example KoGPT.\n\n\n* '32GB GPU RAM' in the required minimum memory size",
"### KoGPT6B-ryan1.5b-float16",
"#### GPU\n\n\nThe following is the recommended minimum GPU hardware guidance for a handful of example KoGPT.\n\n\n* half-precision requires NVIDIA GPUS based on Volta, Turing or Ampere\n* '16GB GPU RAM' in the required minimum memory size\n\n\nUsage\n-----",
"### prompt",
"### python\n\n\nExperiments\n-----------",
"### In-context Few-Shots",
"### Finetuning / P-Tuning\n\n\nWe have been reported to have issues(URL with our downstream evaluation.\n\n\nThe previously published performance evaluation table was deleted because it was difficult to see it as a fair comparison because the comparison target algorithm was different and the performance measurement method could not be confirmed.\n\n\nYou can refer to the above issue link for the existing performance evaluation table and troubleshooting results.\n\n\nLimitations\n-----------\n\n\nKakaoBrain 'KoGPT' was trained on 'ryan dataset', a dataset known to contain profanity, lewd, political changed, and other harsh language.\nTherefore, 'KoGPT' can generate socially unacceptable texts. As with all language models, It is difficult to predict in advance how 'KoGPT' will response to particular prompts and offensive content without warning.\n\n\nPrimarily Korean: 'KoGPT' is primarily trained on Korean texts, and is best for classifying, searching, summarizing or generating such texts.\n'KoGPT' by default perform worse on inputs that are different from the data distribution it is trained on, including non-Korean as well as specific dialects of Korean that are not well represented in the training data.\n\n\n카카오브레인 'KoGPT'는 욕설, 음란, 정치적 내용 및 기타 거친 언어에 대한 처리를 하지 않은 'ryan dataset'으로 학습하였습니다.\n따라서 'KoGPT'는 사회적으로 용인되지 않은 텍스트를 생성할 수 있습니다. 다른 언어 모델과 마찬가지로 특정 프롬프트와 공격적인 콘텐츠에 어떠한 결과를 생성할지 사전에 파악하기 어렵습니다.\n\n\n'KoGPT'는 주로 한국어 텍스트로 학습을 하였으며 이러한 텍스트를 분류, 검색, 요약 또는 생성하는데 가장 적합합니다.\n기본적으로 'KoGPT'는 학습 데이터에 잘 나타나지 않는 방언뿐만아니라 한국어가 아닌 경우와 같이 학습 데이터에서 발견하기 어려운 입력에서 좋지 않은 성능을 보입니다.\n\n\nIf you apply this library or model to any project and research, please cite our code:\n\n\nContact\n-------\n\n\nThis is released as an open source in the hope that it will be helpful to many research institutes and startups for research purposes. We look forward to contacting us from various places who wish to cooperate with us.\n\n\ncontact@URL\n\n\nLicense\n-------\n\n\nThe 'source code' of KakaoBrain 'KoGPT' are licensed under Apache 2.0 License. \n\nThe 'pretrained wieghts' of KakaoBrain 'KoGPT' are licensed under CC-BY-NC-ND 4.0 License License.\n\n\n카카오브레인 'KoGPT'의 '소스코드(source code)'는 Apache 2.0 라이선스 하에 공개되어 있습니다. \n\n카카오브레인 'KoGPT'의 '사전학습된 가중치(pretrained weights)'는 CC-BY-NC-ND 4.0 라이선스 라이선스 하에 공개되어 있습니다. \n\n모델 및 코드, 사전학습된 가중치를 사용할 경우 라이선스 내용을 준수해 주십시오. 라이선스 전문은 Apache 2.0, URL-by-nc-nd-4.0 파일에서 확인하실 수 있습니다.\n\n\nReferences\n----------\n\n\n[1] HyperCLOVA: Kim, Boseop, et al. \"What changes can large-scale language models bring? intensive study on hyperclova: Billions-scale korean generative pretrained transformers.\" arXiv preprint arXiv:2109.04650 (2021)."
] |
[
"TAGS\n#KakaoBrain #KoGPT #GPT #GPT3 #ko #arxiv-2104.09864 #arxiv-2109.04650 #license-cc-by-nc-nd-4.0 #has_space #region-us \n",
"### KoGPT6B-ryan1.5b\n\n\n* [[huggingface][kakaobrain/kogpt][KoGPT6B-ryan1.5b]](URL\n* [[huggingface][kakaobrain/kogpt][KoGPT6B-ryan1.5b-float16]](URL\n\n\n\nHardware requirements\n---------------------",
"### KoGPT6B-ryan1.5b",
"#### GPU\n\n\nThe following is the recommended minimum GPU hardware guidance for a handful of example KoGPT.\n\n\n* '32GB GPU RAM' in the required minimum memory size",
"### KoGPT6B-ryan1.5b-float16",
"#### GPU\n\n\nThe following is the recommended minimum GPU hardware guidance for a handful of example KoGPT.\n\n\n* half-precision requires NVIDIA GPUS based on Volta, Turing or Ampere\n* '16GB GPU RAM' in the required minimum memory size\n\n\nUsage\n-----",
"### prompt",
"### python\n\n\nExperiments\n-----------",
"### In-context Few-Shots",
"### Finetuning / P-Tuning\n\n\nWe have been reported to have issues(URL with our downstream evaluation.\n\n\nThe previously published performance evaluation table was deleted because it was difficult to see it as a fair comparison because the comparison target algorithm was different and the performance measurement method could not be confirmed.\n\n\nYou can refer to the above issue link for the existing performance evaluation table and troubleshooting results.\n\n\nLimitations\n-----------\n\n\nKakaoBrain 'KoGPT' was trained on 'ryan dataset', a dataset known to contain profanity, lewd, political changed, and other harsh language.\nTherefore, 'KoGPT' can generate socially unacceptable texts. As with all language models, It is difficult to predict in advance how 'KoGPT' will response to particular prompts and offensive content without warning.\n\n\nPrimarily Korean: 'KoGPT' is primarily trained on Korean texts, and is best for classifying, searching, summarizing or generating such texts.\n'KoGPT' by default perform worse on inputs that are different from the data distribution it is trained on, including non-Korean as well as specific dialects of Korean that are not well represented in the training data.\n\n\n카카오브레인 'KoGPT'는 욕설, 음란, 정치적 내용 및 기타 거친 언어에 대한 처리를 하지 않은 'ryan dataset'으로 학습하였습니다.\n따라서 'KoGPT'는 사회적으로 용인되지 않은 텍스트를 생성할 수 있습니다. 다른 언어 모델과 마찬가지로 특정 프롬프트와 공격적인 콘텐츠에 어떠한 결과를 생성할지 사전에 파악하기 어렵습니다.\n\n\n'KoGPT'는 주로 한국어 텍스트로 학습을 하였으며 이러한 텍스트를 분류, 검색, 요약 또는 생성하는데 가장 적합합니다.\n기본적으로 'KoGPT'는 학습 데이터에 잘 나타나지 않는 방언뿐만아니라 한국어가 아닌 경우와 같이 학습 데이터에서 발견하기 어려운 입력에서 좋지 않은 성능을 보입니다.\n\n\nIf you apply this library or model to any project and research, please cite our code:\n\n\nContact\n-------\n\n\nThis is released as an open source in the hope that it will be helpful to many research institutes and startups for research purposes. We look forward to contacting us from various places who wish to cooperate with us.\n\n\ncontact@URL\n\n\nLicense\n-------\n\n\nThe 'source code' of KakaoBrain 'KoGPT' are licensed under Apache 2.0 License. \n\nThe 'pretrained wieghts' of KakaoBrain 'KoGPT' are licensed under CC-BY-NC-ND 4.0 License License.\n\n\n카카오브레인 'KoGPT'의 '소스코드(source code)'는 Apache 2.0 라이선스 하에 공개되어 있습니다. \n\n카카오브레인 'KoGPT'의 '사전학습된 가중치(pretrained weights)'는 CC-BY-NC-ND 4.0 라이선스 라이선스 하에 공개되어 있습니다. \n\n모델 및 코드, 사전학습된 가중치를 사용할 경우 라이선스 내용을 준수해 주십시오. 라이선스 전문은 Apache 2.0, URL-by-nc-nd-4.0 파일에서 확인하실 수 있습니다.\n\n\nReferences\n----------\n\n\n[1] HyperCLOVA: Kim, Boseop, et al. \"What changes can large-scale language models bring? intensive study on hyperclova: Billions-scale korean generative pretrained transformers.\" arXiv preprint arXiv:2109.04650 (2021)."
] |
token-classification
|
transformers
|
BioELECTRA-PICO
Cite our paper using the citation below:
```
@inproceedings{kanakarajan-etal-2021-bioelectra,
title = "{B}io{ELECTRA}:Pretrained Biomedical text Encoder using Discriminators",
author = "Kanakarajan, Kamal raj and
Kundumani, Bhuvana and
Sankarasubbu, Malaikannan",
booktitle = "Proceedings of the 20th Workshop on Biomedical Language Processing",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.bionlp-1.16",
doi = "10.18653/v1/2021.bionlp-1.16",
pages = "143--154",
abstract = "Recent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. We apply {`}replaced token detection{'} pretraining technique proposed by ELECTRA and pretrain a biomedical language model from scratch using biomedical text and vocabulary. We introduce BioELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA for the Biomedical domain. WE evaluate our model on the BLURB and BLUE biomedical NLP benchmarks. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 different NLP tasks. BioELECTRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34{\%}(1.39{\%} accuracy improvement) on MedNLI and 64{\%} (2.98{\%} accuracy improvement) on PubMedQA dataset.",
}
```
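A minimal tagging sketch (assuming the checkpoint is a standard token-classification model whose PICO labels are defined in its config; the example sentence mirrors the widget example in this card's metadata):
```python
from transformers import pipeline

pico_tagger = pipeline(
    "token-classification",
    model="kamalkraj/BioELECTRA-PICO",
    aggregation_strategy="simple",  # merge word pieces into labelled spans
)
sentence = ("Those in the aspirin group experienced reduced duration of headache "
            "compared to those in the placebo arm (P<0.05)")
print(pico_tagger(sentence))
```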
|
{"widget": [{"text": "Those in the aspirin group experienced reduced duration of headache compared to those in the placebo arm (P<0.05)"}]}
|
kamalkraj/BioELECTRA-PICO
| null |
[
"transformers",
"pytorch",
"safetensors",
"electra",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #electra #token-classification #autotrain_compatible #endpoints_compatible #has_space #region-us
|
BioELECTRA-PICO
Cite our paper using the citation below.
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #electra #token-classification #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
null |
transformers
|
## BioELECTRA:Pretrained Biomedical text Encoder using Discriminators
Recent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. In this paper, we introduce BioELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA (Clark et al., 2020) for the Biomedical domain. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 NLP tasks. BioELECTRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34%(1.39% accuracy improvement) on MedNLI and 64% (2.98% accuracy improvement) on PubMedQA dataset.
For a detailed description and experimental results, please refer to our paper [BioELECTRA:Pretrained Biomedical text Encoder using Discriminators](https://www.aclweb.org/anthology/2021.bionlp-1.16/).
## How to use the discriminator in `transformers`
```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch
discriminator = ElectraForPreTraining.from_pretrained("kamalkraj/bioelectra-base-discriminator-pubmed")
tokenizer = ElectraTokenizerFast.from_pretrained("kamalkraj/bioelectra-base-discriminator-pubmed")
sentence = "The quick brown fox jumps over the lazy dog"
fake_sentence = "The quick brown fox fake over the lazy dog"
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
[print("%7s" % token, end="") for token in fake_tokens]
[print("%7s" % int(prediction), end="") for prediction in predictions[0].tolist()]
```
|
{}
|
kamalkraj/bioelectra-base-discriminator-pubmed-pmc-lt
| null |
[
"transformers",
"pytorch",
"electra",
"pretraining",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #electra #pretraining #endpoints_compatible #region-us
|
## BioELECTRA:Pretrained Biomedical text Encoder using Discriminators
Recent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. In this paper, we introduce BioELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA (Clark et al., 2020) for the Biomedical domain. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 NLP tasks. BioELECTRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34%(1.39% accuracy improvement) on MedNLI and 64% (2.98% accuracy improvement) on PubMedQA dataset.
For a detailed description and experimental results, please refer to our paper BioELECTRA:Pretrained Biomedical text Encoder using Discriminators.
## How to use the discriminator in 'transformers'
|
[
"## BioELECTRA:Pretrained Biomedical text Encoder using Discriminators\n\nRecent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. In this paper, we introduce BioELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA (Clark et al., 2020) for the Biomedical domain. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 NLP tasks. BioELECTRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34%(1.39% accuracy improvement) on MedNLI and 64% (2.98% accuracy improvement) on PubMedQA dataset.\n\nFor a detailed description and experimental results, please refer to our paper BioELECTRA:Pretrained Biomedical text Encoder using Discriminators.",
"## How to use the discriminator in 'transformers'"
] |
[
"TAGS\n#transformers #pytorch #electra #pretraining #endpoints_compatible #region-us \n",
"## BioELECTRA:Pretrained Biomedical text Encoder using Discriminators\n\nRecent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. In this paper, we introduce BioELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA (Clark et al., 2020) for the Biomedical domain. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 NLP tasks. BioELECTRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34%(1.39% accuracy improvement) on MedNLI and 64% (2.98% accuracy improvement) on PubMedQA dataset.\n\nFor a detailed description and experimental results, please refer to our paper BioELECTRA:Pretrained Biomedical text Encoder using Discriminators.",
"## How to use the discriminator in 'transformers'"
] |
null |
transformers
|
## BioELECTRA:Pretrained Biomedical text Encoder using Discriminators
Recent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. In this paper, we introduce BioELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA (Clark et al., 2020) for the Biomedical domain. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 NLP tasks. BioELECTRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34%(1.39% accuracy improvement) on MedNLI and 64% (2.98% accuracy improvement) on PubMedQA dataset.
For a detailed description and experimental results, please refer to our paper [BioELECTRA:Pretrained Biomedical text Encoder using Discriminators](https://www.aclweb.org/anthology/2021.bionlp-1.16/).
## How to use the discriminator in `transformers`
```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch
discriminator = ElectraForPreTraining.from_pretrained("kamalkraj/bioelectra-base-discriminator-pubmed")
tokenizer = ElectraTokenizerFast.from_pretrained("kamalkraj/bioelectra-base-discriminator-pubmed")
sentence = "The quick brown fox jumps over the lazy dog"
fake_sentence = "The quick brown fox fake over the lazy dog"
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
[print("%7s" % token, end="") for token in fake_tokens]
[print("%7s" % int(prediction), end="") for prediction in predictions[0].tolist()]
```
|
{}
|
kamalkraj/bioelectra-base-discriminator-pubmed-pmc
| null |
[
"transformers",
"pytorch",
"electra",
"pretraining",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #electra #pretraining #endpoints_compatible #region-us
|
## BioELECTRA:Pretrained Biomedical text Encoder using Discriminators
Recent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. In this paper, we introduce BioELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA (Clark et al., 2020) for the Biomedical domain. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 NLP tasks. BioELECTRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34%(1.39% accuracy improvement) on MedNLI and 64% (2.98% accuracy improvement) on PubMedQA dataset.
For a detailed description and experimental results, please refer to our paper BioELECTRA:Pretrained Biomedical text Encoder using Discriminators.
## How to use the discriminator in 'transformers'
|
[
"## BioELECTRA:Pretrained Biomedical text Encoder using Discriminators\n\nRecent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. In this paper, we introduce BioELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA (Clark et al., 2020) for the Biomedical domain. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 NLP tasks. BioELECTRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34%(1.39% accuracy improvement) on MedNLI and 64% (2.98% accuracy improvement) on PubMedQA dataset.\n\nFor a detailed description and experimental results, please refer to our paper BioELECTRA:Pretrained Biomedical text Encoder using Discriminators.",
"## How to use the discriminator in 'transformers'"
] |
[
"TAGS\n#transformers #pytorch #electra #pretraining #endpoints_compatible #region-us \n",
"## BioELECTRA:Pretrained Biomedical text Encoder using Discriminators\n\nRecent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. In this paper, we introduce BioELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA (Clark et al., 2020) for the Biomedical domain. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 NLP tasks. BioELECTRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34%(1.39% accuracy improvement) on MedNLI and 64% (2.98% accuracy improvement) on PubMedQA dataset.\n\nFor a detailed description and experimental results, please refer to our paper BioELECTRA:Pretrained Biomedical text Encoder using Discriminators.",
"## How to use the discriminator in 'transformers'"
] |
null |
transformers
|
## BioELECTRA:Pretrained Biomedical text Encoder using Discriminators
Recent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. In this paper, we introduce BioELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA (Clark et al., 2020) for the Biomedical domain. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 NLP tasks. BioELECTRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34%(1.39% accuracy improvement) on MedNLI and 64% (2.98% accuracy improvement) on PubMedQA dataset.
For a detailed description and experimental results, please refer to our paper [BioELECTRA:Pretrained Biomedical text Encoder using Discriminators](https://www.aclweb.org/anthology/2021.bionlp-1.16/).
Cite our paper using the citation below:
```
@inproceedings{kanakarajan-etal-2021-bioelectra,
title = "{B}io{ELECTRA}:Pretrained Biomedical text Encoder using Discriminators",
author = "Kanakarajan, Kamal raj and
Kundumani, Bhuvana and
Sankarasubbu, Malaikannan",
booktitle = "Proceedings of the 20th Workshop on Biomedical Language Processing",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.bionlp-1.16",
doi = "10.18653/v1/2021.bionlp-1.16",
pages = "143--154",
abstract = "Recent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. We apply {`}replaced token detection{'} pretraining technique proposed by ELECTRA and pretrain a biomedical language model from scratch using biomedical text and vocabulary. We introduce BioELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA for the Biomedical domain. WE evaluate our model on the BLURB and BLUE biomedical NLP benchmarks. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 different NLP tasks. BioELECTRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34{\%}(1.39{\%} accuracy improvement) on MedNLI and 64{\%} (2.98{\%} accuracy improvement) on PubMedQA dataset.",
}
```
## How to use the discriminator in `transformers`
```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch
discriminator = ElectraForPreTraining.from_pretrained("kamalkraj/bioelectra-base-discriminator-pubmed")
tokenizer = ElectraTokenizerFast.from_pretrained("kamalkraj/bioelectra-base-discriminator-pubmed")
sentence = "The quick brown fox jumps over the lazy dog"
fake_sentence = "The quick brown fox fake over the lazy dog"
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
[print("%7s" % token, end="") for token in fake_tokens]
[print("%7s" % int(prediction), end="") for prediction in predictions[0].tolist()]
```
|
{}
|
kamalkraj/bioelectra-base-discriminator-pubmed
| null |
[
"transformers",
"pytorch",
"electra",
"pretraining",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #electra #pretraining #endpoints_compatible #region-us
|
## BioELECTRA:Pretrained Biomedical text Encoder using Discriminators
Recent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. In this paper, we introduce BioELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA (Clark et al., 2020) for the Biomedical domain. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 NLP tasks. BioELECTRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34%(1.39% accuracy improvement) on MedNLI and 64% (2.98% accuracy improvement) on PubMedQA dataset.
For a detailed description and experimental results, please refer to our paper BioELECTRA:Pretrained Biomedical text Encoder using Discriminators.
Cite our paper using the citation below.
## How to use the discriminator in 'transformers'
|
[
"## BioELECTRA:Pretrained Biomedical text Encoder using Discriminators\n\nRecent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. In this paper, we introduce BioELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA (Clark et al., 2020) for the Biomedical domain. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 NLP tasks. BioELECTRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34%(1.39% accuracy improvement) on MedNLI and 64% (2.98% accuracy improvement) on PubMedQA dataset.\n\nFor a detailed description and experimental results, please refer to our paper BioELECTRA:Pretrained Biomedical text Encoder using Discriminators.\n\nCite our paper using below citation",
"## How to use the discriminator in 'transformers'"
] |
[
"TAGS\n#transformers #pytorch #electra #pretraining #endpoints_compatible #region-us \n",
"## BioELECTRA:Pretrained Biomedical text Encoder using Discriminators\n\nRecent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. In this paper, we introduce BioELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA (Clark et al., 2020) for the Biomedical domain. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 NLP tasks. BioELECTRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34%(1.39% accuracy improvement) on MedNLI and 64% (2.98% accuracy improvement) on PubMedQA dataset.\n\nFor a detailed description and experimental results, please refer to our paper BioELECTRA:Pretrained Biomedical text Encoder using Discriminators.\n\nCite our paper using below citation",
"## How to use the discriminator in 'transformers'"
] |
feature-extraction
|
transformers
|
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and enhanced mask decoder. It outperforms BERT and RoBERTa on majority of NLU tasks with 80GB training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
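As an illustration that is not part of the original card, the sketch below runs feature extraction with this checkpoint. It assumes the repository ships TensorFlow weights, since the record is tagged `tf`; use `AutoModel` instead if you load PyTorch weights.

```python
from transformers import AutoTokenizer, TFAutoModel

model_name = "kamalkraj/deberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModel.from_pretrained(model_name)

inputs = tokenizer("DeBERTa uses disentangled attention.", return_tensors="tf")
outputs = model(**inputs)

# Token-level features: (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```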
#### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and MNLI tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m |
|-------------------|-----------|-----------|--------|
| RoBERTa-base | 91.5/84.6 | 83.7/80.5 | 87.6 |
| XLNet-Large | -/- | -/80.2 | 86.8 |
| **DeBERTa-base** | 93.1/87.2 | 86.2/83.1 | 88.8 |
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
``` latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
|
{"language": "en", "license": "mit", "tags": "deberta-v1", "thumbnail": "https://huggingface.co/front/thumbnails/microsoft.png"}
|
kamalkraj/deberta-base
| null |
[
"transformers",
"tf",
"deberta",
"feature-extraction",
"deberta-v1",
"en",
"arxiv:2006.03654",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2006.03654"
] |
[
"en"
] |
TAGS
#transformers #tf #deberta #feature-extraction #deberta-v1 #en #arxiv-2006.03654 #license-mit #endpoints_compatible #region-us
|
DeBERTa: Decoding-enhanced BERT with Disentangled Attention
-----------------------------------------------------------
DeBERTa improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the official repository for more details and updates.
#### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and MNLI tasks.
If you find DeBERTa useful for your work, please cite the following paper:
|
[
"#### Fine-tuning on NLU tasks\n\n\nWe present the dev results on SQuAD 1.1/2.0 and MNLI tasks.\n\n\n\nIf you find DeBERTa useful for your work, please cite the following paper:"
] |
[
"TAGS\n#transformers #tf #deberta #feature-extraction #deberta-v1 #en #arxiv-2006.03654 #license-mit #endpoints_compatible #region-us \n",
"#### Fine-tuning on NLU tasks\n\n\nWe present the dev results on SQuAD 1.1/2.0 and MNLI tasks.\n\n\n\nIf you find DeBERTa useful for your work, please cite the following paper:"
] |
feature-extraction
|
transformers
|
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This is the DeBERTa V2 XLarge model with 24 layers and a hidden size of 1536. It has 900M parameters in total and is trained on 160GB of raw data.
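The layer count and hidden size quoted above can be checked without downloading the weights; a minimal sketch:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("kamalkraj/deberta-v2-xlarge")
print(config.num_hidden_layers, config.hidden_size)  # expected: 24 1536
```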
### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
--------
#### Notes.
- <sup>1</sup> Following RoBERTa, for RTE, MRPC, and STS-B, we fine-tune these tasks starting from [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), and [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results on SST-2/QQP/QNLI/SQuADv2 also improve slightly when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from the pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, you need to specify **--sharded_ddp**
```bash
cd transformers/examples/text-classification/
export TASK_NAME=mrpc
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \
--task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 4 \
--learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
``` latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
|
{"language": "en", "license": "mit", "tags": "deberta", "thumbnail": "https://huggingface.co/front/thumbnails/microsoft.png"}
|
kamalkraj/deberta-v2-xlarge
| null |
[
"transformers",
"tf",
"deberta-v2",
"feature-extraction",
"deberta",
"en",
"arxiv:2006.03654",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2006.03654"
] |
[
"en"
] |
TAGS
#transformers #tf #deberta-v2 #feature-extraction #deberta #en #arxiv-2006.03654 #license-mit #endpoints_compatible #region-us
|
DeBERTa: Decoding-enhanced BERT with Disentangled Attention
-----------------------------------------------------------
DeBERTa improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the official repository for more details and updates.
This is the DeBERTa V2 XLarge model with 24 layers and a hidden size of 1536. It has 900M parameters in total and is trained on 160GB of raw data.
### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
---
#### Notes.
* 1 Following RoBERTa, for RTE, MRPC, and STS-B, we fine-tune these tasks starting from DeBERTa-Large-MNLI, DeBERTa-XLarge-MNLI, DeBERTa-V2-XLarge-MNLI, and DeBERTa-V2-XXLarge-MNLI. The results on SST-2/QQP/QNLI/SQuADv2 also improve slightly when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from the pretrained base models for those 4 tasks.
* 2 To try the XXLarge model with HF transformers, you need to specify --sharded\_ddp
If you find DeBERTa useful for your work, please cite the following paper:
|
[
"### Fine-tuning on NLU tasks\n\n\nWe present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.\n\n\n\n\n\n---",
"#### Notes.\n\n\n* 1 Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on DeBERTa-Large-MNLI, DeBERTa-XLarge-MNLI, DeBERTa-V2-XLarge-MNLI, DeBERTa-V2-XXLarge-MNLI. The results of SST-2/QQP/QNLI/SQuADv2 will also be slightly improved when start from MNLI fine-tuned models, however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks.\n* 2 To try the XXLarge model with HF transformers, you need to specify --sharded\\_ddp\n\n\nIf you find DeBERTa useful for your work, please cite the following paper:"
] |
[
"TAGS\n#transformers #tf #deberta-v2 #feature-extraction #deberta #en #arxiv-2006.03654 #license-mit #endpoints_compatible #region-us \n",
"### Fine-tuning on NLU tasks\n\n\nWe present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.\n\n\n\n\n\n---",
"#### Notes.\n\n\n* 1 Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on DeBERTa-Large-MNLI, DeBERTa-XLarge-MNLI, DeBERTa-V2-XLarge-MNLI, DeBERTa-V2-XXLarge-MNLI. The results of SST-2/QQP/QNLI/SQuADv2 will also be slightly improved when start from MNLI fine-tuned models, however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks.\n* 2 To try the XXLarge model with HF transformers, you need to specify --sharded\\_ddp\n\n\nIf you find DeBERTa useful for your work, please cite the following paper:"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1042
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
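The list above maps roughly onto the following `TrainingArguments`; this is a reconstruction from the card rather than the exact training script (the Adam betas and epsilon listed are the `Trainer` defaults, so they need no explicit arguments).

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-squad",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```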
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 0.5793 |
| No log | 2.0 | 2 | 0.1730 |
| No log | 3.0 | 3 | 0.1042 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]}
|
kamilali/distilbert-base-uncased-finetuned-squad
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-squad
=======================================
This model is a fine-tuned version of bert-large-uncased-whole-word-masking-finetuned-squad on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1042
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.17.0
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.6
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 208681
## Validation Metrics
- Loss: 0.37569838762283325
- Accuracy: 0.8365019011406845
- Precision: 0.8398058252427184
- Recall: 0.9453551912568307
- AUC: 0.9048838797814208
- F1: 0.8894601542416453
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/kamivao/autonlp-cola_gram-208681
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("kamivao/autonlp-cola_gram-208681", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("kamivao/autonlp-cola_gram-208681", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
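The model returns raw logits; a short follow-up sketch (not part of the original card) turns them into a predicted class and probability. The index-to-label mapping can be read from `model.config.id2label`.

```python
import torch

# Continues from the snippet above: `outputs.logits` has shape (1, num_labels).
probs = torch.softmax(outputs.logits, dim=-1)
predicted_class = probs.argmax(dim=-1).item()
print(model.config.id2label[predicted_class], probs[0, predicted_class].item())
```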
|
{"language": "en", "tags": "autonlp", "datasets": ["kamivao/autonlp-data-cola_gram"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]}
|
kamivao/autonlp-cola_gram-208681
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:kamivao/autonlp-data-cola_gram",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #text-classification #autonlp #en #dataset-kamivao/autonlp-data-cola_gram #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 208681
## Validation Metrics
- Loss: 0.37569838762283325
- Accuracy: 0.8365019011406845
- Precision: 0.8398058252427184
- Recall: 0.9453551912568307
- AUC: 0.9048838797814208
- F1: 0.8894601542416453
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 208681",
"## Validation Metrics\n\n- Loss: 0.37569838762283325\n- Accuracy: 0.8365019011406845\n- Precision: 0.8398058252427184\n- Recall: 0.9453551912568307\n- AUC: 0.9048838797814208\n- F1: 0.8894601542416453",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-kamivao/autonlp-data-cola_gram #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 208681",
"## Validation Metrics\n\n- Loss: 0.37569838762283325\n- Accuracy: 0.8365019011406845\n- Precision: 0.8398058252427184\n- Recall: 0.9453551912568307\n- AUC: 0.9048838797814208\n- F1: 0.8894601542416453",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 5771228
## Validation Metrics
- Loss: 0.17127291858196259
- Accuracy: 0.9206671174216813
- Precision: 0.9588885738588036
- Recall: 0.9423237670660352
- AUC: 0.9720189638675828
- F1: 0.9505340078695896
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/kamivao/autonlp-entity_selection-5771228
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("kamivao/autonlp-entity_selection-5771228", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("kamivao/autonlp-entity_selection-5771228", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "en", "tags": "autonlp", "datasets": ["kamivao/autonlp-data-entity_selection"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]}
|
kamivao/autonlp-entity_selection-5771228
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:kamivao/autonlp-data-entity_selection",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #text-classification #autonlp #en #dataset-kamivao/autonlp-data-entity_selection #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 5771228
## Validation Metrics
- Loss: 0.17127291858196259
- Accuracy: 0.9206671174216813
- Precision: 0.9588885738588036
- Recall: 0.9423237670660352
- AUC: 0.9720189638675828
- F1: 0.9505340078695896
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 5771228",
"## Validation Metrics\n\n- Loss: 0.17127291858196259\n- Accuracy: 0.9206671174216813\n- Precision: 0.9588885738588036\n- Recall: 0.9423237670660352\n- AUC: 0.9720189638675828\n- F1: 0.9505340078695896",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-kamivao/autonlp-data-entity_selection #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 5771228",
"## Validation Metrics\n\n- Loss: 0.17127291858196259\n- Accuracy: 0.9206671174216813\n- Precision: 0.9588885738588036\n- Recall: 0.9423237670660352\n- AUC: 0.9720189638675828\n- F1: 0.9505340078695896",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
text-classification
|
transformers
|
learning rate: 5e-5
training epochs: 5
batch size: 8
seed: 42
model: bert-base-uncased
trained on CB, which is converted into two-way NLI classification (predicting the entailment or not-entailment class)
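A minimal inference sketch for this two-way NLI classifier (not part of the original note); which output index corresponds to entailment is an assumption to verify via `model.config.id2label` or a few labeled examples.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "kangnichaluo/cb"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "The doctor said the treatment worked."
hypothesis = "The treatment was effective."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Two scores: entailment vs. not-entailment (check which index is which).
print(logits.softmax(dim=-1))
```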
|
{}
|
kangnichaluo/cb
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
learning rate: 5e-5
training epochs: 5
batch size: 8
seed: 42
model: bert-base-uncased
trained on CB, which is converted into two-way NLI classification (predicting the entailment or not-entailment class)
|
[] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
learning rate: 2e-5
training epochs: 3
batch size: 64
seed: 42
model: bert-base-uncased
trained on MNLI, which is converted into two-way NLI classification (predicting the entailment or not-entailment class)
|
{}
|
kangnichaluo/mnli-1
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
learning rate: 2e-5
training epochs: 3
batch size: 64
seed: 42
model: bert-base-uncased
trained on MNLI, which is converted into two-way NLI classification (predicting the entailment or not-entailment class)
|
[] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
learning rate: 3e-5
training epochs: 3
batch size: 64
seed: 0
model: bert-base-uncased
trained on MNLI, which is converted into two-way NLI classification (predicting the entailment or not-entailment class)
|
{}
|
kangnichaluo/mnli-2
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
learning rate: 3e-5
training epochs: 3
batch size: 64
seed: 0
model: bert-base-uncased
trained on MNLI, which is converted into two-way NLI classification (predicting the entailment or not-entailment class)
|
[] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
learning rate: 2e-5
training epochs: 3
batch size: 64
seed: 13
model: bert-base-uncased
trained on MNLI, which is converted into two-way NLI classification (predicting the entailment or not-entailment class)
|
{}
|
kangnichaluo/mnli-3
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
learning rate: 2e-5
training epochs: 3
batch size: 64
seed: 13
model: bert-base-uncased
trained on MNLI, which is converted into two-way NLI classification (predicting the entailment or not-entailment class)
|
[] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
learning rate: 2e-5
training epochs: 3
batch size: 64
seed: 87
model: bert-base-uncased
trained on MNLI, which is converted into two-way NLI classification (predicting the entailment or not-entailment class)
|
{}
|
kangnichaluo/mnli-4
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
learning rate: 2e-5
training epochs: 3
batch size: 64
seed: 87
model: bert-base-uncased
trained on MNLI, which is converted into two-way NLI classification (predicting the entailment or not-entailment class)
|
[] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
learning rate: 2e-5
training epochs: 3
batch size: 64
seed: 111
model: bert-base-uncased
trained on MNLI, which is converted into two-way NLI classification (predicting the entailment or not-entailment class)
|
{}
|
kangnichaluo/mnli-5
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
learning rate: 2e-5
training epochs: 3
batch size: 64
seed: 111
model: bert-base-uncased
trained on MNLI, which is converted into two-way NLI classification (predicting the entailment or not-entailment class)
|
[] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |