modelId (string, lengths 5 to 139) | author (string, lengths 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-02 00:39:05) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 532 classes) | tags (list, lengths 1 to 4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-02 00:38:59) | card (string, lengths 11 to 1.01M)
---|---|---|---|---|---|---|---|---|---|
AdShenoy/Bart_summarizer
|
AdShenoy
| 2022-08-10T06:34:37Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2022-08-10T06:34:09Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
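Until the card is filled in, here is a minimal, hypothetical sketch of loading the hosted model with `huggingface_hub`, under the assumption that the repo was pushed with `push_to_hub_fastai`; the input text is made up.
```python
from huggingface_hub import from_pretrained_fastai

# Hypothetical sketch, not part of the original card: load the hosted fastai
# Learner (assumes the repo was created with push_to_hub_fastai).
learner = from_pretrained_fastai("AdShenoy/Bart_summarizer")

# Made-up input, only to show the call shape of a fastai Learner:
# prediction = learner.predict("Text to summarize goes here.")
```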
|
wannaphong/wav2vec2-large-xlsr-53-th-cv8-deepcut
|
wannaphong
| 2022-08-10T05:40:50Z | 12 | 5 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"th",
"dataset:common_voice",
"arxiv:2208.04799",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-07T08:11:41Z |
---
language:
- th
tags:
- automatic-speech-recognition
license: apache-2.0
datasets:
- common_voice
metrics:
- wer
- cer
---
# Thai Wav2Vec2 with CommonVoice V8 (deepcut tokenizer) + language model
This model was trained on the CommonVoice V8 dataset, which extends the CommonVoice V7 data used in [airesearch/wav2vec2-large-xlsr-53-th](https://huggingface.co/airesearch/wav2vec2-large-xlsr-53-th). It was fine-tuned from [wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53).
## Model description
- Technical report: [Thai Wav2Vec2.0 with CommonVoice V8](https://arxiv.org/abs/2208.04799)
## Datasets
The training data adds the new recordings from the Common Voice V8 release on top of Common Voice V7: all Common Voice V7 data was removed before splitting Common Voice V8, and the Common Voice V7 data was then added back to the resulting splits.
The [ekapolc/Thai_commonvoice_split](https://github.com/ekapolc/Thai_commonvoice_split) script was used to split the Common Voice dataset.
## Models
This model was fine-tuned from the [wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) model on the Thai Common Voice V8 dataset, with transcripts pre-tokenized by `deepcut.tokenize`.
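The card does not include an inference example; the following is a minimal sketch (an assumption, not from the original card) that runs the checkpoint through the plain Transformers ASR pipeline. It may not apply the n-gram language model mentioned in the title, and the audio path is hypothetical.
```python
from transformers import pipeline

# Sketch only: transcribe a 16 kHz Thai audio file with the Transformers ASR
# pipeline. The file name is hypothetical, and the separate n-gram language
# model from the title is not applied here.
asr = pipeline(
    "automatic-speech-recognition",
    model="wannaphong/wav2vec2-large-xlsr-53-th-cv8-deepcut",
)
print(asr("thai_sample_16khz.wav"))
```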
## Evaluation
**Test with CommonVoice V8 Testset**
| Model | WER by newmm (%) | WER by deepcut (%) | CER |
|-----------------------|------------------|--------------------|----------|
| AIResearch.in.th and PyThaiNLP | 17.414503 | 11.923089 | 3.854153 |
| wav2vec2 with deepcut | 16.354521 | 11.424476 | 3.684060 |
| wav2vec2 with newmm | 16.698299 | 11.436941 | 3.737407 |
| **wav2vec2 with deepcut + language model** | 12.630260 | 9.613886 | 3.292073 |
| wav2vec2 with newmm + language model | 12.583706 | 9.598305 | 3.276610 |
**Test with CommonVoice V7 Testset (same test set as CV V7)**
| Model | WER by newmm (%) | WER by deepcut (%) | CER |
|-----------------------|------------------|--------------------|----------|
| AIResearch.in.th and PyThaiNLP | 13.936698 | 9.347462 | 2.804787 |
| wav2vec2 with deepcut | 12.776381 | 8.773006 | 2.628882 |
| wav2vec2 with newmm | 12.750596 | 8.672616 | 2.623341 |
| **wav2vec2 with deepcut + language model** | 9.940050 | 7.423313 | 2.344940 |
| wav2vec2 with newmm + language model | 9.559724 | 7.339654 | 2.277071 |
These results use the same test set as [https://huggingface.co/airesearch/wav2vec2-large-xlsr-53-th](https://huggingface.co/airesearch/wav2vec2-large-xlsr-53-th).
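For reference, here is a sketch of how WER and CER figures like those above can be computed with the Hugging Face `evaluate` library; the example strings are made up, and the "WER by newmm" / "WER by deepcut" columns additionally re-segment words with the named tokenizer before scoring.
```python
import evaluate

# Sketch (assumption, not from the card): compute WER/CER on
# (reference, prediction) pairs of space-separated Thai words.
wer = evaluate.load("wer")
cer = evaluate.load("cer")

references = ["สวัสดี ครับ"]   # made-up reference transcript
predictions = ["สวัสดี คับ"]   # made-up model output

print(wer.compute(references=references, predictions=predictions))
print(cer.compute(references=references, predictions=predictions))
```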
**Links:**
- GitHub Dataset: [https://github.com/wannaphong/thai_commonvoice_dataset](https://github.com/wannaphong/thai_commonvoice_dataset)
- Technical report: [Thai Wav2Vec2.0 with CommonVoice V8](https://arxiv.org/abs/2208.04799)
## BibTeX entry and citation info
```
@misc{phatthiyaphaibun2022thai,
title={Thai Wav2Vec2.0 with CommonVoice V8},
author={Wannaphong Phatthiyaphaibun and Chompakorn Chaksangchaichot and Peerat Limkonchotiwat and Ekapol Chuangsuwanich and Sarana Nutanong},
year={2022},
eprint={2208.04799},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
wannaphong/wav2vec2-large-xlsr-53-th-cv8-newmm
|
wannaphong
| 2022-08-10T05:40:25Z | 11,178 | 2 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"th",
"dataset:common_voice",
"arxiv:2208.04799",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-06T09:01:59Z |
---
language:
- th
tags:
- automatic-speech-recognition
license: apache-2.0
datasets:
- common_voice
metrics:
- wer
- cer
---
# Thai Wav2Vec2 with CommonVoice V8 (newmm tokenizer) + language model
This model was trained on the CommonVoice V8 dataset, which extends the CommonVoice V7 data used in [airesearch/wav2vec2-large-xlsr-53-th](https://huggingface.co/airesearch/wav2vec2-large-xlsr-53-th). It was fine-tuned from [wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53).
## Model description
- Technical report: [Thai Wav2Vec2.0 with CommonVoice V8](https://arxiv.org/abs/2208.04799)
## Datasets
The training data adds the new recordings from the Common Voice V8 release on top of Common Voice V7: all Common Voice V7 data was removed before splitting Common Voice V8, and the Common Voice V7 data was then added back to the resulting splits.
The [ekapolc/Thai_commonvoice_split](https://github.com/ekapolc/Thai_commonvoice_split) script was used to split the Common Voice dataset.
## Models
This model was fine-tuned from the [wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) model on the Thai Common Voice V8 dataset, with transcripts pre-tokenized by `pythainlp.tokenize.word_tokenize`.
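As an illustration of that pre-tokenization step, here is a short sketch under the assumption that the default `newmm` engine of `word_tokenize` is used; the example sentence is made up.
```python
from pythainlp.tokenize import word_tokenize

# Sketch of the pre-tokenization step: segment a Thai transcript into words
# with the newmm engine before it reaches the CTC tokenizer.
print(word_tokenize("ภาษาไทยง่ายนิดเดียว", engine="newmm"))
# e.g. ['ภาษาไทย', 'ง่าย', 'นิดเดียว']
```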
## Training
Much of the training code comes from [vistec-AI/wav2vec2-large-xlsr-53-th](https://github.com/vistec-AI/wav2vec2-large-xlsr-53-th); a bug in the training code was fixed in [vistec-AI/wav2vec2-large-xlsr-53-th#2](https://github.com/vistec-AI/wav2vec2-large-xlsr-53-th/pull/2).
## Evaluation
**Test with CommonVoice V8 Testset**
| Model | WER by newmm (%) | WER by deepcut (%) | CER |
|-----------------------|------------------|--------------------|----------|
| AIResearch.in.th and PyThaiNLP | 17.414503 | 11.923089 | 3.854153 |
| wav2vec2 with deepcut | 16.354521 | 11.424476 | 3.684060 |
| wav2vec2 with newmm | 16.698299 | 11.436941 | 3.737407 |
| wav2vec2 with deepcut + language model | 12.630260 | 9.613886 | 3.292073 |
| **wav2vec2 with newmm + language model** | 12.583706 | 9.598305 | 3.276610 |
**Test with CommonVoice V7 Testset (same test set as CV V7)**
| Model | WER by newmm (%) | WER by deepcut (%) | CER |
|-----------------------|------------------|--------------------|----------|
| AIResearch.in.th and PyThaiNLP | 13.936698 | 9.347462 | 2.804787 |
| wav2vec2 with deepcut | 12.776381 | 8.773006 | 2.628882 |
| wav2vec2 with newmm | 12.750596 | 8.672616 | 2.623341 |
| wav2vec2 with deepcut + language model | 9.940050 | 7.423313 | 2.344940 |
| **wav2vec2 with newmm + language model** | 9.559724 | 7.339654 | 2.277071 |
These results use the same test set as [https://huggingface.co/airesearch/wav2vec2-large-xlsr-53-th](https://huggingface.co/airesearch/wav2vec2-large-xlsr-53-th).
**Links:**
- GitHub Dataset: [https://github.com/wannaphong/thai_commonvoice_dataset](https://github.com/wannaphong/thai_commonvoice_dataset)
- Technical report: [Thai Wav2Vec2.0 with CommonVoice V8](https://arxiv.org/abs/2208.04799)
## BibTeX entry and citation info
```
@misc{phatthiyaphaibun2022thai,
title={Thai Wav2Vec2.0 with CommonVoice V8},
author={Wannaphong Phatthiyaphaibun and Chompakorn Chaksangchaichot and Peerat Limkonchotiwat and Ekapol Chuangsuwanich and Sarana Nutanong},
year={2022},
eprint={2208.04799},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
yokoe/xlm-roberta-base-finetuned-panx-de-fr
|
yokoe
| 2022-08-10T05:27:36Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-10T05:00:36Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1608
- F1: 0.8593
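Since the card gives no usage example, here is a minimal sketch (an assumption, not from the card) of running the checkpoint as a token-classification pipeline; the sentence is made up.
```python
from transformers import pipeline

# Sketch only: run the fine-tuned checkpoint as an NER pipeline and group
# sub-word predictions into entity spans.
ner = pipeline(
    "token-classification",
    model="yokoe/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel besuchte Paris im Juli."))
```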
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2888 | 1.0 | 715 | 0.1779 | 0.8233 |
| 0.1437 | 2.0 | 1430 | 0.1570 | 0.8497 |
| 0.0931 | 3.0 | 2145 | 0.1608 | 0.8593 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
jaybeeja/dqn-SpaceInvadersNoFrameskip-v4
|
jaybeeja
| 2022-08-10T05:15:18Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-10T05:14:20Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 666.50 +/- 129.83
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jaybeeja -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jaybeeja
```
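Outside the RL Zoo scripts, the checkpoint can also be loaded directly with `huggingface_sb3` and Stable-Baselines3. This is only a sketch; the zip filename is an assumption about how the RL Zoo names the saved agent.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Sketch only (not from the card): pull the saved agent from the Hub and load
# it with SB3. The filename is an assumption, not confirmed by the card.
checkpoint = load_from_hub(
    repo_id="jaybeeja/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```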
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
domenicrosati/deberta-v3-large-finetuned-syndag-multiclass-not-gpt2-arxiv
|
domenicrosati
| 2022-08-10T04:59:02Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-09T22:13:21Z |
---
license: mit
tags:
- text-classification
- generated_from_trainer
metrics:
- f1
- precision
- recall
model-index:
- name: deberta-v3-large-finetuned-syndag-multiclass-not-gpt2-arxiv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large-finetuned-syndag-multiclass-not-gpt2-arxiv
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0272
- F1: 0.9941
- Precision: 0.9941
- Recall: 0.9941
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 1
- mixed_precision_training: Native AMP
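As a rough illustration, these settings map onto `transformers.TrainingArguments` roughly as follows; this is a sketch, not the training script that produced the checkpoint, and the output directory name is arbitrary.
```python
from transformers import TrainingArguments

# Approximate mapping of the listed hyperparameters onto TrainingArguments.
# The Adam betas/epsilon listed above match the Trainer defaults.
training_args = TrainingArguments(
    output_dir="deberta-v3-large-finetuned-syndag",  # arbitrary name
    learning_rate=6e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=1,
    fp16=True,  # "Native AMP" mixed precision
)
```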
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:---------:|:------:|
| 0.0213 | 1.0 | 10853 | 0.0309 | 0.9945 | 0.9945 | 0.9945 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
yokoe/xlm-roberta-base-finetuned-panx-de
|
yokoe
| 2022-08-10T03:41:48Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-10T03:13:21Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: train
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8648740833380706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Lvxue/distilled-mt5-small-1b0000
|
Lvxue
| 2022-08-10T03:41:30Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"en",
"ro",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-10T02:23:44Z |
---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-1b0000
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 1.1101
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-1b0000
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7760
- Bleu: 1.1101
- Gen Len: 99.5898
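The card has no usage example; here is a minimal sketch (an assumption, not from the card) of running the checkpoint through the text2text-generation pipeline on a made-up Romanian sentence. Whether a task prefix is required is not documented.
```python
from transformers import pipeline

# Sketch only: Romanian-to-English translation with the distilled checkpoint.
translator = pipeline(
    "text2text-generation",
    model="Lvxue/distilled-mt5-small-1b0000",
)
print(translator("Aceasta este o propoziție de test.", max_length=64))
```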
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Lvxue/distilled-mt5-small-010099_8
|
Lvxue
| 2022-08-10T03:32:16Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"en",
"ro",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-10T02:24:27Z |
---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-010099_8
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 6.231
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-010099_8
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9641
- Bleu: 6.231
- Gen Len: 50.1911
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Lvxue/distilled-mt5-small-010099_1
|
Lvxue
| 2022-08-10T03:29:07Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"en",
"ro",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-10T02:20:53Z |
---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-010099_1
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.3454
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-010099_1
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8040
- Bleu: 7.3454
- Gen Len: 44.8149
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
SmartPy/xlm-roberta-base-finetuned-my_dear_watson2
|
SmartPy
| 2022-08-10T03:10:11Z | 95 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-10T02:49:36Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-finetuned-my_dear_watson2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-my_dear_watson2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
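Since the card gives no usage example, here is a minimal sketch (an assumption, not from the card) of querying the fine-tuned masked language model; XLM-RoBERTa uses `<mask>` as its mask token.
```python
from transformers import pipeline

# Sketch only: top predictions for the masked position.
fill = pipeline(
    "fill-mask",
    model="SmartPy/xlm-roberta-base-finetuned-my_dear_watson2",
)
print(fill("Elementary, my dear <mask>."))
```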
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
sumba/covid-twitter-bert-v2-no_description-stance-loss-hyp-unprocess2
|
sumba
| 2022-08-10T02:05:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-09T08:49:30Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: covid-twitter-bert-v2-no_description-stance-loss-hyp-unprocess2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid-twitter-bert-v2-no_description-stance-loss-hyp-unprocess2
This model is a fine-tuned version of [digitalepidemiologylab/covid-twitter-bert-v2](https://huggingface.co/digitalepidemiologylab/covid-twitter-bert-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5816
- Accuracy: 0.0901
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.4275469935864394e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8511 | 1.0 | 700 | 0.6372 | 0.1478 |
| 0.6146 | 2.0 | 1400 | 0.5816 | 0.0901 |
| 0.365 | 3.0 | 2100 | 0.6170 | 0.0749 |
| 0.2686 | 4.0 | 2800 | 0.7259 | 0.0688 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
|
elopezlopez/Bio_ClinicalBERT_fold_10_ternary_v1
|
elopezlopez
| 2022-08-10T02:02:59Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-10T01:40:37Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Bio_ClinicalBERT_fold_10_ternary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bio_ClinicalBERT_fold_10_ternary_v1
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0706
- F1: 0.7748
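No usage example or label mapping is documented; the following is a minimal sketch (an assumption, not from the card) of running the checkpoint as a text-classification pipeline on a made-up clinical sentence.
```python
from transformers import pipeline

# Sketch only: the ternary label names are not documented in the card, so the
# output labels are whatever the checkpoint's config holds.
clf = pipeline(
    "text-classification",
    model="elopezlopez/Bio_ClinicalBERT_fold_10_ternary_v1",
)
print(clf("The patient was discharged in stable condition."))
```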
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 290 | 0.6097 | 0.7290 |
| 0.555 | 2.0 | 580 | 0.6106 | 0.7649 |
| 0.555 | 3.0 | 870 | 0.6608 | 0.7847 |
| 0.2449 | 4.0 | 1160 | 0.8894 | 0.7809 |
| 0.2449 | 5.0 | 1450 | 1.1049 | 0.7760 |
| 0.1055 | 6.0 | 1740 | 1.2951 | 0.7884 |
| 0.0338 | 7.0 | 2030 | 1.4809 | 0.7760 |
| 0.0338 | 8.0 | 2320 | 1.4751 | 0.7698 |
| 0.0225 | 9.0 | 2610 | 1.6648 | 0.7809 |
| 0.0225 | 10.0 | 2900 | 1.7174 | 0.7772 |
| 0.006 | 11.0 | 3190 | 1.7872 | 0.7735 |
| 0.006 | 12.0 | 3480 | 1.7803 | 0.7748 |
| 0.0161 | 13.0 | 3770 | 1.9302 | 0.7735 |
| 0.0005 | 14.0 | 4060 | 1.9853 | 0.7748 |
| 0.0005 | 15.0 | 4350 | 2.0043 | 0.7735 |
| 0.0062 | 16.0 | 4640 | 1.9969 | 0.7760 |
| 0.0062 | 17.0 | 4930 | 2.0173 | 0.7760 |
| 0.0068 | 18.0 | 5220 | 1.9891 | 0.7785 |
| 0.0034 | 19.0 | 5510 | 1.9951 | 0.7797 |
| 0.0034 | 20.0 | 5800 | 2.0283 | 0.7748 |
| 0.0049 | 21.0 | 6090 | 1.9985 | 0.7834 |
| 0.0049 | 22.0 | 6380 | 2.0131 | 0.7760 |
| 0.0011 | 23.0 | 6670 | 2.0526 | 0.7748 |
| 0.0011 | 24.0 | 6960 | 2.0662 | 0.7748 |
| 0.001 | 25.0 | 7250 | 2.0706 | 0.7748 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
elopezlopez/Bio_ClinicalBERT_fold_9_ternary_v1
|
elopezlopez
| 2022-08-10T01:39:59Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-10T01:16:55Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Bio_ClinicalBERT_fold_9_ternary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bio_ClinicalBERT_fold_9_ternary_v1
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0189
- F1: 0.7905
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 292 | 0.5758 | 0.7571 |
| 0.5482 | 2.0 | 584 | 0.6282 | 0.7609 |
| 0.5482 | 3.0 | 876 | 0.6823 | 0.7841 |
| 0.2346 | 4.0 | 1168 | 0.9898 | 0.7776 |
| 0.2346 | 5.0 | 1460 | 1.1397 | 0.7866 |
| 0.1001 | 6.0 | 1752 | 1.3832 | 0.7751 |
| 0.0447 | 7.0 | 2044 | 1.6002 | 0.7674 |
| 0.0447 | 8.0 | 2336 | 1.7265 | 0.7584 |
| 0.0171 | 9.0 | 2628 | 1.6650 | 0.7699 |
| 0.0171 | 10.0 | 2920 | 1.7322 | 0.7661 |
| 0.0156 | 11.0 | 3212 | 1.8071 | 0.7789 |
| 0.012 | 12.0 | 3504 | 1.8322 | 0.7841 |
| 0.012 | 13.0 | 3796 | 1.8948 | 0.7763 |
| 0.01 | 14.0 | 4088 | 1.7667 | 0.7918 |
| 0.01 | 15.0 | 4380 | 1.8538 | 0.7879 |
| 0.0063 | 16.0 | 4672 | 1.9763 | 0.7776 |
| 0.0063 | 17.0 | 4964 | 1.9970 | 0.7841 |
| 0.0028 | 18.0 | 5256 | 1.9366 | 0.7931 |
| 0.0003 | 19.0 | 5548 | 1.9709 | 0.7892 |
| 0.0003 | 20.0 | 5840 | 1.9460 | 0.7879 |
| 0.0044 | 21.0 | 6132 | 2.0280 | 0.7866 |
| 0.0044 | 22.0 | 6424 | 1.9423 | 0.7918 |
| 0.0013 | 23.0 | 6716 | 1.9618 | 0.7918 |
| 0.004 | 24.0 | 7008 | 2.0241 | 0.7905 |
| 0.004 | 25.0 | 7300 | 2.0189 | 0.7905 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ultra-coder54732/3-way-detection-prop-16
|
ultra-coder54732
| 2022-08-10T01:37:14Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-08T20:58:37Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: 3-way-detection-prop-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 3-way-detection-prop-16
This model is a fine-tuned version of [ultra-coder54732/3-way-detection-prop-16](https://huggingface.co/ultra-coder54732/3-way-detection-prop-16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
elopezlopez/Bio_ClinicalBERT_fold_7_ternary_v1
|
elopezlopez
| 2022-08-10T00:52:48Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-10T00:30:26Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Bio_ClinicalBERT_fold_7_ternary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bio_ClinicalBERT_fold_7_ternary_v1
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9612
- F1: 0.7939
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 291 | 0.5762 | 0.7593 |
| 0.5434 | 2.0 | 582 | 0.5577 | 0.7939 |
| 0.5434 | 3.0 | 873 | 0.6501 | 0.7951 |
| 0.2198 | 4.0 | 1164 | 0.8661 | 0.7939 |
| 0.2198 | 5.0 | 1455 | 1.1493 | 0.7900 |
| 0.0953 | 6.0 | 1746 | 1.1999 | 0.7977 |
| 0.0375 | 7.0 | 2037 | 1.4623 | 0.7759 |
| 0.0375 | 8.0 | 2328 | 1.4526 | 0.7900 |
| 0.0246 | 9.0 | 2619 | 1.6915 | 0.7734 |
| 0.0246 | 10.0 | 2910 | 1.6097 | 0.7913 |
| 0.0113 | 11.0 | 3201 | 1.7091 | 0.8015 |
| 0.0113 | 12.0 | 3492 | 1.7252 | 0.7990 |
| 0.0103 | 13.0 | 3783 | 1.7305 | 0.8015 |
| 0.0079 | 14.0 | 4074 | 1.7932 | 0.8003 |
| 0.0079 | 15.0 | 4365 | 1.7800 | 0.8028 |
| 0.0071 | 16.0 | 4656 | 1.7000 | 0.7977 |
| 0.0071 | 17.0 | 4947 | 1.8342 | 0.8003 |
| 0.0077 | 18.0 | 5238 | 1.8517 | 0.7990 |
| 0.0044 | 19.0 | 5529 | 1.8633 | 0.7964 |
| 0.0044 | 20.0 | 5820 | 1.8813 | 0.7926 |
| 0.0028 | 21.0 | 6111 | 1.8914 | 0.7964 |
| 0.0028 | 22.0 | 6402 | 1.9412 | 0.7926 |
| 0.0043 | 23.0 | 6693 | 1.9760 | 0.7939 |
| 0.0043 | 24.0 | 6984 | 1.9509 | 0.7977 |
| 0.0002 | 25.0 | 7275 | 1.9612 | 0.7939 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
johngiorgi/declutr-base
|
johngiorgi
| 2022-08-10T00:36:40Z | 49 | 7 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"jax",
"roberta",
"feature-extraction",
"sentence-similarity",
"en",
"dataset:openwebtext",
"arxiv:2006.03659",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
datasets:
- openwebtext
---
# DeCLUTR-base
## Model description
The "DeCLUTR-base" model from our paper: [DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations](https://arxiv.org/abs/2006.03659).
## Intended uses & limitations
The model is intended to be used as a universal sentence encoder, similar to [Google's Universal Sentence Encoder](https://tfhub.dev/google/universal-sentence-encoder/4) or [Sentence Transformers](https://github.com/UKPLab/sentence-transformers).
#### How to use
Please see [our repo](https://github.com/JohnGiorgi/DeCLUTR) for full details. A simple example is shown below.
##### With [SentenceTransformers](https://www.sbert.net/)
```python
from scipy.spatial.distance import cosine
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer("johngiorgi/declutr-base")
# Prepare some text to embed
texts = [
"A smiling costumed woman is holding an umbrella.",
"A happy woman in a fairy costume holds an umbrella.",
]
# Embed the text
embeddings = model.encode(texts)
# Compute a semantic similarity via the cosine distance
semantic_sim = 1 - cosine(embeddings[0], embeddings[1])
```
##### With 🤗 Transformers
```python
import torch
from scipy.spatial.distance import cosine
from transformers import AutoModel, AutoTokenizer
# Load the model
tokenizer = AutoTokenizer.from_pretrained("johngiorgi/declutr-base")
model = AutoModel.from_pretrained("johngiorgi/declutr-base")
# Prepare some text to embed
text = [
"A smiling costumed woman is holding an umbrella.",
"A happy woman in a fairy costume holds an umbrella.",
]
inputs = tokenizer(text, padding=True, truncation=True, return_tensors="pt")
# Embed the text
with torch.no_grad():
sequence_output = model(**inputs)[0]
# Mean pool the token-level embeddings to get sentence-level embeddings
embeddings = torch.sum(
sequence_output * inputs["attention_mask"].unsqueeze(-1), dim=1
) / torch.clamp(torch.sum(inputs["attention_mask"], dim=1, keepdims=True), min=1e-9)
# Compute a semantic similarity via the cosine distance
semantic_sim = 1 - cosine(embeddings[0], embeddings[1])
```
### BibTeX entry and citation info
```bibtex
@inproceedings{giorgi-etal-2021-declutr,
title = {{D}e{CLUTR}: Deep Contrastive Learning for Unsupervised Textual Representations},
author = {Giorgi, John and Nitski, Osvald and Wang, Bo and Bader, Gary},
year = 2021,
month = aug,
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)},
publisher = {Association for Computational Linguistics},
address = {Online},
pages = {879--895},
doi = {10.18653/v1/2021.acl-long.72},
url = {https://aclanthology.org/2021.acl-long.72}
}
```
|
johngiorgi/declutr-sci-base
|
johngiorgi
| 2022-08-10T00:35:23Z | 65 | 9 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"sentence-similarity",
"en",
"dataset:s2orc",
"arxiv:2006.03659",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
datasets:
- s2orc
---
# DeCLUTR-sci-base
## Model description
This is the [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) model, with extended pretraining on over 2 million scientific papers from [S2ORC](https://github.com/allenai/s2orc/) using the self-supervised training strategy presented in [DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations](https://arxiv.org/abs/2006.03659).
## Intended uses & limitations
The model is intended to be used as a sentence encoder, similar to [Google's Universal Sentence Encoder](https://tfhub.dev/google/universal-sentence-encoder/4) or [Sentence Transformers](https://github.com/UKPLab/sentence-transformers). It is particularly suitable for scientific text.
#### How to use
Please see [our repo](https://github.com/JohnGiorgi/DeCLUTR) for full details. A simple example is shown below.
##### With [SentenceTransformers](https://www.sbert.net/)
```python
from scipy.spatial.distance import cosine
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer("johngiorgi/declutr-sci-base")
# Prepare some text to embed
texts = [
"Oncogenic KRAS mutations are common in cancer.",
"Notably, c-Raf has recently been found essential for development of K-Ras-driven NSCLCs.",
]
# Embed the text
embeddings = model.encode(texts)
# Compute a semantic similarity via the cosine distance
semantic_sim = 1 - cosine(embeddings[0], embeddings[1])
```
##### With 🤗 Transformers
```python
import torch
from scipy.spatial.distance import cosine
from transformers import AutoModel, AutoTokenizer
# Load the model
tokenizer = AutoTokenizer.from_pretrained("johngiorgi/declutr-sci-base")
model = AutoModel.from_pretrained("johngiorgi/declutr-sci-base")
# Prepare some text to embed
text = [
"Oncogenic KRAS mutations are common in cancer.",
"Notably, c-Raf has recently been found essential for development of K-Ras-driven NSCLCs.",
]
inputs = tokenizer(text, padding=True, truncation=True, return_tensors="pt")
# Embed the text
with torch.no_grad():
sequence_output = model(**inputs)[0]
# Mean pool the token-level embeddings to get sentence-level embeddings
embeddings = torch.sum(
sequence_output * inputs["attention_mask"].unsqueeze(-1), dim=1
) / torch.clamp(torch.sum(inputs["attention_mask"], dim=1, keepdims=True), min=1e-9)
# Compute a semantic similarity via the cosine distance
semantic_sim = 1 - cosine(embeddings[0], embeddings[1])
```
### BibTeX entry and citation info
```bibtex
@inproceedings{giorgi-etal-2021-declutr,
title = {{D}e{CLUTR}: Deep Contrastive Learning for Unsupervised Textual Representations},
author = {Giorgi, John and Nitski, Osvald and Wang, Bo and Bader, Gary},
year = 2021,
month = aug,
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)},
publisher = {Association for Computational Linguistics},
address = {Online},
pages = {879--895},
doi = {10.18653/v1/2021.acl-long.72},
url = {https://aclanthology.org/2021.acl-long.72}
}
```
|
Tstarshak/testpyramidsrnd
|
Tstarshak
| 2022-08-10T00:20:31Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2022-08-10T00:20:23Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Enter your model_id: Tstarshak/testpyramidsrnd
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
elopezlopez/Bio_ClinicalBERT_fold_5_ternary_v1
|
elopezlopez
| 2022-08-10T00:07:08Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-09T23:45:15Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Bio_ClinicalBERT_fold_5_ternary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bio_ClinicalBERT_fold_5_ternary_v1
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0233
- F1: 0.7849
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 291 | 0.6352 | 0.7286 |
| 0.5477 | 2.0 | 582 | 0.5965 | 0.7682 |
| 0.5477 | 3.0 | 873 | 0.7696 | 0.7554 |
| 0.2383 | 4.0 | 1164 | 1.0119 | 0.7631 |
| 0.2383 | 5.0 | 1455 | 1.1300 | 0.7772 |
| 0.1068 | 6.0 | 1746 | 1.3515 | 0.7734 |
| 0.0401 | 7.0 | 2037 | 1.4935 | 0.7721 |
| 0.0401 | 8.0 | 2328 | 1.5418 | 0.7875 |
| 0.0213 | 9.0 | 2619 | 1.6902 | 0.7746 |
| 0.0213 | 10.0 | 2910 | 1.7091 | 0.7721 |
| 0.014 | 11.0 | 3201 | 1.7422 | 0.7836 |
| 0.014 | 12.0 | 3492 | 1.8603 | 0.7772 |
| 0.012 | 13.0 | 3783 | 1.8419 | 0.7734 |
| 0.0104 | 14.0 | 4074 | 1.9616 | 0.7657 |
| 0.0104 | 15.0 | 4365 | 1.9342 | 0.7823 |
| 0.005 | 16.0 | 4656 | 1.9646 | 0.7746 |
| 0.005 | 17.0 | 4947 | 1.9943 | 0.7772 |
| 0.0075 | 18.0 | 5238 | 1.9882 | 0.7798 |
| 0.0071 | 19.0 | 5529 | 1.9909 | 0.7849 |
| 0.0071 | 20.0 | 5820 | 2.0568 | 0.7798 |
| 0.0029 | 21.0 | 6111 | 2.0508 | 0.7759 |
| 0.0029 | 22.0 | 6402 | 2.0267 | 0.7823 |
| 0.0047 | 23.0 | 6693 | 2.0534 | 0.7785 |
| 0.0047 | 24.0 | 6984 | 2.0336 | 0.7849 |
| 0.0014 | 25.0 | 7275 | 2.0233 | 0.7849 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
BrianT/distilbert-base-uncased-finetuned-cola
|
BrianT
| 2022-08-09T23:23:41Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-09T21:45:10Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: train
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5474713423103301
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5254
- Matthews Correlation: 0.5475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5221 | 1.0 | 535 | 0.5360 | 0.4307 |
| 0.3491 | 2.0 | 1070 | 0.5128 | 0.4972 |
| 0.2382 | 3.0 | 1605 | 0.5254 | 0.5475 |
| 0.1756 | 4.0 | 2140 | 0.7479 | 0.5330 |
| 0.1248 | 5.0 | 2675 | 0.7978 | 0.5414 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
aujer/not_interested_v0
|
aujer
| 2022-08-09T22:30:28Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain",
"en",
"dataset:aujer/autotrain-data-not_interested_3",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-09T22:28:49Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- aujer/autotrain-data-not_interested_3
co2_eq_emissions:
emissions: 2.307650736568978
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1235146886
- CO2 Emissions (in grams): 2.3077
## Validation Metrics
- Loss: 0.802
- Accuracy: 0.788
- Macro F1: 0.743
- Micro F1: 0.788
- Weighted F1: 0.782
- Macro Precision: 0.818
- Micro Precision: 0.788
- Weighted Precision: 0.796
- Macro Recall: 0.722
- Micro Recall: 0.788
- Weighted Recall: 0.788
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/aujer/autotrain-not_interested_3-1235146886
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("aujer/autotrain-not_interested_3-1235146886", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("aujer/autotrain-not_interested_3-1235146886", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
ericntay/clinical_bio_bert_ft
|
ericntay
| 2022-08-09T21:56:28Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-09T21:25:48Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: clinical_bio_bert_ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clinical_bio_bert_ft
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2570
- F1: 0.8160
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6327 | 1.0 | 95 | 0.2442 | 0.7096 |
| 0.1692 | 2.0 | 190 | 0.2050 | 0.7701 |
| 0.0878 | 3.0 | 285 | 0.1923 | 0.8002 |
| 0.0493 | 4.0 | 380 | 0.2234 | 0.8079 |
| 0.0302 | 5.0 | 475 | 0.2250 | 0.8090 |
| 0.0191 | 6.0 | 570 | 0.2363 | 0.8145 |
| 0.0132 | 7.0 | 665 | 0.2489 | 0.8178 |
| 0.0102 | 8.0 | 760 | 0.2494 | 0.8152 |
| 0.008 | 9.0 | 855 | 0.2542 | 0.8191 |
| 0.0068 | 10.0 | 950 | 0.2570 | 0.8160 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Ammar-alhaj-ali/arabic-MARBERT-dialect-identification-city
|
Ammar-alhaj-ali
| 2022-08-09T21:04:32Z | 586 | 10 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"text classification",
"ar",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-09T19:10:53Z |
---
language:
- ar
widget:
- text: "ما شفت هدا العنوان هون"
- text: "مبقدرش احافظ علي المستوى الدراسي بتاعي"
- text: "منين تطلع بهاي السوالف كل يوم"
tags:
- text classification
---
## Arabic-MARBERT-dialect-Identification-City Model
#### Model description
**arabic-MARBERT-dialect-identification-city Model** is a dialect identification model that was built by fine-tuning the [MARBERT](https://huggingface.co/UBC-NLP/MARBERT) model. For the fine-tuning, I used the [MADAR Corpus 26 dataset](https://camel.abudhabi.nyu.edu/madar-shared-task-2019/), which includes 26 labels (cities).
#### How to use
To use the model with a transformers pipeline:
```python
>>>from transformers import pipeline
>>>model = pipeline('text-classification', model='Ammar-alhaj-ali/arabic-MARBERT-dialect-identification-city')
>>>sentences = ['ناطرين البرنامج', 'اكلنا هوا بهل شروة']
>>>model(sentences)
[{'label': 'Beirut', 'score': 0.9731963276863098},
{'label': 'Aleppo', 'score': 0.4592577815055847}]
```
|
elopezlopez/Bio_ClinicalBERT_fold_2_ternary_v1
|
elopezlopez
| 2022-08-09T20:59:38Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-04T11:38:09Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Bio_ClinicalBERT_fold_2_ternary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bio_ClinicalBERT_fold_2_ternary_v1
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8186
- F1: 0.8038
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 294 | 0.5629 | 0.7645 |
| 0.5579 | 2.0 | 588 | 0.5078 | 0.8078 |
| 0.5579 | 3.0 | 882 | 0.6622 | 0.7754 |
| 0.2341 | 4.0 | 1176 | 0.8584 | 0.7943 |
| 0.2341 | 5.0 | 1470 | 1.1953 | 0.7821 |
| 0.0942 | 6.0 | 1764 | 1.3193 | 0.7876 |
| 0.0338 | 7.0 | 2058 | 1.3324 | 0.7903 |
| 0.0338 | 8.0 | 2352 | 1.5043 | 0.7930 |
| 0.0202 | 9.0 | 2646 | 1.5255 | 0.7889 |
| 0.0202 | 10.0 | 2940 | 1.5382 | 0.7916 |
| 0.0119 | 11.0 | 3234 | 1.6377 | 0.7903 |
| 0.0051 | 12.0 | 3528 | 1.7349 | 0.7835 |
| 0.0051 | 13.0 | 3822 | 1.7297 | 0.7835 |
| 0.0082 | 14.0 | 4116 | 1.7817 | 0.7808 |
| 0.0082 | 15.0 | 4410 | 1.7105 | 0.7970 |
| 0.0054 | 16.0 | 4704 | 1.7325 | 0.7984 |
| 0.0054 | 17.0 | 4998 | 1.7919 | 0.7943 |
| 0.0049 | 18.0 | 5292 | 1.8850 | 0.7876 |
| 0.0045 | 19.0 | 5586 | 1.8237 | 0.7916 |
| 0.0045 | 20.0 | 5880 | 1.8760 | 0.7970 |
| 0.0024 | 21.0 | 6174 | 1.8544 | 0.7984 |
| 0.0024 | 22.0 | 6468 | 1.7852 | 0.8011 |
| 0.0005 | 23.0 | 6762 | 1.7795 | 0.8065 |
| 0.0031 | 24.0 | 7056 | 1.7978 | 0.7997 |
| 0.0031 | 25.0 | 7350 | 1.8186 | 0.8038 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Mozart-coder/dna_bert_3_2-finetuned
|
Mozart-coder
| 2022-08-09T20:08:04Z | 157 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-09T18:50:04Z |
---
tags:
- generated_from_trainer
model-index:
- name: dna_bert_3_2-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dna_bert_3_2-finetuned
This model is a fine-tuned version of [armheb/DNA_bert_3](https://huggingface.co/armheb/DNA_bert_3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4668
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8974 | 1.0 | 62 | 0.6160 |
| 0.6057 | 2.0 | 124 | 0.6000 |
| 0.5957 | 3.0 | 186 | 0.5897 |
| 0.5883 | 4.0 | 248 | 0.5873 |
| 0.5844 | 5.0 | 310 | 0.5843 |
| 0.5812 | 6.0 | 372 | 0.5811 |
| 0.5812 | 7.0 | 434 | 0.5832 |
| 0.5769 | 8.0 | 496 | 0.5773 |
| 0.5727 | 9.0 | 558 | 0.5771 |
| 0.5702 | 10.0 | 620 | 0.5772 |
| 0.5673 | 11.0 | 682 | 0.5771 |
| 0.5663 | 12.0 | 744 | 0.5769 |
| 0.5569 | 13.0 | 806 | 0.5731 |
| 0.5518 | 14.0 | 868 | 0.5731 |
| 0.5486 | 15.0 | 930 | 0.5728 |
| 0.544 | 16.0 | 992 | 0.5683 |
| 0.5336 | 17.0 | 1054 | 0.5694 |
| 0.5245 | 18.0 | 1116 | 0.5639 |
| 0.5162 | 19.0 | 1178 | 0.5641 |
| 0.5057 | 20.0 | 1240 | 0.5626 |
| 0.4966 | 21.0 | 1302 | 0.5612 |
| 0.4859 | 22.0 | 1364 | 0.5492 |
| 0.4781 | 23.0 | 1426 | 0.5470 |
| 0.4601 | 24.0 | 1488 | 0.5399 |
| 0.4523 | 25.0 | 1550 | 0.5424 |
| 0.4432 | 26.0 | 1612 | 0.5328 |
| 0.4341 | 27.0 | 1674 | 0.5336 |
| 0.4183 | 28.0 | 1736 | 0.5315 |
| 0.4133 | 29.0 | 1798 | 0.5268 |
| 0.4111 | 30.0 | 1860 | 0.5256 |
| 0.3919 | 31.0 | 1922 | 0.5155 |
| 0.3899 | 32.0 | 1984 | 0.5179 |
| 0.3804 | 33.0 | 2046 | 0.5145 |
| 0.368 | 34.0 | 2108 | 0.5189 |
| 0.3603 | 35.0 | 2170 | 0.5081 |
| 0.3602 | 36.0 | 2232 | 0.5098 |
| 0.352 | 37.0 | 2294 | 0.5054 |
| 0.3468 | 38.0 | 2356 | 0.5024 |
| 0.3359 | 39.0 | 2418 | 0.5053 |
| 0.3342 | 40.0 | 2480 | 0.5031 |
| 0.3294 | 41.0 | 2542 | 0.4978 |
| 0.3158 | 42.0 | 2604 | 0.4923 |
| 0.3191 | 43.0 | 2666 | 0.4944 |
| 0.3122 | 44.0 | 2728 | 0.4970 |
| 0.3084 | 45.0 | 2790 | 0.4910 |
| 0.2978 | 46.0 | 2852 | 0.4898 |
| 0.3012 | 47.0 | 2914 | 0.4880 |
| 0.2938 | 48.0 | 2976 | 0.4924 |
| 0.2932 | 49.0 | 3038 | 0.4879 |
| 0.2842 | 50.0 | 3100 | 0.4847 |
| 0.2828 | 51.0 | 3162 | 0.4849 |
| 0.2793 | 52.0 | 3224 | 0.4767 |
| 0.2753 | 53.0 | 3286 | 0.4796 |
| 0.2725 | 54.0 | 3348 | 0.4829 |
| 0.2695 | 55.0 | 3410 | 0.4831 |
| 0.2671 | 56.0 | 3472 | 0.4791 |
| 0.2664 | 57.0 | 3534 | 0.4791 |
| 0.2563 | 58.0 | 3596 | 0.4765 |
| 0.2583 | 59.0 | 3658 | 0.4742 |
| 0.2535 | 60.0 | 3720 | 0.4766 |
| 0.2496 | 61.0 | 3782 | 0.4741 |
| 0.2489 | 62.0 | 3844 | 0.4766 |
| 0.2444 | 63.0 | 3906 | 0.4748 |
| 0.2417 | 64.0 | 3968 | 0.4768 |
| 0.2422 | 65.0 | 4030 | 0.4727 |
| 0.2404 | 66.0 | 4092 | 0.4729 |
| 0.2405 | 67.0 | 4154 | 0.4744 |
| 0.2353 | 68.0 | 4216 | 0.4729 |
| 0.2307 | 69.0 | 4278 | 0.4705 |
| 0.2281 | 70.0 | 4340 | 0.4717 |
| 0.232 | 71.0 | 4402 | 0.4719 |
| 0.2313 | 72.0 | 4464 | 0.4713 |
| 0.2266 | 73.0 | 4526 | 0.4726 |
| 0.2241 | 74.0 | 4588 | 0.4675 |
| 0.2256 | 75.0 | 4650 | 0.4688 |
| 0.2299 | 76.0 | 4712 | 0.4713 |
| 0.2199 | 77.0 | 4774 | 0.4720 |
| 0.2228 | 78.0 | 4836 | 0.4682 |
| 0.2261 | 79.0 | 4898 | 0.4676 |
| 0.2167 | 80.0 | 4960 | 0.4685 |
| 0.2126 | 81.0 | 5022 | 0.4676 |
| 0.2217 | 82.0 | 5084 | 0.4672 |
| 0.216 | 83.0 | 5146 | 0.4672 |
| 0.2152 | 84.0 | 5208 | 0.4682 |
| 0.219 | 85.0 | 5270 | 0.4663 |
| 0.2135 | 86.0 | 5332 | 0.4655 |
| 0.2046 | 87.0 | 5394 | 0.4644 |
| 0.2177 | 88.0 | 5456 | 0.4679 |
| 0.2052 | 89.0 | 5518 | 0.4659 |
| 0.2147 | 90.0 | 5580 | 0.4665 |
| 0.211 | 91.0 | 5642 | 0.4668 |
| 0.2089 | 92.0 | 5704 | 0.4649 |
| 0.2149 | 93.0 | 5766 | 0.4651 |
| 0.2034 | 94.0 | 5828 | 0.4689 |
| 0.2071 | 95.0 | 5890 | 0.4659 |
| 0.2145 | 96.0 | 5952 | 0.4664 |
| 0.2036 | 97.0 | 6014 | 0.4661 |
| 0.2092 | 98.0 | 6076 | 0.4676 |
| 0.2079 | 99.0 | 6138 | 0.4667 |
| 0.2081 | 100.0 | 6200 | 0.4668 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
KLeedrug/EMO_demo_00
|
KLeedrug
| 2022-08-09T19:17:22Z | 0 | 0 | null |
[
"text-classification",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2022-08-09T16:26:58Z |
---
license: apache-2.0
tags:
- text-classification
widget:
- text: "This love has taken its toll on me"
example_title: "sadness"
---
# EMO demo 00
## TODO
### Integrate with EMO_AI
### Add the pretrained weights here
|
asvs/qs-classifier
|
asvs
| 2022-08-09T18:54:13Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-09T18:00:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: qs-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qs-classifier
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 447 | 0.0416 | 0.9910 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
href/gpt2-schiappa
|
href
| 2022-08-09T17:51:47Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:unknown",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-09T17:30:28Z |
---
license: unknown
---
# Schiappa-Minelli GPT-2
Why not?
## Dataset
- Marianne est déchainée, by Marlène Schiappa
- Osez les sexfriends, by Marie Minelli
- Osez réussir votre divorce, by Marie Minelli
- Sexe, mensonge et banlieues chaudes, by Marie Minelli
## Versions
V1:
- Fine-tuned with [Max Woolf's "aitextgen — Train a GPT-2 (or GPT Neo)" colab](https://colab.research.google.com/drive/15qBZx5y9rdaQSyWpsreMDnTiZ5IlN0zD?usp=sharing)
- Starting from the 124M GPT-2 model [aquadzn/gpt2-french](https://github.com/aquadzn/gpt2-french/), "romans" (novels) version.
- ~50 minutes on Colab Pro, P100 GPU, 3 batches, 500 steps
|
NovelAI/genji-jp
|
NovelAI
| 2022-08-09T17:36:02Z | 28 | 52 |
transformers
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"causal-lm",
"ja",
"en",
"arxiv:2104.09864",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
---
language:
- ja
- en
tags:
- pytorch
- causal-lm
license: apache-2.0
---
# Genji-JP 6B
Please check our blog post for more details, samples, evaluations and more:
[Blogpost](https://blog.novelai.net/data-efficient-language-transfer-with-gpt-j-45daedaaf35a)
## Model Description
Genji-JP 6B is a model finetuned on our Japanese storytelling dataset based on EleutherAI's GPT-J 6B model. This particular model is trained on Japanese web novels.
| Hyperparameter | Value |
|-------------------|--------|
| n_parameters | 6,053,381,344 |
| n_layers | 28* |
| d_model | 4,096 |
| d_ff | 16,384 |
| n_heads | 16 |
| d_head | 256 |
| n_ctx | 2,048 |
| n_vocab | 50,400 (same tokenizer as GPT-2/3) |
| position encoding | [Rotary position encodings (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
`*` each layer consists of one feedforward block and one self attention block
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary position encodings (RoPE) were applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
## Training data
GPT-J 6B was pretrained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by EleutherAI for the purpose of training this model. After the pre-training, it was finetuned on our Japanese storytelling dataset. Check our blog post for more details.
### How to use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("NovelAI/genji-jp", torch_dtype=torch.float16, low_cpu_mem_usage=True).eval().cuda()
text = '''あらすじ:あなたは異世界に転生してしまいました。勇者となって、仲間を作り、異世界を冒険しよう!
***
転生すると、ある能力を手に入れていた。それは、'''
tokens = tokenizer(text, return_tensors="pt").input_ids
generated_tokens = model.generate(tokens.long().cuda(), use_cache=True, do_sample=True, temperature=1, top_p=0.9, repetition_penalty=1.125, min_length=1, max_length=len(tokens[0]) + 400, pad_token_id=tokenizer.eos_token_id)
last_tokens = generated_tokens[0]
generated_text = tokenizer.decode(last_tokens).replace("�", "")
print("Generation:\n" + generated_text)
```
When run, it produces output like this:
```
Generation:
あらすじ:あなたは異世界に転生してしまいました。勇者となって、仲間を作り、異世界を冒険しよう!
***
転生すると、ある能力を手に入れていた。それは、『予知』だ。過去から未来のことを、誰も知らない出来事も含めて見通すことが出来る。
悪魔の欠片と呼ばれる小さな結晶を取り込んで、使役することが出来る。人を惹きつけ、堕落させる。何より、俺は男なんて居なかったし、女に興味もない。……そんなクズの片棒を担ぎ上げる奴が多くなると思うと、ちょっと苦しい。
だが、一部の人間には協力者を得ることが出来る。目立たない街にある寺の中で、常に家に引きこもっている老人。そんなヤツの魂をコントロールすることが出来るのだ。便利な能力だ。しかし、裏切り者は大勢いる。気を抜けば、狂う。だから注意が必要だ。
――「やってやるよ」
アーロンは不敵に笑った。この
```
## Acknowledgements
This project was possible because of the compute provided by the
[TPU Research Cloud](https://sites.research.google/trc/)
Thanks [EleutherAI](https://eleuther.ai/) for pretraining the GPT-J 6B model.
Thanks to everyone who contributed to this project!
- [Finetune](https://github.com/finetuneanon)
- [Aero](https://github.com/AeroScripts)
- [Kurumuz](https://github.com/kurumuz)
|
LilOpa/LunarLanderPPO_new-2
|
LilOpa
| 2022-08-09T17:26:45Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-09T17:26:18Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 256.20 +/- 33.86
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
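Until the author fills in the snippet above, here is a minimal loading-and-evaluation sketch. The checkpoint filename is an assumption, so check the repository's file list before running:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# NOTE: the filename below is an assumption -- verify it against the files in this repo.
checkpoint = load_from_hub(repo_id="LilOpa/LunarLanderPPO_new-2", filename="LunarLanderPPO_new-2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```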
|
Jinchen/bart-base-finetuned-en-to-ro
|
Jinchen
| 2022-08-09T17:21:22Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"optimum_graphcore",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-09T13:38:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
model-index:
- name: bart-base-finetuned-en-to-ro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-en-to-ro
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9912
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 128
- total_train_batch_size: 512
- total_eval_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- training precision: Mixed Precision
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5409 | 1.0 | 1192 | 1.9912 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.10.0+cpu
- Datasets 2.4.0
- Tokenizers 0.12.1
|
SmartPy/xlm-roberta-base-finetuned-my_dear_watson
|
SmartPy
| 2022-08-09T17:19:29Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-09T16:57:36Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-finetuned-my_dear_watson
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-my_dear_watson
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8264
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7977 | 1.0 | 240 | 1.9607 |
| 2.0249 | 2.0 | 480 | 1.8608 |
| 1.9661 | 3.0 | 720 | 1.8150 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AG16/ppo-LunarLander-v2
|
AG16
| 2022-08-09T17:13:55Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-09T17:13:23Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 180.88 +/- 15.22
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
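As a stand-in for the placeholder above, a hedged loading sketch (the checkpoint filename is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# NOTE: the filename is an assumption -- verify it against the repository contents.
checkpoint = load_from_hub(repo_id="AG16/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)  # ready for evaluation or rendering in LunarLander-v2
```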
|
spacemanidol/esci-jp-mpnet-crossencoder
|
spacemanidol
| 2022-08-09T16:27:15Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-08-09T16:21:34Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 6369 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
spacemanidol/esci-es-mpnet-crossencoder
|
spacemanidol
| 2022-08-09T16:26:40Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-08-09T16:21:43Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4643 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Rocketknight1/student_marian_en_ro_6_1
|
Rocketknight1
| 2022-08-09T15:46:13Z | 15 | 0 |
transformers
|
[
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-09T15:42:56Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: student_marian_en_ro_6_1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# student_marian_en_ro_6_1
This model is a fine-tuned version of [sshleifer/student_marian_en_ro_6_1](https://huggingface.co/sshleifer/student_marian_en_ro_6_1) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.22.0.dev0
- TensorFlow 2.9.1
- Datasets 2.4.1.dev0
- Tokenizers 0.11.0
|
HUPD/hupd-distilroberta-base
|
HUPD
| 2022-08-09T15:22:00Z | 31 | 2 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"hupd",
"distilroberta",
"patents",
"en",
"dataset:HUPD/hupd",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-05T07:41:29Z |
---
language:
- en
thumbnail: "url to a thumbnail used in social sharing"
tags:
- hupd
- roberta
- distilroberta
- patents
license: cc-by-sa-4.0
datasets:
- HUPD/hupd
---
# HUPD DistilRoBERTa-Base Model
This HUPD DistilRoBERTa model was fine-tuned on the HUPD dataset with a masked language modeling objective. It was originally introduced in [this paper](TBD).
For more information about the Harvard USPTO Patent Dataset, please feel free to visit the [project website](https://patentdataset.org/) or the [project's GitHub repository](https://github.com/suzgunmirac/hupd).
### How to Use
You can use this model directly with a pipeline for masked language modeling:
```python
from transformers import pipeline
model = pipeline(task="fill-mask", model="hupd/hupd-distilroberta-base")
model("Improved <mask> for playing a game of thumb wrestling.")
```
Here is the output:
```python
[{'score': 0.4274042248725891,
'sequence': 'Improved method for playing a game of thumb wrestling.',
'token': 5448,
'token_str': ' method'},
{'score': 0.06967400759458542,
'sequence': 'Improved system for playing a game of thumb wrestling.',
'token': 467,
'token_str': ' system'},
{'score': 0.06849079579114914,
'sequence': 'Improved device for playing a game of thumb wrestling.',
'token': 2187,
'token_str': ' device'},
{'score': 0.04544765502214432,
'sequence': 'Improved apparatus for playing a game of thumb wrestling.',
'token': 26529,
'token_str': ' apparatus'},
{'score': 0.025765646249055862,
'sequence': 'Improved means for playing a game of thumb wrestling.',
'token': 839,
'token_str': ' means'}]
```
Alternatively, you can load the model and use it as follows:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
# cuda/cpu
device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = AutoTokenizer.from_pretrained("hupd/hupd-distilroberta-base")
model = AutoModelForMaskedLM.from_pretrained("hupd/hupd-distilroberta-base").to(device)
TEXT = "Improved <mask> for playing a game of thumb wrestling."
inputs = tokenizer(TEXT, return_tensors="pt").to(device)
with torch.no_grad():
logits = model(**inputs).logits
# retrieve indices of <mask>
mask_token_indxs = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
for mask_idx in mask_token_indxs:
predicted_token_id = logits[0, mask_idx].argmax(axis=-1)
output = tokenizer.decode(predicted_token_id)
print(f'Prediction for the <mask> token at index {mask_idx}: "{output}"')
```
Here is the output:
```python
Prediction for the <mask> token at index 2: " method"
```
## Citation
For more information, please take a look at the original paper.
* Paper: [The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications](TBD)
* Authors: *Mirac Suzgun, Luke Melas-Kyriazi, Suproteem K. Sarkar, Scott Duke Kominers, and Stuart M. Shieber*
* BibTeX:
```
@article{suzgun2022hupd,
title={The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications},
author={Suzgun, Mirac and Melas-Kyriazi, Luke and Sarkar, Suproteem K and Kominers, Scott and Shieber, Stuart},
year={2022}
}
```
|
workRL/cleanPPOLunar
|
workRL
| 2022-08-09T14:22:58Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-09T14:22:51Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -121.56 +/- 32.57
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
To learn to code your own PPO agent and train it, check out Unit 8 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit8
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'workRL/cleanPPOLunar',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
scott-ml/ppo-lunarlander-v2
|
scott-ml
| 2022-08-09T13:15:20Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-09T12:44:02Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: MlpPolicy_ppo
results:
- metrics:
- type: mean_reward
value: 229.65 +/- 24.04
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **MlpPolicy_ppo** Agent playing **LunarLander-v2**
This is a trained model of a **MlpPolicy_ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
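A minimal loading sketch to complement the placeholder above; the checkpoint filename is a guess based on the model name and should be verified against the repository:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# NOTE: filename assumed from the model name -- check the repo's file list.
checkpoint = load_from_hub(repo_id="scott-ml/ppo-lunarlander-v2", filename="MlpPolicy_ppo.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```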
|
annahaz/xlm-roberta-base-misogyny-sexism-decay0.05-fr-indomain
|
annahaz
| 2022-08-09T12:08:11Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-09T11:09:06Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: xlm-roberta-base-misogyny-sexism-decay0.05-fr-indomain
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-misogyny-sexism-decay0.05-fr-indomain
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2200
- Accuracy: 0.8708
- F1: 0.0040
- Precision: 0.1
- Recall: 0.0021
- Mae: 0.1292
## Model description
More information needed
## Intended uses & limitations
More information needed
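A minimal inference sketch, assuming the standard `text-classification` pipeline; the label names come from the model's `config.json` and are not documented in this card:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="annahaz/xlm-roberta-base-misogyny-sexism-decay0.05-fr-indomain",
)
print(classifier("Example sentence to score."))  # e.g. [{'label': ..., 'score': ...}]
```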
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| 0.2085 | 1.0 | 2302 | 1.0000 | 0.8674 | 0.0212 | 0.1719 | 0.0113 | 0.1326 |
| 0.18 | 2.0 | 4604 | 1.0566 | 0.8527 | 0.0157 | 0.0520 | 0.0093 | 0.1473 |
| 0.1614 | 3.0 | 6906 | 1.1284 | 0.8673 | 0.0020 | 0.0222 | 0.0010 | 0.1327 |
| 0.1428 | 4.0 | 9208 | 1.1329 | 0.8714 | 0.0020 | 0.0714 | 0.0010 | 0.1286 |
| 0.1467 | 5.0 | 11510 | 1.1814 | 0.8708 | 0.0040 | 0.1 | 0.0021 | 0.1292 |
| 0.1375 | 6.0 | 13812 | 1.2020 | 0.8706 | 0.0020 | 0.05 | 0.0010 | 0.1294 |
| 0.1093 | 7.0 | 16114 | 1.2200 | 0.8708 | 0.0040 | 0.1 | 0.0021 | 0.1292 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
DennisSoemers/q-Taxi-v3
|
DennisSoemers
| 2022-08-09T11:54:06Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-09T11:54:01Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="DennisSoemers/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
DennisSoemers/q-FrozenLake-v1-4x4-noSlippery
|
DennisSoemers
| 2022-08-09T11:51:21Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-09T11:51:16Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="DennisSoemers/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
nvidia/segformer-b2-finetuned-cityscapes-1024-1024
|
nvidia
| 2022-08-09T11:34:43Z | 2,153 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"segformer",
"vision",
"image-segmentation",
"dataset:cityscapes",
"arxiv:2105.15203",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2022-03-02T23:29:05Z |
---
license: other
tags:
- vision
- image-segmentation
datasets:
- cityscapes
widget:
- src: https://cdn-media.huggingface.co/Inference-API/Sample-results-on-the-Cityscapes-dataset-The-above-images-show-how-our-method-can-handle.png
example_title: Road
---
# SegFormer (b2-sized) model fine-tuned on CityScapes
SegFormer model fine-tuned on CityScapes at resolution 1024x1024. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b2-finetuned-cityscapes-1024-1024")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b2-finetuned-cityscapes-1024-1024")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
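The logits come out at 1/4 of the input resolution, so a common follow-up step (not prescribed by this card) is to upsample them to the original image size and take the per-pixel argmax:
```python
import torch

# Continues the snippet above: upsample logits to the input image size, then argmax per pixel.
upsampled_logits = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
predicted_segmentation = upsampled_logits.argmax(dim=1)[0]  # (height, width) class indices
```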
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
nvidia/segformer-b0-finetuned-cityscapes-512-1024
|
nvidia
| 2022-08-09T11:34:31Z | 679 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"segformer",
"vision",
"image-segmentation",
"dataset:cityscapes",
"arxiv:2105.15203",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2022-03-02T23:29:05Z |
---
license: other
tags:
- vision
- image-segmentation
datasets:
- cityscapes
widget:
- src: https://cdn-media.huggingface.co/Inference-API/Sample-results-on-the-Cityscapes-dataset-The-above-images-show-how-our-method-can-handle.png
example_title: road
---
# SegFormer (b0-sized) model fine-tuned on CityScapes
SegFormer model fine-tuned on CityScapes at resolution 512x1024. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-512-1024")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-512-1024")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
nvidia/segformer-b0-finetuned-cityscapes-640-1280
|
nvidia
| 2022-08-09T11:33:34Z | 57 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"segformer",
"vision",
"image-segmentation",
"dataset:cityscapes",
"arxiv:2105.15203",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2022-03-02T23:29:05Z |
---
license: other
tags:
- vision
- image-segmentation
datasets:
- cityscapes
widget:
- src: https://cdn-media.huggingface.co/Inference-API/Sample-results-on-the-Cityscapes-dataset-The-above-images-show-how-our-method-can-handle.png
example_title: road
---
# SegFormer (b0-sized) model fine-tuned on CityScapes
SegFormer model fine-tuned on CityScapes at resolution 640x1280. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-640-1280")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-640-1280")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
nvidia/segformer-b0-finetuned-cityscapes-768-768
|
nvidia
| 2022-08-09T11:33:19Z | 389 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"segformer",
"vision",
"image-segmentation",
"dataset:cityscapes",
"arxiv:2105.15203",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2022-03-02T23:29:05Z |
---
license: other
tags:
- vision
- image-segmentation
datasets:
- cityscapes
widget:
- src: https://cdn-media.huggingface.co/Inference-API/Sample-results-on-the-Cityscapes-dataset-The-above-images-show-how-our-method-can-handle.png
example_title: Road
---
# SegFormer (b0-sized) model fine-tuned on CityScapes
SegFormer model fine-tuned on CityScapes at resolution 768x768. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-768-768")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-768-768")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
nvidia/segformer-b1-finetuned-cityscapes-1024-1024
|
nvidia
| 2022-08-09T11:33:04Z | 10,511 | 13 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"segformer",
"vision",
"image-segmentation",
"dataset:cityscapes",
"arxiv:2105.15203",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2022-03-02T23:29:05Z |
---
license: other
tags:
- vision
- image-segmentation
datasets:
- cityscapes
widget:
- src: https://cdn-media.huggingface.co/Inference-API/Sample-results-on-the-Cityscapes-dataset-The-above-images-show-how-our-method-can-handle.png
example_title: Road
---
# SegFormer (b1-sized) model fine-tuned on CityScapes
SegFormer model fine-tuned on CityScapes at resolution 1024x1024. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b1-finetuned-cityscapes-1024-1024")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b1-finetuned-cityscapes-1024-1024")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
dminiotas05/distilbert-base-uncased-finetuned-ft1500_norm500_aug2-3
|
dminiotas05
| 2022-08-09T11:07:30Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-09T10:06:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ft1500_norm500_aug2-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ft1500_norm500_aug2-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5766
- Mse: 5.1532
- Mae: 1.3526
- R2: -0.0072
- Accuracy: 0.4734
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:-------:|:--------:|
| 1.0562 | 1.0 | 15533 | 2.5766 | 5.1532 | 1.3526 | -0.0072 | 0.4734 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Mahmoud7/dqn-SpaceInvadersNoFrameskip-v4
|
Mahmoud7
| 2022-08-09T09:51:26Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-09T09:50:56Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 374.00 +/- 214.89
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Mahmoud7 -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Mahmoud7
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Jinchen/t5-small-finetuned-xsum
|
Jinchen
| 2022-08-09T09:05:05Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"optimum_graphcore",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-08T15:30:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- total_eval_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- training precision: Mixed Precision
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8115 | 1.0 | 3188 | 2.5273 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.10.0+rocm4.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
apurva19/q-Taxi-v3
|
apurva19
| 2022-08-09T09:02:20Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-09T08:52:14Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="apurva19/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
apurva19/q-FrozenLake-v1-4x4-noSlippery
|
apurva19
| 2022-08-09T08:24:44Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-09T08:15:09Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="apurva19/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
zboxi7/finetuning-sentiment-model-3000-samples_fr
|
zboxi7
| 2022-08-09T07:01:17Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-09T06:50:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuning-sentiment-model-3000-samples_fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples_fr
This model is a fine-tuned version of [zboxi7/finetuning-sentiment-model-3000-samples](https://huggingface.co/zboxi7/finetuning-sentiment-model-3000-samples) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4052
- Accuracy: 0.7033
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ultra-coder54732/relation-detection-prop-16-train-set
|
ultra-coder54732
| 2022-08-09T06:49:43Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-09T05:47:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: relation-detection-prop-16-train-set
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# relation-detection-prop-16-train-set
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
parnyanp/distilbert-base-uncased-finetuned-emotion
|
parnyanp
| 2022-08-09T06:31:38Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-06T06:44:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9275
- name: F1
type: f1
value: 0.9274815041868594
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2182
- Accuracy: 0.9275
- F1: 0.9275
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8403 | 1.0 | 250 | 0.3135 | 0.9065 | 0.9031 |
| 0.2525 | 2.0 | 500 | 0.2182 | 0.9275 | 0.9275 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
zboxi7/finetuning-sentiment-model-3000-samples
|
zboxi7
| 2022-08-09T06:30:22Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-06T19:21:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1460
- Accuracy: 0.75
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
apurva19/ppo-LunarLander-v2
|
apurva19
| 2022-08-09T06:24:55Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-03T16:45:50Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 257.16 +/- 17.17
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
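A minimal loading-and-evaluation sketch (the checkpoint filename below is an assumption — check this repo's file list for the actual `.zip`):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Hypothetical filename; replace with the .zip actually stored in this repo.
checkpoint = load_from_hub(repo_id="apurva19/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```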
|
xusysh/ppo-LunarLander-v2
|
xusysh
| 2022-08-09T06:08:34Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-09T06:06:03Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 182.89 +/- 52.91
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
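A minimal sketch of loading and evaluating the checkpoint; the filename is assumed and should be matched to the file actually stored in this repo:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Hypothetical filename; replace with the .zip actually stored in this repo.
checkpoint = load_from_hub(repo_id="xusysh/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```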
|
alex-apostolo/legal-bert-base-cuad
|
alex-apostolo
| 2022-08-09T04:46:44Z | 25 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:cuad",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-08-07T09:36:34Z |
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
datasets:
- cuad
model-index:
- name: legal-bert-base-uncased-filtered-cuad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legal-bert-base-uncased-filtered-cuad
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the cuad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0394 | 1.0 | 31626 | 0.0265 |
| 0.0272 | 2.0 | 63252 | 0.0237 |
| 0.021 | 3.0 | 94878 | 0.0259 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Farshid/roberta-large-financial-phrasebank-allagree1
|
Farshid
| 2022-08-09T02:16:17Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:financial_phrasebank",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-08T19:16:12Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- financial_phrasebank
metrics:
- accuracy
- f1
model-index:
- name: roberta-large-financial-phrasebank-allagree1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: financial_phrasebank
type: financial_phrasebank
config: sentences_allagree
split: train
args: sentences_allagree
metrics:
- name: Accuracy
type: accuracy
value: 0.9734513274336283
- name: F1
type: f1
value: 0.9736033872259027
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-financial-phrasebank-allagree1
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1417
- Accuracy: 0.9735
- F1: 0.9736
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.503 | 1.0 | 227 | 0.2774 | 0.9513 | 0.9517 |
| 0.177 | 2.0 | 454 | 0.1518 | 0.9779 | 0.9778 |
| 0.0789 | 3.0 | 681 | 0.1364 | 0.9823 | 0.9822 |
| 0.0512 | 4.0 | 908 | 0.1131 | 0.9779 | 0.9778 |
| 0.03 | 5.0 | 1135 | 0.1417 | 0.9735 | 0.9736 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
deploy-hf-tf-vit/vit-base16-extended
|
deploy-hf-tf-vit
| 2022-08-09T01:43:37Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-08-08T14:59:35Z |
---
license: apache-2.0
---
This repository houses an extended version of the [ViT Base/16 model from 🤗 Transformers](https://huggingface.co/docs/transformers/main/en/model_doc/vit). In particular, it provides the following:
* A `SavedModel` that has the preprocessing and postprocessing operations embedded inside the computation graph of the model.
* A `tar` archive of the SavedModel.
Please refer to the following blog post to know how the SavedModel was exported: [Deploying TensorFlow Vision Models in Hugging Face with TF Serving](https://huggingface.co/blog/tf-serving-vision).
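As a minimal loading sketch in TensorFlow (the local path and the `serving_default` signature name are assumptions — adjust them to the files you download or extract from this repo):
```python
import tensorflow as tf

# Hypothetical local path to the downloaded (or untarred) SavedModel directory.
model = tf.saved_model.load("./vit-base16-extended/saved_model")

# Exported serving graphs usually expose a "serving_default" signature; inspecting it
# shows the inputs/outputs with preprocessing and postprocessing already embedded.
serving_fn = model.signatures["serving_default"]
print(serving_fn.structured_input_signature)
print(serving_fn.structured_outputs)
```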
|
SharpAI/mal-tls-t5-l12
|
SharpAI
| 2022-08-09T01:16:57Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-08T23:48:12Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: mal-tls-t5-l12
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mal-tls-t5-l12
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
TheLitttleThings/clip-archdaily-5k
|
TheLitttleThings
| 2022-08-09T01:14:34Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"clip",
"zero-shot-image-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
zero-shot-image-classification
| 2022-08-05T12:31:23Z |
---
tags:
- generated_from_trainer
model-index:
- name: clip-archdaily-5k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-archdaily-5k
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5681
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 56
- eval_batch_size: 56
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.533 | 0.34 | 200 | 2.4607 |
| 2.1012 | 0.68 | 400 | 1.9922 |
| 1.6059 | 1.02 | 600 | 1.7986 |
| 1.4557 | 1.36 | 800 | 1.6130 |
| 1.4268 | 1.7 | 1000 | 1.4073 |
| 0.8588 | 2.04 | 1200 | 1.2657 |
| 0.8191 | 2.38 | 1400 | 1.1214 |
| 0.81 | 2.72 | 1600 | 1.0418 |
| 0.5546 | 3.06 | 1800 | 0.9735 |
| 0.4905 | 3.4 | 2000 | 0.9006 |
| 0.5209 | 3.74 | 2200 | 0.8762 |
| 0.3127 | 4.08 | 2400 | 0.8457 |
| 0.3145 | 4.42 | 2600 | 0.7886 |
| 0.3265 | 4.76 | 2800 | 0.7853 |
| 0.2215 | 5.1 | 3000 | 0.7309 |
| 0.2351 | 5.44 | 3200 | 0.7082 |
| 0.2332 | 5.78 | 3400 | 0.6770 |
| 0.1793 | 6.12 | 3600 | 0.6856 |
| 0.1617 | 6.46 | 3800 | 0.6470 |
| 0.1468 | 6.8 | 4000 | 0.6700 |
| 0.1293 | 7.14 | 4200 | 0.6460 |
| 0.1257 | 7.48 | 4400 | 0.6415 |
| 0.0975 | 7.82 | 4600 | 0.6454 |
| 0.0835 | 8.16 | 4800 | 0.6111 |
| 0.0856 | 8.5 | 5000 | 0.6124 |
| 0.0887 | 8.84 | 5200 | 0.5956 |
| 0.069 | 9.18 | 5400 | 0.5877 |
| 0.0625 | 9.52 | 5600 | 0.5798 |
| 0.0599 | 9.86 | 5800 | 0.5681 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
theojolliffe/bart-paraphrase-v8-e1
|
theojolliffe
| 2022-08-09T00:31:37Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-08T21:40:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-paraphrase-v8-e1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-paraphrase-v8-e1
This model is a fine-tuned version of [eugenesiow/bart-paraphrase](https://huggingface.co/eugenesiow/bart-paraphrase) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1597
- Rouge1: 73.0494
- Rouge2: 70.2389
- Rougel: 72.0086
- Rougelsum: 72.1
- Gen Len: 19.7365
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.0312 | 1.0 | 28370 | 0.1597 | 73.0494 | 70.2389 | 72.0086 | 72.1 | 19.7365 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
SharpAI/mal-tls-t5-l3
|
SharpAI
| 2022-08-08T22:55:45Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-08T22:44:04Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: mal-tls-t5-l3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mal-tls-t5-l3
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
theojolliffe/bart-paraphrase-v4-e1
|
theojolliffe
| 2022-08-08T22:42:20Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-06T21:43:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-paraphrase-v4-e1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-paraphrase-v4-e1
This model is a fine-tuned version of [eugenesiow/bart-paraphrase](https://huggingface.co/eugenesiow/bart-paraphrase) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1318
- Rouge1: 73.1451
- Rouge2: 69.0788
- Rougel: 71.9928
- Rougelsum: 72.1526
- Gen Len: 19.3423
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.0476 | 1.0 | 14185 | 0.1318 | 73.1451 | 69.0788 | 71.9928 | 72.1526 | 19.3423 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
huggingtweets/apesahoy-dril-dril9999-dril_gpt2-gptmicrofic-tanakhbot
|
huggingtweets
| 2022-08-08T21:52:04Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-08T21:48:49Z |
---
language: en
thumbnail: http://www.huggingtweets.com/apesahoy-dril-dril9999-dril_gpt2-gptmicrofic-tanakhbot/1659995519837/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1196519479364268034/5QpniWSP_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1326378564187529216/a9fuWw48_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1261895681561804800/r6vOZGoH_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Humongous Ape MP & tanakhbot & GPT2-Microfic & MORTIMUS COWBOY: The Bastard of Diapers & wint & wint but Al</div>
<div style="text-align: center; font-size: 14px;">@apesahoy-dril-dril9999-dril_gpt2-gptmicrofic-tanakhbot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Humongous Ape MP & tanakhbot & GPT2-Microfic & MORTIMUS COWBOY: The Bastard of Diapers & wint & wint but Al.
| Data | Humongous Ape MP | tanakhbot | GPT2-Microfic | MORTIMUS COWBOY: The Bastard of Diapers | wint | wint but Al |
| --- | --- | --- | --- | --- | --- | --- |
| Tweets downloaded | 3245 | 565 | 1158 | 3249 | 3226 | 3229 |
| Retweets | 197 | 0 | 11 | 0 | 497 | 47 |
| Short tweets | 610 | 1 | 9 | 143 | 287 | 57 |
| Tweets kept | 2438 | 564 | 1138 | 3106 | 2442 | 3125 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2rmkgg2i/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @apesahoy-dril-dril9999-dril_gpt2-gptmicrofic-tanakhbot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/6iovvvgz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/6iovvvgz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/apesahoy-dril-dril9999-dril_gpt2-gptmicrofic-tanakhbot')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Kowsher/bangla-bert
|
Kowsher
| 2022-08-08T21:21:38Z | 20 | 4 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"Bert base Bangla",
"Bengali Bert",
"Bengali lm",
"Bangla Base Bert",
"Bangla Bert language model",
"Bangla Bert",
"bn",
"arxiv:1810.04805",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language: bn
tags:
- Bert base Bangla
- Bengali Bert
- Bengali lm
- Bangla Base Bert
- Bangla Bert language model
- Bangla Bert
datasets:
- BanglaLM dataset
---
# Bangla BERT Base
Here we publish a pretrained Bangla BERT language model, **bangla-bert**, which is now available on the Hugging Face model hub.
[bangla-bert](https://github.com/Kowsher/bert-base-bangla) is a pretrained Bangla language model trained with the masked language modeling objective described in [BERT](https://arxiv.org/abs/1810.04805) and its GitHub [repository](https://github.com/google-research/bert).
## Corpus Details
We trained the Bangla BERT language model on the BanglaLM dataset from Kaggle: [BanglaLM](https://www.kaggle.com/gakowsher/bangla-language-model-dataset). There are three versions of the dataset, totalling almost 40 GB.
After downloading the dataset, we proceeded with masked language model pretraining.
**bangla-bert Tokenizer**
```py
from transformers import AutoTokenizer, AutoModel
bnbert_tokenizer = AutoTokenizer.from_pretrained("Kowsher/bangla-bert")
text = "খাঁটি সোনার চাইতে খাঁটি আমার দেশের মাটি"
bnbert_tokenizer.tokenize(text)
# output: ['খাটি', 'সে', '##ানার', 'চাইতে', 'খাটি', 'আমার', 'দেশের', 'মাটি']
```
**MASK Generation**
Here, we can use the Bangla BERT base model for masked language modeling:
```py
from transformers import BertForMaskedLM, BertTokenizer, pipeline
model = BertForMaskedLM.from_pretrained("Kowsher/bangla-bert")
tokenizer = BertTokenizer.from_pretrained("Kowsher/bangla-bert")
nlp = pipeline('fill-mask', model=model, tokenizer=tokenizer)
for pred in nlp(f"আমি বাংলার গান {nlp.tokenizer.mask_token}"):
print(pred)
# {'sequence': 'আমি বাংলার গান লিখি', 'score': 0.17955434322357178, 'token': 24749, 'token_str': 'লিখি'}
nlp = pipeline('fill-mask', model=model, tokenizer=tokenizer)
for pred in nlp(f"তুই রাজাকার তুই {nlp.tokenizer.mask_token}"):
print(pred)
# {'sequence': 'তুই রাজাকার তুই রাজাকার', 'score': 0.9975168704986572, 'token': 13401, 'token_str': 'রাজাকার'}
nlp = pipeline('fill-mask', model=model, tokenizer=tokenizer)
for pred in nlp(f"বাংলা আমার {nlp.tokenizer.mask_token}"):
print(pred)
# {'sequence': 'বাংলা আমার অহংকার', 'score': 0.5679506063461304, 'token': 19009, 'token_str': 'অহংকার'}
```
**Cite this work**
M. Kowsher, A. A. Sami, N. J. Prottasha, M. S. Arefin, P. K. Dhar and T. Koshiba, "Bangla-BERT: Transformer-based Efficient Model for Transfer Learning and Language Understanding," in IEEE Access, 2022, doi: 10.1109/ACCESS.2022.3197662.
## Author
[Kowsher](http://kowsher.org/)
|
DavidNovikov/ddpm-butterflies-128
|
DavidNovikov
| 2022-08-08T21:05:11Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-08-08T20:22:07Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
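A minimal sampling sketch with 🤗 Diffusers (the `.images` accessor assumes a reasonably recent diffusers release):
```python
from diffusers import DDPMPipeline

# Unconditional DDPM sampling of 128x128 butterfly images.
pipeline = DDPMPipeline.from_pretrained("DavidNovikov/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("butterfly.png")
```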
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/DavidNovikov/ddpm-butterflies-128/tensorboard?#scalars)
|
Izarel/distilbert-base-uncased_fine_tuned
|
Izarel
| 2022-08-08T20:58:07Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-30T21:14:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- recall
- precision
- f1
model-index:
- name: distilbert-base-uncased_fine_tuned_title_and_text
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fine_tuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on a Reddit dataset for NSFW classification.
It was trained on the titles and body text of submissions.
It achieves the following results on the evaluation set:
- Loss: 1.0159
- Accuracy: 0.9095537914043252
- Recall: 0.8936873290793071
- Precision: 0.916024293389395
- F1: 0.9047179605490829
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | Precision | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------------------------------:|:------------------------------:|:---------------------------------:|:--------------------------:|
| 0.256 | 1.0 | 2284 | 0.2569 | {'accuracy': 0.9085683000273748} | {'recall': 0.8976754785779398} | {'precision': 0.9107514450867052} | {'f1': 0.9041661884540342} |
| 0.1948 | 2.0 | 4568 | 0.2471 | {'accuracy': 0.9138242540377771} | {'recall': 0.8644029170464904} | {'precision': 0.9518193224592221} | {'f1': 0.9060074047533739} |
| 0.1318 | 3.0 | 6852 | 0.3057 | {'accuracy': 0.914207500684369} | {'recall': 0.8977894257064722} | {'precision': 0.9216282606152767} | {'f1': 0.9095526695526697} |
| 0.0865 | 4.0 | 9136 | 0.4174 | {'accuracy': 0.9047358335614564} | {'recall': 0.8697584320875114} | {'precision': 0.9274605103280681} | {'f1': 0.8976831706456546} |
| 0.0545 | 5.0 | 11420 | 0.4635 | {'accuracy': 0.9095537914043252} | {'recall': 0.8849134001823155} | {'precision': 0.9236441484300666} | {'f1': 0.9038640595903165} |
| 0.0359 | 6.0 | 13704 | 0.5654 | {'accuracy': 0.9071448124828908} | {'recall': 0.8919781221513218} | {'precision': 0.9127798507462687} | {'f1': 0.9022591055786076} |
| 0.0262 | 7.0 | 15988 | 0.5568 | {'accuracy': 0.8994251300301123} | {'recall': 0.900865998176846} | {'precision': 0.8910176941282543} | {'f1': 0.8959147827072356} |
| 0.0181 | 8.0 | 18272 | 0.6846 | {'accuracy': 0.9042430878729811} | {'recall': 0.9026891522333638} | {'precision': 0.898491550413973} | {'f1': 0.9005854601261866} |
| 0.0121 | 9.0 | 20556 | 0.7516 | {'accuracy': 0.9071448124828908} | {'recall': 0.8990428441203282} | {'precision': 0.906896551724138} | {'f1': 0.9029526207370108} |
| 0.0119 | 10.0 | 22840 | 0.8614 | {'accuracy': 0.9050095811661648} | {'recall': 0.9002962625341842} | {'precision': 0.9018376897614427} | {'f1': 0.9010663169299197} |
| 0.0105 | 11.0 | 25124 | 0.7298 | {'accuracy': 0.9105940323022174} | {'recall': 0.8907247037374658} | {'precision': 0.9206218348839948} | {'f1': 0.9054265361672554} |
| 0.0049 | 12.0 | 27408 | 0.9237 | {'accuracy': 0.9101560361346839} | {'recall': 0.8828623518687329} | {'precision': 0.9266834110752302} | {'f1': 0.9042422827799498} |
| 0.0026 | 13.0 | 29692 | 0.9489 | {'accuracy': 0.9066520667944156} | {'recall': 0.8988149498632635} | {'precision': 0.9061458931648478} | {'f1': 0.9024655340083519} |
| 0.0016 | 14.0 | 31976 | 1.0045 | {'accuracy': 0.9099917875718587} | {'recall': 0.8963081130355515} | {'precision': 0.9146511627906977} | {'f1': 0.9053867403314917} |
| 0.0022 | 15.0 | 34260 | 1.0159 | {'accuracy': 0.9095537914043252} | {'recall': 0.8936873290793071} | {'precision': 0.916024293389395} | {'f1': 0.9047179605490829} |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ScottMueller/Cat_Dog_Breeds.ONNX
|
ScottMueller
| 2022-08-08T20:32:41Z | 0 | 0 | null |
[
"onnx",
"license:mit",
"region:us"
] | null | 2022-08-08T20:28:18Z |
---
license: mit
---
A simple single-label classification model (ResNet18) that predicts the cat or dog breed from the provided image. The model was created in Fast.ai
and exported to ONNX using PyTorch's ONNX export capabilities.
The source dataset is the Oxford-IIIT Pet Dataset (Omkar M. Parkhi, Andrea Vedaldi, Andrew Zisserman and C. V. Jawahar):
"We have created a 37 category pet dataset with roughly 200 images for each class.
The images have large variations in scale, pose and lighting. All images have an
associated ground truth annotation of breed, head ROI, and pixel level trimap segmentation."
The ONNX model can be used in other frameworks like Elixir's Axon. An example of converting the ONNX model into Axon can be found at:
https://github.com/elixir-nx/axon/tree/main/notebooks/onnx_to_axon.livemd.
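A minimal inference sketch with ONNX Runtime is given below; the filename, input size and normalization are assumptions and must be matched to the actual export and the Fast.ai preprocessing used at training time.
```python
import numpy as np
import onnxruntime as ort

# Hypothetical filename; check this repo's file list for the actual .onnx export.
session = ort.InferenceSession("cat_dog_breeds.onnx")
input_name = session.get_inputs()[0].name

# A ResNet18 export typically expects a float32 batch of shape (1, 3, 224, 224),
# preprocessed like the training data (resize + ImageNet normalization).
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
logits = session.run(None, {input_name: dummy})[0]
print("predicted breed index:", logits.argmax(axis=1))
```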
|
ScottMueller/Cats_v_Dogs.ONNX
|
ScottMueller
| 2022-08-08T20:10:32Z | 0 | 0 | null |
[
"onnx",
"license:mit",
"region:us"
] | null | 2022-08-08T19:54:50Z |
---
license: mit
---
A simple single-label classification model (ResNet18) that predicts whether the provided image is a cat or a dog. The model was created in Fast.ai
and exported to ONNX using PyTorch's ONNX export capabilities.
The source dataset is the Oxford-IIIT Pet Dataset (Omkar M. Parkhi, Andrea Vedaldi, Andrew Zisserman and C. V. Jawahar):
"We have created a 37 category pet dataset with roughly 200 images for each class.
The images have large variations in scale, pose and lighting. All images have an
associated ground truth annotation of breed, head ROI, and pixel level trimap segmentation."
The ONNX model can be used in other frameworks like Elixir's Axon. An example of converting the ONNX model into Axon can be found at:
https://github.com/elixir-nx/axon/tree/main/notebooks/onnx_to_axon.livemd.
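A minimal inference sketch with ONNX Runtime; the filename, input size and normalization are assumptions and should be checked against the actual export and the Fast.ai preprocessing.
```python
import numpy as np
import onnxruntime as ort

# Hypothetical filename; check this repo's file list for the actual .onnx export.
session = ort.InferenceSession("cats_v_dogs.onnx")
input_name = session.get_inputs()[0].name

# A ResNet18 export typically expects a float32 batch of shape (1, 3, 224, 224),
# preprocessed like the training data (resize + ImageNet normalization).
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
logits = session.run(None, {input_name: dummy})[0]
print("predicted class index (cat vs. dog):", logits.argmax(axis=1))
```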
|
mlegls/usv3_usdc_predictor_0
|
mlegls
| 2022-08-08T18:43:45Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-08T18:40:15Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: usv3_usdc_predictor_0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# usv3_usdc_predictor_0
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
andres-hsn/testpyramidsrnd
|
andres-hsn
| 2022-08-08T18:39:55Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2022-08-08T18:39:50Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: andres-hsn/testpyramidsrnd
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
keljai/ppo-LunarLander-v2
|
keljai
| 2022-08-08T17:14:03Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-08T17:13:37Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 227.24 +/- 21.38
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
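A minimal loading sketch, with the checkpoint filename assumed (verify it against this repo's files):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Hypothetical filename; replace with the .zip actually stored in this repo.
checkpoint = load_from_hub(repo_id="keljai/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```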
|
sofiaoliveira/q-Taxi-v3
|
sofiaoliveira
| 2022-08-08T16:12:21Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-08T14:35:35Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="sofiaoliveira/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Mozart-coder/DNA_bert_3-finetuned
|
Mozart-coder
| 2022-08-08T15:45:59Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-08T15:29:41Z |
---
tags:
- generated_from_trainer
model-index:
- name: DNA_bert_3-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DNA_bert_3-finetuned
This model is a fine-tuned version of [armheb/DNA_bert_3](https://huggingface.co/armheb/DNA_bert_3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8244 | 1.0 | 62 | 0.6044 |
| 0.5987 | 2.0 | 124 | 0.5933 |
| 0.5915 | 3.0 | 186 | 0.5856 |
| 0.585 | 4.0 | 248 | 0.5844 |
| 0.5817 | 5.0 | 310 | 0.5818 |
| 0.5791 | 6.0 | 372 | 0.5809 |
| 0.5801 | 7.0 | 434 | 0.5807 |
| 0.5768 | 8.0 | 496 | 0.5796 |
| 0.5741 | 9.0 | 558 | 0.5790 |
| 0.574 | 10.0 | 620 | 0.5788 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
LilOpa/LunarLanderPPO
|
LilOpa
| 2022-08-08T15:15:27Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-08T15:13:56Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 116.10 +/- 113.40
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
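A minimal loading-and-evaluation sketch; the `.zip` filename is an assumption and must match the file stored in this repo:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Hypothetical filename; replace with the .zip actually stored in this repo.
checkpoint = load_from_hub(repo_id="LilOpa/LunarLanderPPO", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```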
|
neskue/ppo-LunarLander-v2
|
neskue
| 2022-08-08T14:57:11Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-08T14:56:33Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 27.28 +/- 144.51
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
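A minimal sketch for loading and evaluating the agent (the filename below is assumed — check this repo's file list):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Hypothetical filename; replace with the .zip actually stored in this repo.
checkpoint = load_from_hub(repo_id="neskue/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```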
|
sofiaoliveira/q-FrozenLake-v1-4x4-noSlippery
|
sofiaoliveira
| 2022-08-08T13:54:54Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-08T13:49:17Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="sofiaoliveira/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
dminiotas05/distilbert-base-uncased-finetuned-ft1500_norm500_aug1
|
dminiotas05
| 2022-08-08T13:27:41Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-08T11:37:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ft1500_norm500_aug1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ft1500_norm500_aug1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9086
- Mse: 3.6357
- Mae: 1.0762
- R2: 0.2894
- Accuracy: 0.5170
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:--------:|
| 1.5856 | 1.0 | 5847 | 3.3101 | 4.1376 | 1.1447 | 0.1913 | 0.4965 |
| 0.442 | 2.0 | 11694 | 2.7448 | 3.4311 | 1.0934 | 0.3294 | 0.4523 |
| 0.2703 | 3.0 | 17541 | 2.9300 | 3.6625 | 1.0907 | 0.2841 | 0.4933 |
| 0.1699 | 4.0 | 23388 | 2.7979 | 3.4973 | 1.0808 | 0.3164 | 0.4805 |
| 0.1168 | 5.0 | 29235 | 2.9086 | 3.6357 | 1.0762 | 0.2894 | 0.5170 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
paola-md/recipistil
|
paola-md
| 2022-08-08T12:02:00Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-08T06:51:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipistil
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipistil
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9743
- Rmse: 1.4051
- Mse: 1.9743
- Mae: 1.0578
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|
| 1.9657 | 1.0 | 8126 | 1.9789 | 1.4067 | 1.9789 | 1.0578 |
| 1.9617 | 2.0 | 16252 | 1.9873 | 1.4097 | 1.9873 | 1.0620 |
| 1.9588 | 3.0 | 24378 | 1.9769 | 1.4060 | 1.9769 | 1.0578 |
| 1.958 | 4.0 | 32504 | 1.9736 | 1.4048 | 1.9736 | 1.0578 |
| 1.9568 | 5.0 | 40630 | 1.9772 | 1.4061 | 1.9772 | 1.0578 |
| 1.9564 | 6.0 | 48756 | 1.9736 | 1.4048 | 1.9736 | 1.0578 |
| 1.9563 | 7.0 | 56882 | 1.9737 | 1.4049 | 1.9737 | 1.0578 |
| 1.9561 | 8.0 | 65008 | 1.9737 | 1.4049 | 1.9737 | 1.0578 |
| 1.9559 | 9.0 | 73134 | 1.9743 | 1.4051 | 1.9743 | 1.0578 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
abdulmatinomotoso/multi_news_article_title_25000_2
|
abdulmatinomotoso
| 2022-08-08T11:41:24Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-08T05:04:15Z |
---
tags:
- generated_from_trainer
model-index:
- name: multi_news_article_title_25000_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multi_news_article_title_25000_2
This model is a fine-tuned version of [google/pegasus-multi_news](https://huggingface.co/google/pegasus-multi_news) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1740
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3431 | 0.32 | 500 | 0.2731 |
| 0.2136 | 0.64 | 1000 | 0.2028 |
| 0.215 | 0.96 | 1500 | 0.1880 |
| 0.1972 | 1.28 | 2000 | 0.1809 |
| 0.1903 | 1.6 | 2500 | 0.1760 |
| 0.1886 | 1.92 | 3000 | 0.1740 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Lvxue/distilled-mt5-small-0.5
|
Lvxue
| 2022-08-08T10:41:51Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"en",
"ro",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-08T08:12:07Z |
---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-0.5
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 1.2575
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-0.5
This model is a [google/mt5-small](https://huggingface.co/google/mt5-small)-sized student distilled from [Lvxue/finetuned-mt5-base](https://huggingface.co/Lvxue/finetuned-mt5-base) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7455
- Bleu: 1.2575
- Gen Len: 94.3597
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
mohammadhadiarabi/ddpm-butterflies-128
|
mohammadhadiarabi
| 2022-08-08T10:35:57Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-08-08T09:22:10Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
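A minimal sampling sketch using 🤗 Diffusers (the `.images` accessor assumes a reasonably recent diffusers release):
```python
from diffusers import DDPMPipeline

# Unconditional DDPM sampling of 128x128 butterfly images.
pipeline = DDPMPipeline.from_pretrained("mohammadhadiarabi/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("butterfly.png")
```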
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/mohammadhadiarabi/ddpm-butterflies-128/tensorboard?#scalars)
|
osanseviero/distilroberta-base-sentence-transformer
|
osanseviero
| 2022-08-08T09:33:52Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"dataset:embedding-data/QQP_triplets",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-08-08T09:33:42Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- embedding-data/QQP_triplets
---
# osanseviero/distilroberta-base-sentence-transformer
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('osanseviero/distilroberta-base-sentence-transformer')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('osanseviero/distilroberta-base-sentence-transformer')
model = AutoModel.from_pretrained('osanseviero/distilroberta-base-sentence-transformer')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=osanseviero/distilroberta-base-sentence-transformer)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 63 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 63,
"weight_decay": 0.01
}
```
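A hedged sketch of how these settings translate into a sentence-transformers training run. The triplet below is a placeholder standing in for the `embedding-data/QQP_triplets` data, and the loading code is not from the original run:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("distilroberta-base")

# Placeholder (query, positive, negative) triplet standing in for the QQP data.
train_examples = [
    InputExample(texts=[
        "What is the capital of France?",
        "Which city is the capital of France?",
        "What is the capital of Germany?",
    ]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

train_loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=63,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```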
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Mahmoud7/q-Taxi-v3
|
Mahmoud7
| 2022-08-08T09:21:53Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-08T09:21:45Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
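import gym  # needed for gym.make below

# Note: load_from_hub and evaluate_agent are assumed to be the helper functions
# defined in the Hugging Face Deep RL course notebook; they are not part of a pip package.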
model = load_from_hub(repo_id="Mahmoud7/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
luomingshuang/icefall_asr_wenetspeech_pruned_transducer_stateless5_offline
|
luomingshuang
| 2022-08-08T08:45:35Z | 0 | 2 | null |
[
"region:us"
] | null | 2022-07-26T07:30:53Z |
See https://github.com/k2-fsa/icefall/pull/447.
|
eliwill/distilgpt2-discursive-krishna
|
eliwill
| 2022-08-08T07:56:32Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-08T07:49:44Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: eliwill/distilgpt2-discursive-krishna
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# eliwill/distilgpt2-discursive-krishna
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.2503
- Validation Loss: 3.1371
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.2503 | 3.1371 | 0 |
### Framework versions
- Transformers 4.21.1
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
202015004/Spoof_detection
|
202015004
| 2022-08-08T07:48:41Z | 37 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-08-05T09:32:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Spoof_detection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Spoof_detection
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7448
- Wer: 0.1090
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 95.9046 | 0.66 | 500 | 992.2993 | 0.6180 |
| 14.0322 | 1.33 | 1000 | 1.8873 | 0.1090 |
| 1.8659 | 1.99 | 1500 | 1.7827 | 0.1090 |
| 1.851 | 2.65 | 2000 | 1.8489 | 0.1090 |
| 1.8218 | 3.32 | 2500 | 1.8943 | 0.1090 |
| 1.8108 | 3.98 | 3000 | 1.9250 | 0.1090 |
| 1.8228 | 4.64 | 3500 | 1.7555 | 0.1090 |
| 1.832 | 5.31 | 4000 | 1.7837 | 0.1090 |
| 1.8403 | 5.97 | 4500 | 1.6644 | 0.1090 |
| 1.8292 | 6.63 | 5000 | 1.6906 | 0.1090 |
| 1.8223 | 7.29 | 5500 | 1.6966 | 0.1090 |
| 1.8007 | 7.96 | 6000 | 1.6951 | 0.1090 |
| 1.7986 | 8.62 | 6500 | 1.7436 | 0.1090 |
| 1.7933 | 9.28 | 7000 | 1.8169 | 0.1090 |
| 1.7861 | 9.95 | 7500 | 1.7209 | 0.1090 |
| 1.7843 | 10.61 | 8000 | 1.9379 | 0.1090 |
| 1.7743 | 11.27 | 8500 | 1.9834 | 0.1090 |
| 1.7721 | 11.94 | 9000 | 1.9279 | 0.1090 |
| 1.7719 | 12.6 | 9500 | 1.8187 | 0.1090 |
| 1.7616 | 13.26 | 10000 | 1.7804 | 0.1090 |
| 1.7638 | 13.93 | 10500 | 1.7884 | 0.1090 |
| 1.7651 | 14.59 | 11000 | 1.7476 | 0.1090 |
| 1.7603 | 15.25 | 11500 | 1.7570 | 0.1090 |
| 1.7543 | 15.92 | 12000 | 1.7356 | 0.1090 |
| 1.7556 | 16.58 | 12500 | 1.7140 | 0.1090 |
| 1.751 | 17.24 | 13000 | 1.7453 | 0.1090 |
| 1.75 | 17.9 | 13500 | 1.7648 | 0.1090 |
| 1.7492 | 18.57 | 14000 | 1.7338 | 0.1090 |
| 1.7484 | 19.23 | 14500 | 1.7093 | 0.1090 |
| 1.7461 | 19.89 | 15000 | 1.7393 | 0.1090 |
| 1.7429 | 20.56 | 15500 | 1.7605 | 0.1090 |
| 1.7446 | 21.22 | 16000 | 1.7782 | 0.1090 |
| 1.7435 | 21.88 | 16500 | 1.6749 | 0.1090 |
| 1.7392 | 22.55 | 17000 | 1.7468 | 0.1090 |
| 1.741 | 23.21 | 17500 | 1.7406 | 0.1090 |
| 1.7394 | 23.87 | 18000 | 1.7787 | 0.1090 |
| 1.739 | 24.54 | 18500 | 1.7969 | 0.1090 |
| 1.7341 | 25.2 | 19000 | 1.7490 | 0.1090 |
| 1.7371 | 25.86 | 19500 | 1.7783 | 0.1090 |
| 1.735 | 26.53 | 20000 | 1.7540 | 0.1090 |
| 1.7353 | 27.19 | 20500 | 1.7735 | 0.1090 |
| 1.7331 | 27.85 | 21000 | 1.7188 | 0.1090 |
| 1.7308 | 28.51 | 21500 | 1.7349 | 0.1090 |
| 1.7341 | 29.18 | 22000 | 1.7531 | 0.1090 |
| 1.7305 | 29.84 | 22500 | 1.7448 | 0.1090 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.12.1
|
facebook/xlm-roberta-xxl
|
facebook
| 2022-08-08T07:19:25Z | 20,264 | 15 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta-xl",
"fill-mask",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:2105.00572",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: mit
---
# XLM-RoBERTa-XL (xxlarge-sized model)
XLM-RoBERTa-XL model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/xlmr).
Disclaimer: The team releasing XLM-RoBERTa-XL did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
XLM-RoBERTa-XL is an extra-large multilingual version of RoBERTa. It is pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages.
RoBERTa is a transformers model pretrained on a large corpus in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, or from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of 100 languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the XLM-RoBERTa-XL model as inputs.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?search=xlm-roberta-xl) to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2.
## Usage
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='facebook/xlm-roberta-xxl')
>>> unmasker("Europe is a <mask> continent.")
[{'score': 0.22996895015239716,
'token': 28811,
'token_str': 'European',
'sequence': 'Europe is a European continent.'},
{'score': 0.14307449758052826,
'token': 21334,
'token_str': 'large',
'sequence': 'Europe is a large continent.'},
{'score': 0.12239163368940353,
'token': 19336,
'token_str': 'small',
'sequence': 'Europe is a small continent.'},
{'score': 0.07025063782930374,
'token': 18410,
'token_str': 'vast',
'sequence': 'Europe is a vast continent.'},
{'score': 0.032869212329387665,
'token': 6957,
'token_str': 'big',
'sequence': 'Europe is a big continent.'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained('facebook/xlm-roberta-xxl')
model = AutoModelForMaskedLM.from_pretrained("facebook/xlm-roberta-xxl")
# prepare input
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
# forward pass
output = model(**encoded_input)
```
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-00572,
author = {Naman Goyal and
Jingfei Du and
Myle Ott and
Giri Anantharaman and
Alexis Conneau},
title = {Larger-Scale Transformers for Multilingual Masked Language Modeling},
journal = {CoRR},
volume = {abs/2105.00572},
year = {2021},
url = {https://arxiv.org/abs/2105.00572},
eprinttype = {arXiv},
eprint = {2105.00572},
timestamp = {Wed, 12 May 2021 15:54:31 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-00572.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
osanseviero/osans
|
osanseviero
| 2022-08-08T07:05:08Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2022-08-08T07:04:37Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| name | learning_rate | decay | beta_1 | beta_2 | epsilon | amsgrad | training_precision |
|----|-------------|-----|------|------|-------|-------|------------------|
|Adam|0.001|0.0|0.9|0.999|1e-07|False|float32|
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
ultra-coder54732/comment-detection-prop-16
|
ultra-coder54732
| 2022-08-08T00:29:14Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-07T05:37:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: comment-detection-prop-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# comment-detection-prop-16
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
srcocotero/bert-qa-es
|
srcocotero
| 2022-08-07T21:19:06Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad_es",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-08-07T18:16:10Z |
---
tags:
- generated_from_trainer
datasets:
- squad_es
model-index:
- name: bert-qa-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-qa-es
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on the squad_es dataset.
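A minimal usage sketch (not part of the original card; the question and context below are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="srcocotero/bert-qa-es")
result = qa(
    question="¿Dónde vive Ana?",
    context="Ana es ingeniera de software y vive en Madrid desde 2010.",
)
print(result["answer"])  # expected to be something like "Madrid"
```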
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
paola-md/RELEXset-MLM
|
paola-md
| 2022-08-07T20:07:52Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-07T13:14:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-distilroberta-Is
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-distilroberta-Is
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7427
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 19.6191 | 1.0 | 2135 | 10.5217 |
| 8.6838 | 2.0 | 4270 | 7.3017 |
| 6.884 | 3.0 | 6405 | 6.4445 |
| 6.2953 | 4.0 | 8540 | 6.0610 |
| 6.0205 | 5.0 | 10675 | 5.9047 |
| 5.851 | 6.0 | 12810 | 5.7790 |
| 5.7464 | 7.0 | 14945 | 5.7164 |
| 5.6684 | 8.0 | 17080 | 5.6415 |
| 5.6138 | 9.0 | 19215 | 5.5671 |
| 5.5638 | 10.0 | 21350 | 5.5360 |
| 5.5288 | 11.0 | 23485 | 5.5069 |
| 5.4968 | 12.0 | 25620 | 5.4968 |
| 5.4696 | 13.0 | 27755 | 5.4539 |
| 5.4468 | 14.0 | 29890 | 5.4416 |
| 5.4177 | 15.0 | 32025 | 5.3722 |
| 5.3717 | 16.0 | 34160 | 5.3226 |
| 5.317 | 17.0 | 36295 | 5.2197 |
| 5.2367 | 18.0 | 38430 | 5.0888 |
| 5.1543 | 19.0 | 40565 | 4.9954 |
| 5.0919 | 20.0 | 42700 | 4.9306 |
| 5.038 | 21.0 | 44835 | 4.8657 |
| 4.9983 | 22.0 | 46970 | 4.8137 |
| 4.9639 | 23.0 | 49105 | 4.7704 |
| 4.9426 | 24.0 | 51240 | 4.7486 |
| 4.9312 | 25.0 | 53375 | 4.7427 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Izarel/bert-base-uncased_title_fine_tuned
|
Izarel
| 2022-08-07T20:05:32Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-07T16:18:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- recall
- precision
- f1
model-index:
- name: bert-base-uncased_title_fine_tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased_title_fine_tuned
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3368
- Accuracy: 0.8810840405146455
- Recall: 0.8611674554879423
- Precision: 0.890468422279189
- F1: 0.8755728689275893
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Accuracy           | Recall             | Precision          | F1                 |
|:-------------:|:-----:|:-----:|:---------------:|:------------------:|:------------------:|:------------------:|:------------------:|
| 0.3224        | 1.0   | 3045  | 0.3079          | 0.8730358609362168 | 0.8139508677034032 | 0.915346597389431  | 0.861676110945422  |
| 0.2818        | 2.0   | 6090  | 0.3153          | 0.8814672871612373 | 0.8299526707234618 | 0.9182146864480738 | 0.8718555785735426 |
| 0.2394        | 3.0   | 9135  | 0.3104          | 0.8830002737476047 | 0.8548568852828488 | 0.8993479549496147 | 0.8765382171124848 |
| 0.204         | 4.0   | 12180 | 0.3368          | 0.8810840405146455 | 0.8611674554879423 | 0.890468422279189  | 0.8755728689275893 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ycchen/TrOCR-base-ver021-v1
|
ycchen
| 2022-08-07T19:23:45Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2022-08-07T18:49:55Z |
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image
import requests
# load image from the IAM database (actually this model is meant to be used on printed text)
url = 'https://fki.tic.heia-fr.ch/static/img/a01-122-02-00.jpg'
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
processor = TrOCRProcessor.from_pretrained('ycchen/TrOCR-base-ver021-v1')
model = VisionEncoderDecoderModel.from_pretrained('ycchen/TrOCR-base-ver021-v1')
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
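print(generated_text)  # decoded OCR prediction for the input image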
```
|
cataluna84/xlm-roberta-base-finetuned-panx-en
|
cataluna84
| 2022-08-07T18:15:27Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-07T17:56:51Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6886160714285715
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4043
- F1: 0.6886
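A minimal usage sketch (not part of the original card; the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="cataluna84/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Jeff Dean works at Google in Mountain View."))
```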
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1347 | 1.0 | 50 | 0.5771 | 0.4880 |
| 0.5066 | 2.0 | 100 | 0.4209 | 0.6582 |
| 0.3631 | 3.0 | 150 | 0.4043 | 0.6886 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
cataluna84/xlm-roberta-base-finetuned-panx-it
|
cataluna84
| 2022-08-07T17:56:33Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-07T17:37:54Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8124233755619126
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2630
- F1: 0.8124
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8193 | 1.0 | 70 | 0.3200 | 0.7356 |
| 0.2773 | 2.0 | 140 | 0.2841 | 0.7882 |
| 0.1807 | 3.0 | 210 | 0.2630 | 0.8124 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|