modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-13 00:37:47) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 555 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-13 00:35:18) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
huggingtweets/big___oven-y2kenlee
|
huggingtweets
| 2022-10-31T20:34:10Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-31T20:33:21Z |
---
language: en
thumbnail: http://www.huggingtweets.com/big___oven-y2kenlee/1667248445882/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1571653458972794884/eaxhUsib_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1561191989282045954/C23ktyyF_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">oskcar & supremejesuslover</div>
<div style="text-align: center; font-size: 14px;">@big___oven-y2kenlee</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from oskcar & supremejesuslover.
| Data | oskcar | supremejesuslover |
| --- | --- | --- |
| Tweets downloaded | 2705 | 3192 |
| Retweets | 615 | 188 |
| Short tweets | 328 | 412 |
| Tweets kept | 1762 | 2592 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1n978eau/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @big___oven-y2kenlee's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2690qdqu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2690qdqu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/big___oven-y2kenlee')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
theojolliffe/bart-model2-3110-e4
|
theojolliffe
| 2022-10-31T20:28:55Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-31T19:19:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-model2-3110-e4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-model2-3110-e4
This model is a fine-tuned version of [theojolliffe/bart-paraphrase-v4-e1-feedback](https://huggingface.co/theojolliffe/bart-paraphrase-v4-e1-feedback) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0700
- Rouge1: 70.0692
- Rouge2: 68.1457
- Rougel: 69.8943
- Rougelsum: 70.0389
- Gen Len: 19.8966
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
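Restated as code, a minimal sketch of how these settings could map onto `transformers.Seq2SeqTrainingArguments` (the output directory is a placeholder, and `fp16=True` is assumed to correspond to the "Native AMP" mixed precision noted above):
```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical restatement of the hyperparameters listed above.
# fp16=True requires a CUDA device; Adam betas/epsilon are the defaults shown above.
training_args = Seq2SeqTrainingArguments(
    output_dir="bart-model2-3110-e4",   # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=6,
    fp16=True,
    predict_with_generate=True,
)
```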
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.5951 | 1.0 | 553 | 0.3089 | 62.5675 | 54.7411 | 61.2646 | 61.3675 | 19.7241 |
| 0.2541 | 2.0 | 1106 | 0.1432 | 66.113 | 61.964 | 64.6141 | 64.9187 | 19.8966 |
| 0.1547 | 3.0 | 1659 | 0.0964 | 68.6902 | 64.938 | 67.6197 | 67.9181 | 19.8966 |
| 0.1141 | 4.0 | 2212 | 0.1015 | 68.9122 | 66.4279 | 68.4906 | 68.5758 | 19.8966 |
| 0.0728 | 5.0 | 2765 | 0.0819 | 69.2271 | 66.8276 | 68.6915 | 68.849 | 19.8966 |
| 0.0563 | 6.0 | 3318 | 0.0700 | 70.0692 | 68.1457 | 69.8943 | 70.0389 | 19.8966 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
theojolliffe/T5-model-1-feedback-3110
|
theojolliffe
| 2022-10-31T20:00:50Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-31T19:04:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: T5-model-1-feedback-3110
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5-model-1-feedback-3110
This model is a fine-tuned version of [theojolliffe/T5-model-1-feedback-1109](https://huggingface.co/theojolliffe/T5-model-1-feedback-1109) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1605
- Rouge1: 91.3604
- Rouge2: 86.1024
- Rougel: 90.6798
- Rougelsum: 90.7011
- Gen Len: 15.7167
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.2711 | 1.0 | 2279 | 0.2176 | 90.3305 | 83.9311 | 89.4476 | 89.4573 | 15.7 |
| 0.1709 | 2.0 | 4558 | 0.1759 | 91.3226 | 85.9979 | 90.7558 | 90.7395 | 15.5667 |
| 0.1644 | 3.0 | 6837 | 0.1641 | 91.8385 | 86.7529 | 91.1621 | 91.1492 | 15.6792 |
| 0.1606 | 4.0 | 9116 | 0.1605 | 91.3604 | 86.1024 | 90.6798 | 90.7011 | 15.7167 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
huggingtweets/big___oven-mobydickatsea
|
huggingtweets
| 2022-10-31T19:47:44Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-31T19:05:08Z |
---
language: en
thumbnail: http://www.huggingtweets.com/big___oven-mobydickatsea/1667245659923/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1571653458972794884/eaxhUsib_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/3755781902/9b47b7e223799bb523c7628e00b411c4_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">oskcar & Moby Dick</div>
<div style="text-align: center; font-size: 14px;">@big___oven-mobydickatsea</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from oskcar & Moby Dick.
| Data | oskcar | Moby Dick |
| --- | --- | --- |
| Tweets downloaded | 2685 | 3250 |
| Retweets | 610 | 0 |
| Short tweets | 328 | 44 |
| Tweets kept | 1747 | 3206 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ua4fxrap/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @big___oven-mobydickatsea's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/10tvpbjp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/10tvpbjp/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/big___oven-mobydickatsea')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
pig4431/SST2_ELECTRA_5E
|
pig4431
| 2022-10-31T18:53:34Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-31T18:52:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SST2_ELECTRA_5E
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SST2_ELECTRA_5E
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3431
- Accuracy: 0.9267
## Model description
More information needed
## Intended uses & limitations
More information needed
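A minimal sketch for trying the classifier with a text-classification pipeline (the repository id comes from this listing; the label mapping and the example sentence are assumptions):
```python
from transformers import pipeline

# Hypothetical usage; the labels are read from the model's own config.
classifier = pipeline("text-classification", model="pig4431/SST2_ELECTRA_5E")
print(classifier("A thoroughly enjoyable film."))
```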
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.667 | 0.12 | 50 | 0.5772 | 0.8533 |
| 0.4746 | 0.23 | 100 | 0.3421 | 0.9 |
| 0.3104 | 0.35 | 150 | 0.2948 | 0.9 |
| 0.2315 | 0.46 | 200 | 0.3269 | 0.8867 |
| 0.2675 | 0.58 | 250 | 0.2604 | 0.92 |
| 0.2467 | 0.69 | 300 | 0.2321 | 0.92 |
| 0.2013 | 0.81 | 350 | 0.2959 | 0.92 |
| 0.2165 | 0.92 | 400 | 0.2219 | 0.92 |
| 0.2524 | 1.04 | 450 | 0.2649 | 0.9133 |
| 0.1396 | 1.15 | 500 | 0.2985 | 0.9133 |
| 0.152 | 1.27 | 550 | 0.2766 | 0.9267 |
| 0.126 | 1.39 | 600 | 0.2657 | 0.9267 |
| 0.1545 | 1.5 | 650 | 0.2568 | 0.92 |
| 0.184 | 1.62 | 700 | 0.2916 | 0.92 |
| 0.198 | 1.73 | 750 | 0.2564 | 0.9267 |
| 0.1432 | 1.85 | 800 | 0.2669 | 0.9267 |
| 0.1405 | 1.96 | 850 | 0.2466 | 0.9333 |
| 0.0969 | 2.08 | 900 | 0.2213 | 0.9467 |
| 0.1055 | 2.19 | 950 | 0.2733 | 0.9333 |
| 0.0895 | 2.31 | 1000 | 0.3237 | 0.9333 |
| 0.118 | 2.42 | 1050 | 0.3666 | 0.9133 |
| 0.0775 | 2.54 | 1100 | 0.2783 | 0.94 |
| 0.1145 | 2.66 | 1150 | 0.2550 | 0.9267 |
| 0.1214 | 2.77 | 1200 | 0.2777 | 0.9267 |
| 0.1288 | 2.89 | 1250 | 0.2861 | 0.9267 |
| 0.076 | 3.0 | 1300 | 0.3194 | 0.9267 |
| 0.0865 | 3.12 | 1350 | 0.3391 | 0.9267 |
| 0.0626 | 3.23 | 1400 | 0.3133 | 0.9267 |
| 0.0657 | 3.35 | 1450 | 0.3322 | 0.9267 |
| 0.0858 | 3.46 | 1500 | 0.2799 | 0.94 |
| 0.0823 | 3.58 | 1550 | 0.2731 | 0.94 |
| 0.0739 | 3.7 | 1600 | 0.2822 | 0.9333 |
| 0.0911 | 3.81 | 1650 | 0.3264 | 0.9267 |
| 0.0808 | 3.93 | 1700 | 0.2388 | 0.9467 |
| 0.0509 | 4.04 | 1750 | 0.2740 | 0.94 |
| 0.0512 | 4.16 | 1800 | 0.3326 | 0.9267 |
| 0.0397 | 4.27 | 1850 | 0.3061 | 0.9333 |
| 0.0565 | 4.39 | 1900 | 0.2891 | 0.9333 |
| 0.0353 | 4.5 | 1950 | 0.3203 | 0.9333 |
| 0.0455 | 4.62 | 2000 | 0.3113 | 0.9333 |
| 0.0494 | 4.73 | 2050 | 0.3403 | 0.9267 |
| 0.0306 | 4.85 | 2100 | 0.3467 | 0.9267 |
| 0.0655 | 4.97 | 2150 | 0.3431 | 0.9267 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Deliz23/Hi.new.here
|
Deliz23
| 2022-10-31T18:45:10Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-10-31T18:45:10Z |
---
license: creativeml-openrail-m
---
|
Devarshi/Brain_Tumor_Classification
|
Devarshi
| 2022-10-31T18:39:13Z | 294 | 9 |
transformers
|
[
"transformers",
"pytorch",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-31T13:35:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: Brain_Tumor_Classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9646761984861227
- name: F1
type: f1
value: 0.9646761984861227
- name: Recall
type: recall
value: 0.9646761984861227
- name: Precision
type: precision
value: 0.9646761984861227
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Brain_Tumor_Classification
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1012
- Accuracy: 0.9647
- F1: 0.9647
- Recall: 0.9647
- Precision: 0.9647
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
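The total train batch size of 128 is the per-device batch size of 32 multiplied by the 4 gradient-accumulation steps. A minimal sketch of the same configuration with `transformers.TrainingArguments` (the output directory is a placeholder):
```python
from transformers import TrainingArguments

# Hypothetical restatement of the hyperparameters above;
# effective batch size = 32 (per device) * 4 (accumulation steps) = 128.
training_args = TrainingArguments(
    output_dir="Brain_Tumor_Classification",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=5,
)
```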
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.4856 | 0.99 | 83 | 0.3771 | 0.8444 | 0.8444 | 0.8444 | 0.8444 |
| 0.3495 | 1.99 | 166 | 0.2608 | 0.8949 | 0.8949 | 0.8949 | 0.8949 |
| 0.252 | 2.99 | 249 | 0.1445 | 0.9487 | 0.9487 | 0.9487 | 0.9487 |
| 0.2364 | 3.99 | 332 | 0.1029 | 0.9588 | 0.9588 | 0.9588 | 0.9588 |
| 0.2178 | 4.99 | 415 | 0.1012 | 0.9647 | 0.9647 | 0.9647 | 0.9647 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
pig4431/SST2_ALBERT_5E
|
pig4431
| 2022-10-31T18:36:20Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-31T18:35:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SST2_ALBERT_5E
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SST2_ALBERT_5E
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5698
- Accuracy: 0.8933
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5574 | 0.12 | 50 | 0.4424 | 0.8 |
| 0.4078 | 0.23 | 100 | 0.3995 | 0.8533 |
| 0.3594 | 0.35 | 150 | 0.3805 | 0.8533 |
| 0.2952 | 0.46 | 200 | 0.3388 | 0.8933 |
| 0.3157 | 0.58 | 250 | 0.3629 | 0.8733 |
| 0.2623 | 0.69 | 300 | 0.5120 | 0.8667 |
| 0.261 | 0.81 | 350 | 0.2851 | 0.8867 |
| 0.3071 | 0.92 | 400 | 0.2754 | 0.8733 |
| 0.2905 | 1.04 | 450 | 0.3013 | 0.8867 |
| 0.2114 | 1.15 | 500 | 0.3205 | 0.9 |
| 0.2537 | 1.27 | 550 | 0.4157 | 0.8867 |
| 0.2106 | 1.39 | 600 | 0.5170 | 0.86 |
| 0.2227 | 1.5 | 650 | 0.3422 | 0.9 |
| 0.2304 | 1.62 | 700 | 0.5696 | 0.8533 |
| 0.2661 | 1.73 | 750 | 0.2975 | 0.9133 |
| 0.235 | 1.85 | 800 | 0.2692 | 0.92 |
| 0.2182 | 1.96 | 850 | 0.3247 | 0.9067 |
| 0.1762 | 2.08 | 900 | 0.3693 | 0.9133 |
| 0.2086 | 2.19 | 950 | 0.4465 | 0.8933 |
| 0.1444 | 2.31 | 1000 | 0.4225 | 0.9 |
| 0.2228 | 2.42 | 1050 | 0.3794 | 0.9067 |
| 0.1634 | 2.54 | 1100 | 0.4783 | 0.8933 |
| 0.1561 | 2.66 | 1150 | 0.3476 | 0.9267 |
| 0.1286 | 2.77 | 1200 | 0.5080 | 0.8933 |
| 0.1647 | 2.89 | 1250 | 0.4369 | 0.9067 |
| 0.1059 | 3.0 | 1300 | 0.4132 | 0.9133 |
| 0.1069 | 3.12 | 1350 | 0.6070 | 0.8733 |
| 0.108 | 3.23 | 1400 | 0.4909 | 0.9 |
| 0.0741 | 3.35 | 1450 | 0.5231 | 0.9 |
| 0.1204 | 3.46 | 1500 | 0.4517 | 0.9067 |
| 0.106 | 3.58 | 1550 | 0.4685 | 0.8933 |
| 0.1375 | 3.7 | 1600 | 0.4597 | 0.9067 |
| 0.0727 | 3.81 | 1650 | 0.4443 | 0.9 |
| 0.0669 | 3.93 | 1700 | 0.4324 | 0.9067 |
| 0.081 | 4.04 | 1750 | 0.4176 | 0.9133 |
| 0.0462 | 4.16 | 1800 | 0.4626 | 0.9133 |
| 0.0382 | 4.27 | 1850 | 0.4732 | 0.9067 |
| 0.0948 | 4.39 | 1900 | 0.5471 | 0.9 |
| 0.0667 | 4.5 | 1950 | 0.5581 | 0.8867 |
| 0.0878 | 4.62 | 2000 | 0.5429 | 0.8933 |
| 0.0651 | 4.73 | 2050 | 0.5852 | 0.8933 |
| 0.0492 | 4.85 | 2100 | 0.5793 | 0.8933 |
| 0.0496 | 4.97 | 2150 | 0.5698 | 0.8933 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
huggingtweets/big___oven-heart2starr
|
huggingtweets
| 2022-10-31T18:33:05Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-31T18:32:57Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1571653458972794884/eaxhUsib_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1586853654836936707/0FD-sivp_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">oskcar & ⋆。°✩</div>
<div style="text-align: center; font-size: 14px;">@big___oven-heart2starr</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from oskcar & ⋆。°✩.
| Data | oskcar | ⋆。°✩ |
| --- | --- | --- |
| Tweets downloaded | 2685 | 3126 |
| Retweets | 610 | 50 |
| Short tweets | 328 | 1162 |
| Tweets kept | 1747 | 1914 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2pq67quh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @big___oven-heart2starr's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2gzxt770) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2gzxt770/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/big___oven-heart2starr')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
pig4431/SST2_XLNET_5E
|
pig4431
| 2022-10-31T18:17:51Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlnet",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-31T18:17:04Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SST2_XLNet_5E
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SST2_XLNet_5E
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5502
- Accuracy: 0.9133
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6038 | 0.12 | 50 | 0.2830 | 0.8933 |
| 0.3903 | 0.23 | 100 | 0.3346 | 0.9 |
| 0.3476 | 0.35 | 150 | 0.4187 | 0.8533 |
| 0.3528 | 0.46 | 200 | 0.3177 | 0.9 |
| 0.3372 | 0.58 | 250 | 0.4171 | 0.8333 |
| 0.3106 | 0.69 | 300 | 0.2825 | 0.9 |
| 0.295 | 0.81 | 350 | 0.3152 | 0.9 |
| 0.2828 | 0.92 | 400 | 0.4360 | 0.88 |
| 0.2359 | 1.04 | 450 | 0.3971 | 0.9 |
| 0.2224 | 1.15 | 500 | 0.3380 | 0.88 |
| 0.2136 | 1.27 | 550 | 0.3889 | 0.8933 |
| 0.264 | 1.39 | 600 | 0.4182 | 0.8667 |
| 0.1864 | 1.5 | 650 | 0.4887 | 0.88 |
| 0.1817 | 1.62 | 700 | 0.3626 | 0.9133 |
| 0.2021 | 1.73 | 750 | 0.4481 | 0.8933 |
| 0.2154 | 1.85 | 800 | 0.3702 | 0.8933 |
| 0.2392 | 1.96 | 850 | 0.5025 | 0.8933 |
| 0.1496 | 2.08 | 900 | 0.4606 | 0.9133 |
| 0.1537 | 2.19 | 950 | 0.5008 | 0.8933 |
| 0.1015 | 2.31 | 1000 | 0.5612 | 0.9067 |
| 0.0915 | 2.42 | 1050 | 0.5249 | 0.8933 |
| 0.1239 | 2.54 | 1100 | 0.4234 | 0.9133 |
| 0.1135 | 2.66 | 1150 | 0.4910 | 0.9067 |
| 0.1738 | 2.77 | 1200 | 0.3844 | 0.92 |
| 0.1428 | 2.89 | 1250 | 0.4282 | 0.92 |
| 0.1282 | 3.0 | 1300 | 0.4320 | 0.9 |
| 0.059 | 3.12 | 1350 | 0.4957 | 0.9133 |
| 0.0517 | 3.23 | 1400 | 0.4927 | 0.92 |
| 0.0853 | 3.35 | 1450 | 0.4187 | 0.92 |
| 0.0808 | 3.46 | 1500 | 0.4304 | 0.92 |
| 0.09 | 3.58 | 1550 | 0.3447 | 0.9267 |
| 0.044 | 3.7 | 1600 | 0.4994 | 0.9067 |
| 0.0443 | 3.81 | 1650 | 0.4516 | 0.9133 |
| 0.0974 | 3.93 | 1700 | 0.4172 | 0.92 |
| 0.0768 | 4.04 | 1750 | 0.4777 | 0.9133 |
| 0.0418 | 4.16 | 1800 | 0.4924 | 0.9267 |
| 0.0237 | 4.27 | 1850 | 0.5254 | 0.92 |
| 0.0426 | 4.39 | 1900 | 0.5532 | 0.9133 |
| 0.0336 | 4.5 | 1950 | 0.5838 | 0.9067 |
| 0.0188 | 4.62 | 2000 | 0.5775 | 0.9067 |
| 0.0318 | 4.73 | 2050 | 0.5781 | 0.9067 |
| 0.0348 | 4.85 | 2100 | 0.5526 | 0.9133 |
| 0.0524 | 4.97 | 2150 | 0.5502 | 0.9133 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
huggingtweets/_is_is_are-big___oven
|
huggingtweets
| 2022-10-31T16:55:16Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-31T16:55:07Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1571653458972794884/eaxhUsib_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1545506603453091842/4R_oCo_q_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">oskcar & ANGELICISM01 滲み出るエロス</div>
<div style="text-align: center; font-size: 14px;">@_is_is_are-big___oven</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from oskcar & ANGELICISM01 滲み出るエロス.
| Data | oskcar | ANGELICISM01 滲み出るエロス |
| --- | --- | --- |
| Tweets downloaded | 2682 | 282 |
| Retweets | 609 | 49 |
| Short tweets | 328 | 47 |
| Tweets kept | 1745 | 186 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/28mac9kd/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @_is_is_are-big___oven's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2vchpo0m) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2vchpo0m/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/_is_is_are-big___oven')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/news_mbs
|
huggingtweets
| 2022-10-31T16:45:19Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-31T16:44:45Z |
---
language: en
thumbnail: http://www.huggingtweets.com/news_mbs/1667234715120/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1156239851106127872/cr7YxvqC_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">MBS News</div>
<div style="text-align: center; font-size: 14px;">@news_mbs</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from MBS News.
| Data | MBS News |
| --- | --- |
| Tweets downloaded | 3200 |
| Retweets | 435 |
| Short tweets | 36 |
| Tweets kept | 2729 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/msqcd30f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @news_mbs's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1p7vvik4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1p7vvik4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/news_mbs')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
db0/tron-legacy-copy
|
db0
| 2022-10-31T16:04:08Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-10-31T15:37:53Z |
---
license: creativeml-openrail-m
---
Publicly available copy of this model: https://huggingface.co/dallinmackay/Tron-Legacy-diffusion, as allowed by the CreativeML OpenRAIL-M license.
I am doing this to allow the model to be downloaded automatically, as the authentication mechanism is breaking the Python workflow.
|
db0/microworlds
|
db0
| 2022-10-31T16:02:58Z | 0 | 7 | null |
[
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2022-10-31T12:17:53Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
This is a fork of the Microworlds model provided in [Public Prompts](https://publicprompts.art/microworlds-dreambooth-model/) for easier download and integration into services, as allowed by the CreativeML OpenRAIL-M license.
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
DingYao/autotrain-fbert-singlish-5-1943965533
|
DingYao
| 2022-10-31T16:01:02Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"text-classification",
"unk",
"dataset:DingYao/autotrain-data-fbert-singlish-5",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-31T15:59:27Z |
---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- DingYao/autotrain-data-fbert-singlish-5
co2_eq_emissions:
emissions: 2.1095744631067883
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1943965533
- CO2 Emissions (in grams): 2.1096
## Validation Metrics
- Loss: 0.310
- Accuracy: 0.880
- Macro F1: 0.766
- Micro F1: 0.880
- Weighted F1: 0.877
- Macro Precision: 0.826
- Micro Precision: 0.880
- Weighted Precision: 0.877
- Macro Recall: 0.735
- Micro Recall: 0.880
- Weighted Recall: 0.880
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/DingYao/autotrain-fbert-singlish-5-1943965533
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("DingYao/autotrain-fbert-singlish-5-1943965533", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("DingYao/autotrain-fbert-singlish-5-1943965533", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
yunsizhang/distilbert-base-uncased-finetuned-emotion
|
yunsizhang
| 2022-10-31T15:41:40Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9259345317772325
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2292
- Accuracy: 0.926
- F1: 0.9259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8732 | 1.0 | 250 | 0.3363 | 0.903 | 0.9002 |
| 0.2645 | 2.0 | 500 | 0.2292 | 0.926 | 0.9259 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
SiddharthaM/twitter-data-distilbert-base-uncased-sentiment-finetuned-memes-v1
|
SiddharthaM
| 2022-10-31T15:32:40Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-31T15:09:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: twitter-data-distilbert-base-uncased-sentiment-finetuned-memes-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-data-distilbert-base-uncased-sentiment-finetuned-memes-v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2426
- Accuracy: 0.6492
- Precision: 0.6498
- Recall: 0.6492
- F1: 0.6492
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.8263 | 1.0 | 663 | 0.7315 | 0.6536 | 0.6699 | 0.6536 | 0.6502 |
| 0.6658 | 2.0 | 1326 | 0.7801 | 0.6565 | 0.6613 | 0.6565 | 0.6560 |
| 0.5735 | 3.0 | 1989 | 0.8170 | 0.6514 | 0.6579 | 0.6514 | 0.6504 |
| 0.344 | 4.0 | 2652 | 1.0184 | 0.6512 | 0.6525 | 0.6512 | 0.6512 |
| 0.2671 | 5.0 | 3315 | 1.1672 | 0.6503 | 0.6519 | 0.6503 | 0.6504 |
| 0.2236 | 6.0 | 3978 | 1.2426 | 0.6492 | 0.6498 | 0.6492 | 0.6492 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
heooo/KorTextSummarization
|
heooo
| 2022-10-31T15:18:53Z | 0 | 0 | null |
[
"bart",
"ko",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-10-19T14:50:24Z |
---
language: ko
tags:
- bart
license: apache-2.0
---
Copyright (c) SKT and its affiliates and Kakao Brain.
|
taln-ls2n/kpbiomed-models
|
taln-ls2n
| 2022-10-31T14:42:06Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-31T09:34:29Z |
---
license: mit
---
# BART models fine tuned for keyphrase generation
## About
This repository contains 5 models that were trained and evaluated on the three datasets KPBiomed, KP20k and KPTimes.
Details about the models and the KPBiomed dataset can be found in the original paper: Maël Houbre, Florian Boudin and Béatrice Daille. 2022. A Large-Scale Dataset for Biomedical Keyphrase Generation. In Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI 2022).
## How to use
As this repository contains several models, using the Hugging Face API directly will not work.
To use one of the models, you first need to download the desired zip file and unzip it.
For example, if we take the biobart-medium model and unzip it in our source directory, we can load the model with the API as shown below.
```python
from transformers import BartTokenizerFast, BartForConditionalGeneration

tokenizer = BartTokenizerFast.from_pretrained("biobart-medium")
model = BartForConditionalGeneration.from_pretrained("biobart-medium")
model.to("cuda")
```
We will then be able to generate keyphrases with the model using Hugging Face's `generate` function:
```python
inputs = tokenizer(input_text, padding="max_length", max_length=512, truncation=True, return_tensors="pt")
input_ids = inputs.input_ids.to("cuda")
attention_mask = inputs.attention_mask.to("cuda")

outputs = model.generate(inputs=input_ids,
                         attention_mask=attention_mask,
                         num_beams=20,
                         num_return_sequences=20)

keyphrase_sequence = tokenizer.batch_decode(outputs, skip_special_tokens=False)
```
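For the download step described above, a hedged sketch using `huggingface_hub` (the zip filename is an assumption; replace it with the archive you actually want):
```python
import zipfile
from huggingface_hub import hf_hub_download

# Hypothetical filename: the repository hosts several zipped checkpoints,
# "biobart-medium.zip" is assumed here purely for illustration.
zip_path = hf_hub_download(repo_id="taln-ls2n/kpbiomed-models", filename="biobart-medium.zip")
with zipfile.ZipFile(zip_path) as archive:
    archive.extractall(".")  # mirrors "unzip it in our source directory"
```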
|
Ezre/bert-base-finetuned-sts
|
Ezre
| 2022-10-31T14:17:10Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-16T10:25:28Z |
---
tags:
- generated_from_trainer
datasets:
- klue
model-index:
- name: bert-base-finetuned-sts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-sts
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4835
- eval_pearsonr: 0.8970
- eval_runtime: 3.7199
- eval_samples_per_second: 139.521
- eval_steps_per_second: 4.57
- epoch: 5.0
- step: 1825
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
sohamtiwari3120/deberta-v3-base-finetuned-ner
|
sohamtiwari3120
| 2022-10-31T14:16:33Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"token-classification",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-24T12:43:59Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: deberta-v3-base-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base-finetuned-ner
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7679
- Overall Precision: 0.4915
- Overall Recall: 0.6463
- Overall F1: 0.5584
- Overall Accuracy: 0.9555
- Datasetname F1: 0.3304
- Hyperparametername F1: 0.6341
- Hyperparametervalue F1: 0.7463
- Methodname F1: 0.6093
- Metricname F1: 0.7089
- Metricvalue F1: 0.7500
- Taskname F1: 0.4426
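A minimal sketch for extracting these entity types with a token-classification pipeline (the repository id comes from this listing; the aggregation strategy and the example sentence are assumptions):
```python
from transformers import pipeline

# Hypothetical usage; the entity labels (Datasetname, Methodname, ...) are
# taken from the model's own config at load time.
ner = pipeline(
    "token-classification",
    model="sohamtiwari3120/deberta-v3-base-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("We fine-tune BERT on SQuAD and report the F1 metric."))
```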
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | Datasetname F1 | Hyperparametername F1 | Hyperparametervalue F1 | Methodname F1 | Metricname F1 | Metricvalue F1 | Taskname F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|:--------------:|:---------------------:|:----------------------:|:-------------:|:-------------:|:--------------:|:-----------:|
| No log | 1.0 | 132 | 0.5046 | 0.2771 | 0.5041 | 0.3576 | 0.9356 | 0.2405 | 0.1988 | 0.4545 | 0.4638 | 0.4539 | 0.6486 | 0.2793 |
| No log | 2.0 | 264 | 0.3928 | 0.3344 | 0.6463 | 0.4407 | 0.9376 | 0.2449 | 0.3968 | 0.6292 | 0.5641 | 0.5373 | 0.4583 | 0.3359 |
| No log | 3.0 | 396 | 0.4714 | 0.4419 | 0.6179 | 0.5153 | 0.9533 | 0.3822 | 0.5310 | 0.7536 | 0.6262 | 0.6328 | 0.6857 | 0.3291 |
| 0.5663 | 4.0 | 528 | 0.3741 | 0.4493 | 0.7114 | 0.5507 | 0.9509 | 0.4717 | 0.7241 | 0.6353 | 0.5918 | 0.5714 | 0.6275 | 0.4372 |
| 0.5663 | 5.0 | 660 | 0.4202 | 0.3930 | 0.6870 | 0.5 | 0.9458 | 0.2759 | 0.6525 | 0.65 | 0.5596 | 0.7097 | 0.7368 | 0.3573 |
| 0.5663 | 6.0 | 792 | 0.4676 | 0.4244 | 0.6850 | 0.5241 | 0.9473 | 0.3333 | 0.5949 | 0.7397 | 0.5653 | 0.6988 | 0.7568 | 0.3652 |
| 0.5663 | 7.0 | 924 | 0.5744 | 0.4328 | 0.5955 | 0.5013 | 0.9517 | 0.2585 | 0.6167 | 0.5915 | 0.5825 | 0.6386 | 0.7500 | 0.3824 |
| 0.1503 | 8.0 | 1056 | 0.5340 | 0.4309 | 0.6585 | 0.5209 | 0.9499 | 0.2976 | 0.6299 | 0.7105 | 0.6140 | 0.6708 | 0.7568 | 0.3544 |
| 0.1503 | 9.0 | 1188 | 0.5229 | 0.4628 | 0.6829 | 0.5517 | 0.9531 | 0.4630 | 0.5103 | 0.6087 | 0.625 | 0.6541 | 0.7778 | 0.4493 |
| 0.1503 | 10.0 | 1320 | 0.6287 | 0.4978 | 0.6748 | 0.5729 | 0.9563 | 0.4314 | 0.6500 | 0.7463 | 0.6413 | 0.7432 | 0.7568 | 0.4108 |
| 0.1503 | 11.0 | 1452 | 0.5163 | 0.4571 | 0.7033 | 0.5540 | 0.9519 | 0.3925 | 0.5256 | 0.6024 | 0.6828 | 0.6626 | 0.7368 | 0.4466 |
| 0.0735 | 12.0 | 1584 | 0.6737 | 0.5046 | 0.6687 | 0.5752 | 0.9555 | 0.3883 | 0.6615 | 0.6757 | 0.6074 | 0.7051 | 0.7778 | 0.4577 |
| 0.0735 | 13.0 | 1716 | 0.5849 | 0.44 | 0.6931 | 0.5383 | 0.9480 | 0.3770 | 0.6555 | 0.6479 | 0.5922 | 0.6957 | 0.6512 | 0.4071 |
| 0.0735 | 14.0 | 1848 | 0.8314 | 0.5018 | 0.5793 | 0.5377 | 0.9539 | 0.3 | 0.6549 | 0.6667 | 0.5613 | 0.7361 | 0.7368 | 0.4294 |
| 0.0735 | 15.0 | 1980 | 0.5986 | 0.4549 | 0.6768 | 0.5441 | 0.9506 | 0.3793 | 0.6000 | 0.6667 | 0.6181 | 0.7089 | 0.6829 | 0.3978 |
| 0.0408 | 16.0 | 2112 | 0.7579 | 0.4900 | 0.6443 | 0.5566 | 0.9541 | 0.4103 | 0.6032 | 0.6765 | 0.6238 | 0.7123 | 0.6667 | 0.4217 |
| 0.0408 | 17.0 | 2244 | 0.9175 | 0.5285 | 0.6037 | 0.5636 | 0.9565 | 0.4 | 0.6789 | 0.7692 | 0.5949 | 0.7101 | 0.6857 | 0.4122 |
| 0.0408 | 18.0 | 2376 | 0.7771 | 0.5041 | 0.6179 | 0.5553 | 0.9562 | 0.3684 | 0.6207 | 0.7246 | 0.5842 | 0.7383 | 0.6667 | 0.4353 |
| 0.0226 | 19.0 | 2508 | 0.7992 | 0.5213 | 0.6463 | 0.5771 | 0.9569 | 0.32 | 0.6724 | 0.7353 | 0.6485 | 0.7114 | 0.7179 | 0.4510 |
| 0.0226 | 20.0 | 2640 | 0.7679 | 0.4915 | 0.6463 | 0.5584 | 0.9555 | 0.3304 | 0.6341 | 0.7463 | 0.6093 | 0.7089 | 0.7500 | 0.4426 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
guumaster/blot-monster-diffusion
|
guumaster
| 2022-10-31T13:05:30Z | 0 | 4 | null |
[
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2022-10-30T16:51:51Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
## Blot monster model
If you like little blot monsters, you'll love this fine-tuned SD-1.5 model.
It is based on some cute little ink blob monsters found in an image search and was trained for only 5,000 steps.
Use **blotmon** in your prompts.
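A minimal sketch for generating images with this checkpoint through `diffusers` (assuming the weights are available in diffusers format under the repository id shown in this listing; precision and the prompt are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical usage; "blotmon" is the trained token mentioned above.
pipe = StableDiffusionPipeline.from_pretrained(
    "guumaster/blot-monster-diffusion", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")
image = pipe("a cute blotmon creature, ink illustration").images[0]
image.save("blotmon.png")
```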
### Simple samples
|  |  |
|---|---|
|  | |
### Using it as base for other prompts
|  |  |
|---|---|
|  | |
### Beautiful characters starting from blot monsters
|  |  |
|---|---|
|  | |
|
Killerw/autotrain-garry-gen1-8-1942865478
|
Killerw
| 2022-10-31T12:42:05Z | 26 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"vision",
"image-classification",
"dataset:Killerw/autotrain-data-garry-gen1-8",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-31T12:04:56Z |
---
tags:
- autotrain
- vision
- image-classification
datasets:
- Killerw/autotrain-data-garry-gen1-8
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 64.17835964220215
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1942865478
- CO2 Emissions (in grams): 64.1784
## Validation Metrics
- Loss: 2.036
- Accuracy: 0.899
- Macro F1: 0.871
- Micro F1: 0.899
- Weighted F1: 0.878
- Macro Precision: 0.870
- Micro Precision: 0.899
- Weighted Precision: 0.877
- Macro Recall: 0.886
- Micro Recall: 0.899
- Weighted Recall: 0.899
|
ensw/week5-distilbert-base-multilingual-cased-finetuned-eng
|
ensw
| 2022-10-31T12:19:16Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-29T13:36:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: week5-distilbert-base-multilingual-cased-finetuned-eng
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# week5-distilbert-base-multilingual-cased-finetuned-eng
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0849
- Precision: 0.2332
- Recall: 0.2525
- F1: 0.2425
- Accuracy: 0.9754
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1069 | 1.0 | 924 | 0.0904 | 0.2727 | 0.0545 | 0.0909 | 0.9756 |
| 0.0698 | 2.0 | 1848 | 0.0893 | 0.2898 | 0.1838 | 0.2250 | 0.9777 |
| 0.0516 | 3.0 | 2772 | 0.0849 | 0.2332 | 0.2525 | 0.2425 | 0.9754 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
huggingtweets/notzer0c
|
huggingtweets
| 2022-10-31T12:16:21Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-31T12:15:25Z |
---
language: en
thumbnail: http://www.huggingtweets.com/notzer0c/1667218577143/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1580912285500833792/yfBG_atG_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">notzer0</div>
<div style="text-align: center; font-size: 14px;">@notzer0c</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from notzer0.
| Data | notzer0 |
| --- | --- |
| Tweets downloaded | 1355 |
| Retweets | 459 |
| Short tweets | 313 |
| Tweets kept | 583 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/31ntopsh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @notzer0c's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zdcc7xze) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zdcc7xze/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/notzer0c')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Zengwei/librispeech-alignments
|
Zengwei
| 2022-10-31T12:11:14Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-10-12T14:30:49Z |
See https://github.com/CorentinJ/librispeech-alignments
|
autoevaluate/multi-class-classification
|
autoevaluate
| 2022-10-31T11:27:58Z | 94 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-28T13:27:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multi-class-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2009
- Accuracy: 0.928
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
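As a rough illustration, these settings correspond to a `TrainingArguments` configuration along the following lines; the output directory and evaluation strategy are assumptions, and the Adam betas and epsilon listed above are the library defaults:
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the configuration listed above;
# output_dir and evaluation_strategy are illustrative assumptions.
training_args = TrainingArguments(
    output_dir="multi-class-classification",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    evaluation_strategy="epoch",
)
```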
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2643 | 1.0 | 1000 | 0.2009 | 0.928 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
AlekseyKorshuk/is-title-setfit
|
AlekseyKorshuk
| 2022-10-31T11:25:38Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-31T11:13:45Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# AlekseyKorshuk/is-title-setfit
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('AlekseyKorshuk/is-title-setfit')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('AlekseyKorshuk/is-title-setfit')
model = AutoModel.from_pretrained('AlekseyKorshuk/is-title-setfit')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=AlekseyKorshuk/is-title-setfit)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1980 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 1980,
"warmup_steps": 198,
"weight_decay": 0.01
}
```
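A hedged sketch of how these parameters would typically be passed to `SentenceTransformer.fit()`; the base checkpoint and the training pairs below are placeholders, since the actual training data is not described here:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Placeholder base model and data; only the hyperparameters mirror the values above.
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
train_examples = [
    InputExample(texts=["Is This a Title?", "some plain body text"], label=0.0),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    scheduler="WarmupLinear",
    warmup_steps=198,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```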
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
sergiocannata/VANBase-finetuned-brs
|
sergiocannata
| 2022-10-31T11:00:41Z | 30 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"van",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-31T10:32:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- f1
model-index:
- name: VANBase-finetuned-brs-finetuned-brs
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5882352941176471
- name: F1
type: f1
value: 0.6956521739130435
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VANBase-finetuned-brs-finetuned-brs
This model is a fine-tuned version of [Visual-Attention-Network/van-base](https://huggingface.co/Visual-Attention-Network/van-base) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7056
- Accuracy: 0.5882
- F1: 0.6957
- Precision (ppv): 0.6154
- Recall (sensitivity): 0.8
- Specificity: 0.2857
- Npv: 0.5
- Auc: 0.5429
## Model description
More information needed
## Intended uses & limitations
More information needed
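The checkpoint can still be loaded for inference like any image-classification model. A minimal sketch, assuming the feature extractor and model load through the Auto classes; the image path is a placeholder:
```python
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

# "example.png" is a placeholder; use an image from the target domain.
image = Image.open("example.png").convert("RGB")
extractor = AutoFeatureExtractor.from_pretrained("sergiocannata/VANBase-finetuned-brs")
model = AutoModelForImageClassification.from_pretrained("sergiocannata/VANBase-finetuned-brs")
inputs = extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```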
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision (ppv) | Recall (sensitivity) | Specificity | Npv | Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------------:|:--------------------:|:-----------:|:------:|:------:|
| 0.6589 | 6.25 | 100 | 0.6655 | 0.5882 | 0.6316 | 0.6667 | 0.6 | 0.5714 | 0.5 | 0.5857 |
| 0.6262 | 12.49 | 200 | 0.6917 | 0.5294 | 0.6364 | 0.5833 | 0.7 | 0.2857 | 0.4 | 0.4929 |
| 0.4706 | 18.74 | 300 | 0.6776 | 0.5882 | 0.6957 | 0.6154 | 0.8 | 0.2857 | 0.5 | 0.5429 |
| 0.5202 | 24.98 | 400 | 0.7018 | 0.5294 | 0.6 | 0.6 | 0.6 | 0.4286 | 0.4286 | 0.5143 |
| 0.4628 | 31.25 | 500 | 0.6903 | 0.6471 | 0.75 | 0.6429 | 0.9 | 0.2857 | 0.6667 | 0.5929 |
| 0.3525 | 37.49 | 600 | 0.7241 | 0.5294 | 0.6667 | 0.5714 | 0.8 | 0.1429 | 0.3333 | 0.4714 |
| 0.2877 | 43.74 | 700 | 0.8262 | 0.5882 | 0.7407 | 0.5882 | 1.0 | 0.0 | nan | 0.5 |
| 0.2921 | 49.98 | 800 | 0.8058 | 0.4706 | 0.64 | 0.5333 | 0.8 | 0.0 | 0.0 | 0.4 |
| 0.3834 | 56.25 | 900 | 0.7864 | 0.5882 | 0.7407 | 0.5882 | 1.0 | 0.0 | nan | 0.5 |
| 0.2267 | 62.49 | 1000 | 0.5520 | 0.7647 | 0.8182 | 0.75 | 0.9 | 0.5714 | 0.8 | 0.7357 |
| 0.3798 | 68.74 | 1100 | 0.8722 | 0.4706 | 0.64 | 0.5333 | 0.8 | 0.0 | 0.0 | 0.4 |
| 0.2633 | 74.98 | 1200 | 0.7260 | 0.6471 | 0.7273 | 0.6667 | 0.8 | 0.4286 | 0.6 | 0.6143 |
| 0.3439 | 81.25 | 1300 | 1.0187 | 0.4118 | 0.5455 | 0.5 | 0.6 | 0.1429 | 0.2 | 0.3714 |
| 0.2532 | 87.49 | 1400 | 0.8812 | 0.5882 | 0.7407 | 0.5882 | 1.0 | 0.0 | nan | 0.5 |
| 0.0841 | 93.74 | 1500 | 0.8717 | 0.5294 | 0.6923 | 0.5625 | 0.9 | 0.0 | 0.0 | 0.45 |
| 0.3409 | 99.98 | 1600 | 0.7056 | 0.5882 | 0.6957 | 0.6154 | 0.8 | 0.2857 | 0.5 | 0.5429 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
PraveenKishore/ppo-scratch-LunarLander-v2
|
PraveenKishore
| 2022-10-31T10:09:48Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-31T10:01:17Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -58.68 +/- 70.13
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
To learn to code your own PPO agent and train it, check out Unit 8 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit8
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 200000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'PraveenKishore/ppo-scratch-LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
huggingtweets/theysaymaurya
|
huggingtweets
| 2022-10-31T09:57:53Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-31T09:51:56Z |
---
language: en
thumbnail: http://www.huggingtweets.com/theysaymaurya/1667210268854/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1555549427972026371/XphH7Kiz_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ashish Maurya || Web developer || Freelance</div>
<div style="text-align: center; font-size: 14px;">@theysaymaurya</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Ashish Maurya || Web developer || Freelance.
| Data | Ashish Maurya \|\| Web developer \|\| Freelance |
| --- | --- |
| Tweets downloaded | 2089 |
| Retweets | 74 |
| Short tweets | 297 |
| Tweets kept | 1718 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3nuqechw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @theysaymaurya's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2gq3qu4o) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2gq3qu4o/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/theysaymaurya')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
KellyShiiii/bert-finetuned-ner
|
KellyShiiii
| 2022-10-31T08:32:18Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-25T22:30:46Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6370
- Precision: 0.5313
- Recall: 0.4530
- F1: 0.4891
- Accuracy: 0.9290
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 125 | 0.5387 | 0.2190 | 0.0552 | 0.0882 | 0.8991 |
| No log | 2.0 | 250 | 0.4241 | 0.3430 | 0.1750 | 0.2317 | 0.9117 |
| No log | 3.0 | 375 | 0.4721 | 0.3502 | 0.1786 | 0.2366 | 0.9088 |
| 0.1529 | 4.0 | 500 | 0.6204 | 0.4300 | 0.2320 | 0.3014 | 0.9134 |
| 0.1529 | 5.0 | 625 | 0.6479 | 0.4470 | 0.2486 | 0.3195 | 0.9104 |
| 0.1529 | 6.0 | 750 | 0.4640 | 0.4532 | 0.4015 | 0.4258 | 0.9220 |
| 0.1529 | 7.0 | 875 | 0.5170 | 0.4288 | 0.4217 | 0.4253 | 0.9224 |
| 0.0229 | 8.0 | 1000 | 0.5846 | 0.5524 | 0.4273 | 0.4818 | 0.9233 |
| 0.0229 | 9.0 | 1125 | 0.5569 | 0.4644 | 0.4328 | 0.4480 | 0.9234 |
| 0.0229 | 10.0 | 1250 | 0.5818 | 0.5502 | 0.4438 | 0.4913 | 0.9258 |
| 0.0229 | 11.0 | 1375 | 0.6183 | 0.5607 | 0.4254 | 0.4838 | 0.9231 |
| 0.0048 | 12.0 | 1500 | 0.6148 | 0.5385 | 0.4254 | 0.4753 | 0.9250 |
| 0.0048 | 13.0 | 1625 | 0.6271 | 0.4896 | 0.4328 | 0.4594 | 0.9255 |
| 0.0048 | 14.0 | 1750 | 0.6475 | 0.5668 | 0.4217 | 0.4836 | 0.9267 |
| 0.0048 | 15.0 | 1875 | 0.6428 | 0.5704 | 0.4328 | 0.4921 | 0.9282 |
| 0.0016 | 16.0 | 2000 | 0.6577 | 0.5487 | 0.4254 | 0.4793 | 0.9270 |
| 0.0016 | 17.0 | 2125 | 0.6688 | 0.5556 | 0.4144 | 0.4747 | 0.9262 |
| 0.0016 | 18.0 | 2250 | 0.6481 | 0.5434 | 0.4383 | 0.4852 | 0.9282 |
| 0.0016 | 19.0 | 2375 | 0.6432 | 0.5428 | 0.4438 | 0.4883 | 0.9289 |
| 0.0007 | 20.0 | 2500 | 0.6370 | 0.5313 | 0.4530 | 0.4891 | 0.9290 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.8.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
jwonagel/bert-finetuned-ner
|
jwonagel
| 2022-10-31T08:26:30Z | 9 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-28T14:54:42Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: jwonagel/bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# jwonagel/bert-finetuned-ner
This model is a fine-tuned version of [philschmid/gbert-base-germaner](https://huggingface.co/philschmid/gbert-base-germaner) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0260
- Validation Loss: 0.0499
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
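The checkpoint can still be used for TensorFlow inference. A minimal sketch; the German example sentence is an illustrative assumption:
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForTokenClassification

# Minimal usage sketch; the example sentence is illustrative only.
tokenizer = AutoTokenizer.from_pretrained("jwonagel/bert-finetuned-ner")
model = TFAutoModelForTokenClassification.from_pretrained("jwonagel/bert-finetuned-ner")
inputs = tokenizer("Angela Merkel besuchte gestern Berlin.", return_tensors="tf")
predictions = tf.argmax(model(**inputs).logits, axis=-1)
print([model.config.id2label[int(i)] for i in predictions[0]])
```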
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 23520, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0800 | 0.0535 | 0 |
| 0.0402 | 0.0500 | 1 |
| 0.0260 | 0.0499 | 2 |
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.1
|
ViktorDo/SciBERT-WIKI_Climber_Finetuned
|
ViktorDo
| 2022-10-31T08:04:50Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-31T06:37:21Z |
---
tags:
- generated_from_trainer
model-index:
- name: SciBERT-WIKI_Climber_Finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SciBERT-WIKI_Climber_Finetuned
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0700
## Model description
More information needed
## Intended uses & limitations
More information needed
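The checkpoint can still be loaded as a text-classification pipeline. A minimal sketch; the example sentence is an illustrative assumption:
```python
from transformers import pipeline

# Minimal usage sketch; the example sentence is illustrative only.
classifier = pipeline(
    "text-classification",
    model="ViktorDo/SciBERT-WIKI_Climber_Finetuned",
)
print(classifier("The slender stems twine around neighbouring shrubs as the plant climbs."))
```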
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0794 | 1.0 | 2364 | 0.0723 |
| 0.0565 | 2.0 | 4728 | 0.0649 |
| 0.0399 | 3.0 | 7092 | 0.0700 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
sohamtiwari3120/scideberta-cs-tdm-pretrained-finetuned-ner
|
sohamtiwari3120
| 2022-10-31T06:55:39Z | 23 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta",
"token-classification",
"generated_from_trainer",
"dataset:generator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-26T17:43:25Z |
---
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: scideberta-cs-tdm-pretrained-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scideberta-cs-tdm-pretrained-finetuned-ner
This model is a fine-tuned version of [sohamtiwari3120/scideberta-cs-tdm-pretrained](https://huggingface.co/sohamtiwari3120/scideberta-cs-tdm-pretrained) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6836
- Overall Precision: 0.5912
- Overall Recall: 0.6850
- Overall F1: 0.6347
- Overall Accuracy: 0.9609
- Datasetname F1: 0.5882
- Hyperparametername F1: 0.6897
- Hyperparametervalue F1: 0.7619
- Methodname F1: 0.6525
- Metricname F1: 0.7500
- Metricvalue F1: 0.6452
- Taskname F1: 0.5370
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | Datasetname F1 | Hyperparametername F1 | Hyperparametervalue F1 | Methodname F1 | Metricname F1 | Metricvalue F1 | Taskname F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|:--------------:|:---------------------:|:----------------------:|:-------------:|:-------------:|:--------------:|:-----------:|
| No log | 1.0 | 132 | 0.3507 | 0.3972 | 0.6870 | 0.5034 | 0.9410 | 0.4370 | 0.5441 | 0.5814 | 0.6124 | 0.5604 | 0.6207 | 0.3724 |
| No log | 2.0 | 264 | 0.3079 | 0.4066 | 0.7520 | 0.5278 | 0.9430 | 0.4138 | 0.5380 | 0.6222 | 0.5895 | 0.625 | 0.7273 | 0.4340 |
| No log | 3.0 | 396 | 0.3740 | 0.5007 | 0.7195 | 0.5905 | 0.9535 | 0.4882 | 0.6777 | 0.7500 | 0.6254 | 0.6747 | 0.7097 | 0.4962 |
| 0.4014 | 4.0 | 528 | 0.4072 | 0.5161 | 0.7154 | 0.5997 | 0.9540 | 0.5167 | 0.6612 | 0.6374 | 0.6337 | 0.6753 | 0.6061 | 0.5341 |
| 0.4014 | 5.0 | 660 | 0.4088 | 0.5590 | 0.7317 | 0.6338 | 0.9582 | 0.5660 | 0.6667 | 0.7397 | 0.6250 | 0.7226 | 0.75 | 0.5794 |
| 0.4014 | 6.0 | 792 | 0.4810 | 0.5201 | 0.7093 | 0.6002 | 0.9550 | 0.4874 | 0.5970 | 0.6506 | 0.6207 | 0.6708 | 0.6250 | 0.5756 |
| 0.4014 | 7.0 | 924 | 0.5288 | 0.5403 | 0.6809 | 0.6025 | 0.9576 | 0.4915 | 0.6500 | 0.6133 | 0.6255 | 0.7006 | 0.7879 | 0.5389 |
| 0.0912 | 8.0 | 1056 | 0.5281 | 0.5468 | 0.6890 | 0.6097 | 0.9574 | 0.5370 | 0.7143 | 0.6866 | 0.5854 | 0.6939 | 0.7742 | 0.5491 |
| 0.0912 | 9.0 | 1188 | 0.4744 | 0.5371 | 0.7358 | 0.6209 | 0.9560 | 0.5370 | 0.6341 | 0.6753 | 0.6554 | 0.6795 | 0.7059 | 0.5699 |
| 0.0912 | 10.0 | 1320 | 0.5498 | 0.5686 | 0.7073 | 0.6304 | 0.9586 | 0.5370 | 0.6349 | 0.7500 | 0.6553 | 0.7152 | 0.7742 | 0.5573 |
| 0.0912 | 11.0 | 1452 | 0.6424 | 0.5857 | 0.7012 | 0.6383 | 0.9597 | 0.56 | 0.6789 | 0.7246 | 0.6667 | 0.6974 | 0.6875 | 0.5757 |
| 0.0354 | 12.0 | 1584 | 0.5867 | 0.5641 | 0.6890 | 0.6203 | 0.9585 | 0.5185 | 0.6496 | 0.7213 | 0.6619 | 0.7152 | 0.7333 | 0.5402 |
| 0.0354 | 13.0 | 1716 | 0.5500 | 0.5667 | 0.6992 | 0.6260 | 0.9592 | 0.5524 | 0.6829 | 0.7222 | 0.6621 | 0.6466 | 0.7333 | 0.5607 |
| 0.0354 | 14.0 | 1848 | 0.5743 | 0.5780 | 0.7154 | 0.6394 | 0.9596 | 0.5283 | 0.6833 | 0.7222 | 0.6644 | 0.6716 | 0.7742 | 0.5960 |
| 0.0354 | 15.0 | 1980 | 0.6836 | 0.5912 | 0.6850 | 0.6347 | 0.9609 | 0.5882 | 0.6897 | 0.7619 | 0.6525 | 0.7500 | 0.6452 | 0.5370 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
svjack/Stable-Diffusion-Pokemon-zh
|
svjack
| 2022-10-31T06:20:03Z | 0 | 5 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"zh",
"Chinese",
"arxiv:2112.10752",
"arxiv:2205.11487",
"arxiv:2010.02502",
"arxiv:2205.12952",
"license:other",
"region:us"
] |
text-to-image
| 2022-10-30T10:12:39Z |
---
language: zh
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- zh
- Chinese
inference: false
extra_gated_prompt: |-
One more step before getting this model.
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. rinna Co., Ltd. claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
By clicking on "Access repository" below, you accept that your *contact information* (email address and username) can be shared with the model authors as well.
extra_gated_fields:
I have read the License and agree with its terms: checkbox
---
# Chinese Stable Diffusion Pokemon Model Card
<!--

-->
Stable-Diffusion-Pokemon-zh is a Chinese-specific latent text-to-image diffusion model capable of generating Pokemon images given any text input.
This model was trained with [diffusers](https://github.com/huggingface/diffusers), Hugging Face's library for diffusion models.
For more information about our training method, see [train_zh_model.py](https://github.com/svjack/Stable-Diffusion-Pokemon/blob/main/train_zh_model.py).
<!--
[](https://colab.research.google.com/github/rinnakk/japanese-stable-diffusion/blob/master/scripts/txt2img.ipynb)
-->
## Model Details
- **Developed by:** Zhipeng Yang
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** Chinese
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model (LDM)](https://arxiv.org/abs/2112.10752) that used [Stable Diffusion](https://github.com/CompVis/stable-diffusion) as a pre-trained model.
- **Resources for more information:** [https://github.com/svjack/Stable-Diffusion-Pokemon](https://github.com/svjack/Stable-Diffusion-Pokemon)
## Examples
First, install the required packages as follows. The package below is a modified version of [🤗's Diffusers library](https://github.com/huggingface/diffusers), adapted to run Chinese Stable Diffusion.
```bash
pip install git+https://github.com/rinnakk/japanese-stable-diffusion
pip install diffusers==0.4.1
sudo apt-get install git-lfs
git clone https://huggingface.co/svjack/Stable-Diffusion-Pokemon-zh
```
Run this command to log in with your HF Hub token if you haven't before:
```bash
huggingface-cli login
```
Running the pipeline with the LMSDiscreteScheduler scheduler:
```python
import torch
import pandas as pd
from torch import autocast
from diffusers import LMSDiscreteScheduler
import torch
from transformers import BertForSequenceClassification, BertConfig, BertTokenizer, BertForTokenClassification
from transformers import CLIPProcessor, CLIPModel
import numpy as np
from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import *
from japanese_stable_diffusion.pipeline_stable_diffusion import *
class StableDiffusionPipelineWrapper(StableDiffusionPipeline):
@torch.no_grad()
def __call__(
self,
prompt: Union[str, List[str]],
height: int = 512,
width: int = 512,
num_inference_steps: int = 50,
guidance_scale: float = 7.5,
negative_prompt: Optional[Union[str, List[str]]] = None,
num_images_per_prompt: Optional[int] = 1,
eta: float = 0.0,
generator: Optional[torch.Generator] = None,
latents: Optional[torch.FloatTensor] = None,
output_type: Optional[str] = "pil",
return_dict: bool = True,
callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
callback_steps: Optional[int] = 1,
**kwargs,
):
if isinstance(prompt, str):
batch_size = 1
elif isinstance(prompt, list):
batch_size = len(prompt)
else:
raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
if height % 8 != 0 or width % 8 != 0:
raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
if (callback_steps is None) or (
callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
):
raise ValueError(
f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
f" {type(callback_steps)}."
)
# get prompt text embeddings
text_inputs = self.tokenizer(
prompt,
padding="max_length",
max_length=self.tokenizer.model_max_length,
return_tensors="pt",
)
text_input_ids = text_inputs.input_ids
if text_input_ids.shape[-1] > self.tokenizer.model_max_length:
removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :])
logger.warning(
"The following part of your input was truncated because CLIP can only handle sequences up to"
f" {self.tokenizer.model_max_length} tokens: {removed_text}"
)
text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0]
# duplicate text embeddings for each generation per prompt, using mps friendly method
bs_embed, seq_len, _ = text_embeddings.shape
text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
# here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
# of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
# corresponds to doing no classifier free guidance.
do_classifier_free_guidance = guidance_scale > 1.0
# get unconditional embeddings for classifier free guidance
if do_classifier_free_guidance:
uncond_tokens: List[str]
if negative_prompt is None:
uncond_tokens = [""]
elif type(prompt) is not type(negative_prompt):
raise TypeError(
f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
f" {type(prompt)}."
)
elif isinstance(negative_prompt, str):
uncond_tokens = [negative_prompt]
elif batch_size != len(negative_prompt):
raise ValueError(
f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
" the batch size of `prompt`."
)
else:
uncond_tokens = negative_prompt
max_length = text_input_ids.shape[-1]
uncond_input = self.tokenizer(
uncond_tokens,
padding="max_length",
max_length=max_length,
truncation=True,
return_tensors="pt",
)
uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
# duplicate unconditional embeddings for each generation per prompt, using mps friendly method
seq_len = uncond_embeddings.shape[1]
uncond_embeddings = uncond_embeddings.repeat(batch_size, num_images_per_prompt, 1)
uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
# For classifier free guidance, we need to do two forward passes.
# Here we concatenate the unconditional and text embeddings into a single batch
# to avoid doing two forward passes
text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
# get the initial random noise unless the user supplied it
# Unlike in other pipelines, latents need to be generated in the target device
# for 1-to-1 results reproducibility with the CompVis implementation.
# However this currently doesn't work in `mps`.
latents_shape = (batch_size * num_images_per_prompt, self.unet.in_channels, height // 8, width // 8)
latents_dtype = text_embeddings.dtype
if latents is None:
if self.device.type == "mps":
# randn does not work reproducibly on mps
latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
self.device
)
else:
latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
else:
if latents.shape != latents_shape:
raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
latents = latents.to(self.device)
# set timesteps
self.scheduler.set_timesteps(num_inference_steps)
# Some schedulers like PNDM have timesteps as arrays
# It's more optimized to move all timesteps to correct device beforehand
timesteps_tensor = self.scheduler.timesteps.to(self.device)
# scale the initial noise by the standard deviation required by the scheduler
latents = latents * self.scheduler.init_noise_sigma
# prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
# eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
# eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
# and should be between [0, 1]
accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
extra_step_kwargs = {}
if accepts_eta:
extra_step_kwargs["eta"] = eta
for i, t in enumerate(self.progress_bar(timesteps_tensor)):
# expand the latents if we are doing classifier free guidance
latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
# predict the noise residual
###text_embeddings
#print("before :" ,text_embeddings.shape)
eh_shape = text_embeddings.shape
if i == 0:
eh_pad = torch.zeros((eh_shape[0], eh_shape[1], 768 - 512))
eh_pad = eh_pad.to(self.device)
text_embeddings = torch.concat([text_embeddings, eh_pad], -1)
#print("after :" ,text_embeddings.shape)
noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
# perform guidance
if do_classifier_free_guidance:
noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
# compute the previous noisy sample x_t -> x_t-1
latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
# call the callback, if provided
if callback is not None and i % callback_steps == 0:
callback(i, t, latents)
latents = 1 / 0.18215 * latents
image = self.vae.decode(latents).sample
image = (image / 2 + 0.5).clamp(0, 1)
# we always cast to float32 as this does not cause significant overhead and is compatible with bfloa16
image = image.cpu().permute(0, 2, 3, 1).float().numpy()
if self.safety_checker is not None:
safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(
self.device
)
image, has_nsfw_concept = self.safety_checker(
images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype)
)
else:
has_nsfw_concept = None
if output_type == "pil":
image = self.numpy_to_pil(image)
if not return_dict:
return (image, has_nsfw_concept)
return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012,
beta_schedule="scaled_linear", num_train_timesteps=1000)
#pretrained_model_name_or_path = "zh_model_20000"
#### sudo apt-get install git-lfs
#### git clone https://huggingface.co/svjack/Stable-Diffusion-Pokemon-zh
pretrained_model_name_or_path = "Stable-Diffusion-Pokemon-zh"
tokenizer = BertTokenizer.from_pretrained(pretrained_model_name_or_path, subfolder = "tokenizer")
text_encoder = BertForTokenClassification.from_pretrained(pretrained_model_name_or_path, subfolder = "text_encoder")
vae = AutoencoderKL.from_pretrained(pretrained_model_name_or_path, subfolder="vae")
unet = UNet2DConditionModel.from_pretrained(pretrained_model_name_or_path, subfolder="unet")
tokenizer.model_max_length = 77
pipeline_wrap = StableDiffusionPipelineWrapper(
text_encoder=text_encoder,
vae=vae,
unet=unet,
tokenizer=tokenizer,
scheduler=scheduler,
safety_checker=StableDiffusionSafetyChecker.from_pretrained("CompVis/stable-diffusion-safety-checker"),
feature_extractor=CLIPFeatureExtractor.from_pretrained("openai/clip-vit-base-patch32"),
)
pipeline_wrap.safety_checker = lambda images, clip_input: (images, False)
pipeline_wrap = pipeline_wrap.to("cuda")
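# The prompt below is Chinese; roughly: "a cartoon character wearing a potted plant on its head"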
imgs = pipeline_wrap("一个头上戴着盆栽的卡通人物",
num_inference_steps = 100
)
image = imgs.images[0]
image.save("output.png")
```
### Generator Results comparison
[https://github.com/svjack/Stable-Diffusion-Pokemon](https://github.com/svjack/Stable-Diffusion-Pokemon)



<!--
_Note: `JapaneseStableDiffusionPipeline` is almost same as diffusers' `StableDiffusionPipeline` but added some lines to initialize our models properly._
## Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1._
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with Japanese captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a subset of a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Japanese Stable Diffusion was trained on Japanese datasets including [LAION-5B](https://laion.ai/blog/laion-5b/) with Japanese captions,
which consists of images that are primarily limited to Japanese descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model.
Further, the ability of the model to generate content with non-Japanese prompts is significantly worse than with Japanese-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
## Training
**Training Data**
We used the following dataset for training the model:
- Approximately 100 million images with Japanese captions, including the Japanese subset of [LAION-5B](https://laion.ai/blog/laion-5b/).
**Training Procedure**
Japanese Stable Diffusion has the same architecture as Stable Diffusion and was trained by using Stable Diffusion. Because Stable Diffusion was trained on English dataset and the CLIP tokenizer is basically for English, we had 2 stages to transfer to a language-specific model, inspired by [PITI](https://arxiv.org/abs/2205.12952).
1. Train a Japanese-specific text encoder with our Japanese tokenizer from scratch with the latent diffusion model fixed. This stage is expected to map Japanese captions to Stable Diffusion's latent space.
2. Fine-tune the text encoder and the latent diffusion model jointly. This stage is expected to generate Japanese-style images more.
[//]: # (_Note: Japanese Stable Diffusion is still running and this checkpoint is the current best one. We might update to a better checkpoint via this repository._)
-->
|
LYTinn/finetuning-sentiment-model-tweet-gpt2
|
LYTinn
| 2022-10-31T06:06:59Z | 316 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-31T05:23:10Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-tweet-gpt2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-tweet-gpt2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3646
- Accuracy: 0.6908
- F1: 0.6908
## Model description
More information needed
## Intended uses & limitations
More information needed
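The checkpoint can still be loaded as a sequence classifier. A minimal sketch; the example tweet is illustrative, and the label names may simply be LABEL_0/1/2 depending on the config:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("LYTinn/finetuning-sentiment-model-tweet-gpt2")
model = AutoModelForSequenceClassification.from_pretrained("LYTinn/finetuning-sentiment-model-tweet-gpt2")
# GPT-2 has no padding token by default; reuse EOS in case padded batches are needed.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
inputs = tokenizer("what a great day!", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```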
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1
|
google/maxim-s3-deblurring-realblur-j
|
google
| 2022-10-31T05:10:40Z | 0 | 1 |
keras
|
[
"keras",
"tf-keras",
"vision",
"maxim",
"image-to-image",
"en",
"dataset:realblur_j",
"arxiv:2201.02973",
"license:apache-2.0",
"region:us"
] |
image-to-image
| 2022-10-19T05:51:38Z |
---
license: apache-2.0
library_name: keras
language: en
tags:
- vision
- maxim
- image-to-image
datasets:
- realblur_j
---
# MAXIM pre-trained on RealBlur-J for image deblurring
MAXIM model pre-trained for image deblurring. It was introduced in the paper [MAXIM: Multi-Axis MLP for Image Processing](https://arxiv.org/abs/2201.02973) by Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, Yinxiao Li and first released in [this repository](https://github.com/google-research/maxim).
Disclaimer: The team releasing MAXIM did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MAXIM introduces a shared MLP-based backbone for different image processing tasks such as image deblurring, deraining, denoising, dehazing, low-light image enhancement, and retouching. The following figure depicts the main components of MAXIM:

## Training procedure and results
The authors didn't release the training code. For more details on how the model was trained, refer to the [original paper](https://arxiv.org/abs/2201.02973).
As per the [table](https://github.com/google-research/maxim#results-and-pre-trained-models), the model achieves a PSNR of 32.84 and an SSIM of 0.935.
## Intended uses & limitations
You can use the raw model for image deblurring tasks.
The model is [officially released in JAX](https://github.com/google-research/maxim). It was ported to TensorFlow in [this repository](https://github.com/sayakpaul/maxim-tf).
### How to use
Here is how to use this model:
```python
from huggingface_hub import from_pretrained_keras
from PIL import Image
import tensorflow as tf
import numpy as np
import requests
url = "https://github.com/sayakpaul/maxim-tf/raw/main/images/Deblurring/input/1fromGOPR0950.png"
image = Image.open(requests.get(url, stream=True).raw)
image = np.array(image)
image = tf.convert_to_tensor(image)
image = tf.image.resize(image, (256, 256))
model = from_pretrained_keras("google/maxim-s3-deblurring-realblur-j")
predictions = model.predict(tf.expand_dims(image, 0))
```
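The snippet stops at the raw prediction. Assuming the output is a float image in the [0, 1] range (possibly wrapped in a list of stage outputs), it can be converted back to a viewable image roughly as follows:
```python
import numpy as np
from PIL import Image

# Keep the final stage output if the model returns a (nested) list of predictions.
outputs = predictions
while isinstance(outputs, list):
    outputs = outputs[-1]
restored = np.clip(np.array(outputs[0], dtype=np.float32), 0.0, 1.0)
Image.fromarray((restored * 255.0).astype(np.uint8)).save("deblurred.png")
```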
For a more elaborate prediction pipeline, refer to [this Colab Notebook](https://colab.research.google.com/github/sayakpaul/maxim-tf/blob/main/notebooks/inference-dynamic-resize.ipynb).
### Citation
```bibtex
@article{tu2022maxim,
title={MAXIM: Multi-Axis MLP for Image Processing},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={CVPR},
year={2022},
}
```
|
google/maxim-s3-denoising-sidd
|
google
| 2022-10-31T05:10:14Z | 0 | 13 |
keras
|
[
"keras",
"tf-keras",
"vision",
"maxim",
"image-to-image",
"en",
"dataset:sidd",
"arxiv:2201.02973",
"license:apache-2.0",
"region:us"
] |
image-to-image
| 2022-10-18T18:21:36Z |
---
license: apache-2.0
library_name: keras
language: en
tags:
- vision
- maxim
- image-to-image
datasets:
- sidd
---
# MAXIM pre-trained on SIDD for image denoising
MAXIM model pre-trained for image denoising. It was introduced in the paper [MAXIM: Multi-Axis MLP for Image Processing](https://arxiv.org/abs/2201.02973) by Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, Yinxiao Li and first released in [this repository](https://github.com/google-research/maxim).
Disclaimer: The team releasing MAXIM did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MAXIM introduces a shared MLP-based backbone for different image processing tasks such as image deblurring, deraining, denoising, dehazing, low-light image enhancement, and retouching. The following figure depicts the main components of MAXIM:

## Training procedure and results
The authors didn't release the training code. For more details on how the model was trained, refer to the [original paper](https://arxiv.org/abs/2201.02973).
As per the [table](https://github.com/google-research/maxim#results-and-pre-trained-models), the model achieves a PSNR of 39.96 and an SSIM of 0.96.
## Intended uses & limitations
You can use the raw model for image denoising tasks.
The model is [officially released in JAX](https://github.com/google-research/maxim). It was ported to TensorFlow in [this repository](https://github.com/sayakpaul/maxim-tf).
### How to use
Here is how to use this model:
```python
from huggingface_hub import from_pretrained_keras
from PIL import Image
import tensorflow as tf
import numpy as np
import requests
url = "https://github.com/sayakpaul/maxim-tf/raw/main/images/Denoising/input/0011_23.png"
image = Image.open(requests.get(url, stream=True).raw)
image = np.array(image)
image = tf.convert_to_tensor(image)
image = tf.image.resize(image, (256, 256))
model = from_pretrained_keras("google/maxim-s3-denoising-sidd")
predictions = model.predict(tf.expand_dims(image, 0))
```
For a more elaborate prediction pipeline, refer to [this Colab Notebook](https://colab.research.google.com/github/sayakpaul/maxim-tf/blob/main/notebooks/inference-dynamic-resize.ipynb).
### Citation
```bibtex
@article{tu2022maxim,
title={MAXIM: Multi-Axis MLP for Image Processing},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={CVPR},
year={2022},
}
```
|
google/maxim-s2-deraining-rain13k
|
google
| 2022-10-31T05:09:41Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"vision",
"maxim",
"image-to-image",
"en",
"dataset:rain13k",
"arxiv:2201.02973",
"license:apache-2.0",
"region:us"
] |
image-to-image
| 2022-10-19T06:03:14Z |
---
license: apache-2.0
library_name: keras
language: en
tags:
- vision
- maxim
- image-to-image
datasets:
- rain13k
---
# MAXIM pre-trained on Rain13k for image deraining
MAXIM model pre-trained for image deraining. It was introduced in the paper [MAXIM: Multi-Axis MLP for Image Processing](https://arxiv.org/abs/2201.02973) by Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, Yinxiao Li and first released in [this repository](https://github.com/google-research/maxim).
Disclaimer: The team releasing MAXIM did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MAXIM introduces a shared MLP-based backbone for different image processing tasks such as image deblurring, deraining, denoising, dehazing, low-light image enhancement, and retouching. The following figure depicts the main components of MAXIM:

## Training procedure and results
The authors didn't release the training code. For more details on how the model was trained, refer to the [original paper](https://arxiv.org/abs/2201.02973).
As per the [table](https://github.com/google-research/maxim#results-and-pre-trained-models), the model achieves a PSNR of 33.24 and an SSIM of 0.933.
## Intended uses & limitations
You can use the raw model for image deraining tasks.
The model is [officially released in JAX](https://github.com/google-research/maxim). It was ported to TensorFlow in [this repository](https://github.com/sayakpaul/maxim-tf).
### How to use
Here is how to use this model:
```python
from huggingface_hub import from_pretrained_keras
from PIL import Image
import tensorflow as tf
import numpy as np
import requests
url = "https://github.com/sayakpaul/maxim-tf/raw/main/images/Deraining/input/55.png"
image = Image.open(requests.get(url, stream=True).raw)
image = np.array(image)
image = tf.convert_to_tensor(image)
image = tf.image.resize(image, (256, 256))
model = from_pretrained_keras("google/maxim-s2-deraining-rain13k")
predictions = model.predict(tf.expand_dims(image, 0))
```
For a more elaborate prediction pipeline, refer to [this Colab Notebook](https://colab.research.google.com/github/sayakpaul/maxim-tf/blob/main/notebooks/inference-dynamic-resize.ipynb).
### Citation
```bibtex
@article{tu2022maxim,
title={MAXIM: Multi-Axis MLP for Image Processing},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={CVPR},
year={2022},
}
```
|
google/maxim-s2-dehazing-sots-indoor
|
google
| 2022-10-31T05:08:51Z | 0 | 2 |
keras
|
[
"keras",
"tf-keras",
"vision",
"maxim",
"image-to-image",
"en",
"dataset:sots-indoor",
"arxiv:2201.02973",
"license:apache-2.0",
"region:us"
] |
image-to-image
| 2022-10-19T06:22:07Z |
---
license: apache-2.0
library_name: keras
language: en
tags:
- vision
- maxim
- image-to-image
datasets:
- sots-indoor
---
# MAXIM pre-trained on RESIDE-Indoor for image dehazing
MAXIM model pre-trained for image dehazing. It was introduced in the paper [MAXIM: Multi-Axis MLP for Image Processing](https://arxiv.org/abs/2201.02973) by Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, Yinxiao Li and first released in [this repository](https://github.com/google-research/maxim).
Disclaimer: The team releasing MAXIM did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MAXIM introduces a shared MLP-based backbone for different image processing tasks such as image deblurring, deraining, denoising, dehazing, low-light image enhancement, and retouching. The following figure depicts the main components of MAXIM:

## Training procedure and results
The authors didn't release the training code. For more details on how the model was trained, refer to the [original paper](https://arxiv.org/abs/2201.02973).
As per the [table](https://github.com/google-research/maxim#results-and-pre-trained-models), the model achieves a PSNR of 38.11 and an SSIM of 0.991.
## Intended uses & limitations
You can use the raw model for image dehazing tasks.
The model is [officially released in JAX](https://github.com/google-research/maxim). It was ported to TensorFlow in [this repository](https://github.com/sayakpaul/maxim-tf).
### How to use
Here is how to use this model:
```python
from huggingface_hub import from_pretrained_keras
from PIL import Image
import tensorflow as tf
import numpy as np
import requests
url = "https://github.com/sayakpaul/maxim-tf/raw/main/images/Dehazing/input/1440_10.png"
image = Image.open(requests.get(url, stream=True).raw)
image = np.array(image)
image = tf.convert_to_tensor(image)
image = tf.image.resize(image, (256, 256))
model = from_pretrained_keras("google/maxim-s2-dehazing-sots-indoor")
predictions = model.predict(tf.expand_dims(image, 0))
```
For a more elaborate prediction pipeline, refer to [this Colab Notebook](https://colab.research.google.com/github/sayakpaul/maxim-tf/blob/main/notebooks/inference-dynamic-resize.ipynb).
### Citation
```bibtex
@article{tu2022maxim,
title={MAXIM: Multi-Axis MLP for Image Processing},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={CVPR},
year={2022},
}
```
|
google/maxim-s3-deblurring-gopro
|
google
| 2022-10-31T05:04:49Z | 0 | 19 |
keras
|
[
"keras",
"tf-keras",
"vision",
"maxim",
"image-to-image",
"en",
"dataset:gopro",
"arxiv:2201.02973",
"license:apache-2.0",
"region:us"
] |
image-to-image
| 2022-10-18T18:28:35Z |
---
license: apache-2.0
library_name: keras
language: en
tags:
- vision
- maxim
- image-to-image
datasets:
- gopro
---
# MAXIM pre-trained on GoPro for image deblurring
MAXIM model pre-trained for image deblurring. It was introduced in the paper [MAXIM: Multi-Axis MLP for Image Processing](https://arxiv.org/abs/2201.02973) by Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, Yinxiao Li and first released in [this repository](https://github.com/google-research/maxim).
Disclaimer: The team releasing MAXIM did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MAXIM introduces a shared MLP-based backbone for different image processing tasks such as image deblurring, deraining, denoising, dehazing, low-light image enhancement, and retouching. The following figure depicts the main components of MAXIM:

## Training procedure and results
The authors didn't release the training code. For more details on how the model was trained, refer to the [original paper](https://arxiv.org/abs/2201.02973).
As per the [table](https://github.com/google-research/maxim#results-and-pre-trained-models), the model achieves a PSNR of 32.86 and an SSIM of 0.961.
## Intended uses & limitations
You can use the raw model for image deblurring tasks.
The model is [officially released in JAX](https://github.com/google-research/maxim). It was ported to TensorFlow in [this repository](https://github.com/sayakpaul/maxim-tf).
### How to use
Here is how to use this model:
```python
from huggingface_hub import from_pretrained_keras
from PIL import Image
import tensorflow as tf
import numpy as np
import requests
url = "https://github.com/sayakpaul/maxim-tf/raw/main/images/Deblurring/input/1fromGOPR0950.png"
image = Image.open(requests.get(url, stream=True).raw)
image = np.array(image)
image = tf.convert_to_tensor(image)
image = tf.image.resize(image, (256, 256))
model = from_pretrained_keras("google/maxim-s3-deblurring-gopro")
predictions = model.predict(tf.expand_dims(image, 0))
```
For a more elaborate prediction pipeline, refer to [this Colab Notebook](https://colab.research.google.com/github/sayakpaul/maxim-tf/blob/main/notebooks/inference-dynamic-resize.ipynb).
### Citation
```bibtex
@article{tu2022maxim,
title={MAXIM: Multi-Axis MLP for Image Processing},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={CVPR},
year={2022},
}
```
|
summary71/testpyramidsrnd
|
summary71
| 2022-10-31T04:31:35Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2022-10-31T04:31:30Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: summary71/testpyramidsrnd
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
beautifulpichai/all-mpnet-base-v2-ledgar-full-contrastive
|
beautifulpichai
| 2022-10-31T02:24:55Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-31T01:04:03Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# beautifulpichai/all-mpnet-base-v2-ledgar-full-contrastive
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('beautifulpichai/all-mpnet-base-v2-ledgar-full-contrastive')
embeddings = model.encode(sentences)
print(embeddings)
```
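The model name suggests contrastive fine-tuning on LEDGAR contract provisions, so a natural follow-up is scoring the similarity of two clause texts. A minimal sketch (the example sentences are illustrative):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('beautifulpichai/all-mpnet-base-v2-ledgar-full-contrastive')
embeddings = model.encode([
    "Governing Law.",
    "This Agreement shall be governed by the laws of the State of Delaware.",
])
print(util.cos_sim(embeddings[0], embeddings[1]))  # cosine similarity of the two clauses
```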
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 10000 with parameters:
```
{'batch_size': 12, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 10000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
clohre1999/Lohre1
|
clohre1999
| 2022-10-31T00:11:13Z | 0 | 0 | null |
[
"doi:10.57967/hf/0078",
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-10-31T00:07:16Z |
---
license: creativeml-openrail-m
---
```
git lfs install
git clone https://huggingface.co/clohre1999/Lohre1
```
|
sohamtiwari3120/scideberta-cs-tdm-pretrained-finetuned-ner-finetuned-ner
|
sohamtiwari3120
| 2022-10-31T00:08:15Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"token-classification",
"generated_from_trainer",
"dataset:generator",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-30T21:35:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: scideberta-cs-tdm-pretrained-finetuned-ner-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scideberta-cs-tdm-pretrained-finetuned-ner-finetuned-ner
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7548
- Overall Precision: 0.5582
- Overall Recall: 0.7048
- Overall F1: 0.6230
- Overall Accuracy: 0.9578
- Datasetname F1: 0.6225
- Hyperparametername F1: 0.5707
- Hyperparametervalue F1: 0.6796
- Methodname F1: 0.6812
- Metricname F1: 0.5039
- Metricvalue F1: 0.7097
- Taskname F1: 0.5776
## Model description
More information needed
## Intended uses & limitations
More information needed
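Pending a fuller description from the authors, here is a minimal usage sketch with the 🤗 Transformers token-classification pipeline (the model id is taken from this card, the example sentence is illustrative, and the entity labels are assumed to follow the Datasetname/Methodname/Taskname families reported in the metrics below):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="sohamtiwari3120/scideberta-cs-tdm-pretrained-finetuned-ner-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

text = "We fine-tune BERT on SQuAD and report F1 on the development set."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```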
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | Datasetname F1 | Hyperparametername F1 | Hyperparametervalue F1 | Methodname F1 | Metricname F1 | Metricvalue F1 | Taskname F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|:--------------:|:---------------------:|:----------------------:|:-------------:|:-------------:|:--------------:|:-----------:|
| No log | 1.0 | 132 | 0.6819 | 0.2314 | 0.3769 | 0.2867 | 0.9125 | 0.1270 | 0.2305 | 0.2479 | 0.4072 | 0.3119 | 0.0635 | 0.2366 |
| No log | 2.0 | 264 | 0.4337 | 0.3977 | 0.5687 | 0.4681 | 0.9429 | 0.4516 | 0.3704 | 0.5419 | 0.5900 | 0.2446 | 0.4340 | 0.4609 |
| No log | 3.0 | 396 | 0.3968 | 0.3617 | 0.6367 | 0.4613 | 0.9335 | 0.4828 | 0.3586 | 0.5649 | 0.5331 | 0.3190 | 0.4800 | 0.4585 |
| 0.5603 | 4.0 | 528 | 0.3730 | 0.3605 | 0.6327 | 0.4593 | 0.9363 | 0.4750 | 0.3789 | 0.6066 | 0.5376 | 0.3229 | 0.4571 | 0.4375 |
| 0.5603 | 5.0 | 660 | 0.4132 | 0.4650 | 0.6871 | 0.5546 | 0.9482 | 0.4943 | 0.4965 | 0.6577 | 0.6465 | 0.4387 | 0.5306 | 0.5039 |
| 0.5603 | 6.0 | 792 | 0.4071 | 0.4482 | 0.6884 | 0.5429 | 0.9468 | 0.5541 | 0.4341 | 0.5991 | 0.6037 | 0.4865 | 0.64 | 0.5688 |
| 0.5603 | 7.0 | 924 | 0.4077 | 0.4830 | 0.6952 | 0.5700 | 0.9508 | 0.5063 | 0.4953 | 0.7032 | 0.6397 | 0.4286 | 0.6263 | 0.5469 |
| 0.1161 | 8.0 | 1056 | 0.5215 | 0.5426 | 0.6925 | 0.6085 | 0.9577 | 0.6423 | 0.5190 | 0.7115 | 0.6711 | 0.5175 | 0.6286 | 0.5797 |
| 0.1161 | 9.0 | 1188 | 0.5192 | 0.4859 | 0.7020 | 0.5743 | 0.9518 | 0.5578 | 0.5195 | 0.5992 | 0.6571 | 0.4744 | 0.5532 | 0.5611 |
| 0.1161 | 10.0 | 1320 | 0.5301 | 0.5478 | 0.7020 | 0.6154 | 0.9563 | 0.5732 | 0.5782 | 0.7619 | 0.6462 | 0.4675 | 0.7253 | 0.5727 |
| 0.1161 | 11.0 | 1452 | 0.4965 | 0.5139 | 0.7048 | 0.5944 | 0.9531 | 0.5857 | 0.5290 | 0.7189 | 0.6639 | 0.4235 | 0.6476 | 0.5532 |
| 0.049 | 12.0 | 1584 | 0.6207 | 0.5713 | 0.6925 | 0.6261 | 0.9582 | 0.64 | 0.5377 | 0.7594 | 0.7207 | 0.5070 | 0.6136 | 0.5530 |
| 0.049 | 13.0 | 1716 | 0.6056 | 0.5360 | 0.7088 | 0.6104 | 0.9570 | 0.5921 | 0.5035 | 0.7000 | 0.7115 | 0.4648 | 0.6939 | 0.5854 |
| 0.049 | 14.0 | 1848 | 0.6540 | 0.5804 | 0.6925 | 0.6315 | 0.9599 | 0.6466 | 0.5344 | 0.7324 | 0.6874 | 0.5401 | 0.7083 | 0.5980 |
| 0.049 | 15.0 | 1980 | 0.5911 | 0.5068 | 0.7048 | 0.5896 | 0.9528 | 0.5399 | 0.5176 | 0.7150 | 0.6397 | 0.4625 | 0.6800 | 0.5865 |
| 0.0225 | 16.0 | 2112 | 0.5788 | 0.5186 | 0.7007 | 0.5961 | 0.9531 | 0.5874 | 0.5011 | 0.7177 | 0.6796 | 0.4810 | 0.6744 | 0.5517 |
| 0.0225 | 17.0 | 2244 | 0.6097 | 0.5399 | 0.6912 | 0.6062 | 0.9547 | 0.5811 | 0.5744 | 0.6900 | 0.6439 | 0.5033 | 0.7253 | 0.5470 |
| 0.0225 | 18.0 | 2376 | 0.7006 | 0.5714 | 0.6748 | 0.6188 | 0.9590 | 0.6471 | 0.5645 | 0.6465 | 0.6710 | 0.5426 | 0.6809 | 0.5755 |
| 0.0149 | 19.0 | 2508 | 0.6051 | 0.5400 | 0.7252 | 0.6190 | 0.9554 | 0.6443 | 0.5514 | 0.6547 | 0.6777 | 0.5132 | 0.6947 | 0.6 |
| 0.0149 | 20.0 | 2640 | 0.7220 | 0.5995 | 0.6884 | 0.6409 | 0.9605 | 0.6429 | 0.5570 | 0.6806 | 0.7339 | 0.5865 | 0.7416 | 0.5540 |
| 0.0149 | 21.0 | 2772 | 0.6912 | 0.5977 | 0.7034 | 0.6462 | 0.9599 | 0.6377 | 0.5387 | 0.7343 | 0.7281 | 0.5846 | 0.7273 | 0.5899 |
| 0.0149 | 22.0 | 2904 | 0.6952 | 0.5802 | 0.6939 | 0.6320 | 0.9574 | 0.5867 | 0.5445 | 0.7358 | 0.6951 | 0.5736 | 0.7473 | 0.5830 |
| 0.0097 | 23.0 | 3036 | 0.7600 | 0.6241 | 0.6912 | 0.6559 | 0.9618 | 0.6119 | 0.5895 | 0.7629 | 0.7356 | 0.5512 | 0.6897 | 0.5837 |
| 0.0097 | 24.0 | 3168 | 0.7184 | 0.5924 | 0.6980 | 0.6408 | 0.9598 | 0.6486 | 0.5640 | 0.7179 | 0.7146 | 0.5630 | 0.7174 | 0.5714 |
| 0.0097 | 25.0 | 3300 | 0.7120 | 0.5485 | 0.7007 | 0.6153 | 0.9566 | 0.6579 | 0.5441 | 0.6667 | 0.6993 | 0.4774 | 0.6522 | 0.5766 |
| 0.0097 | 26.0 | 3432 | 0.7914 | 0.6009 | 0.7088 | 0.6504 | 0.9583 | 0.6443 | 0.6070 | 0.7293 | 0.7082 | 0.5645 | 0.6737 | 0.5872 |
| 0.0065 | 27.0 | 3564 | 0.7986 | 0.5800 | 0.6952 | 0.6324 | 0.9589 | 0.6309 | 0.5521 | 0.7150 | 0.7281 | 0.4844 | 0.7097 | 0.5714 |
| 0.0065 | 28.0 | 3696 | 0.7767 | 0.6087 | 0.7007 | 0.6515 | 0.9599 | 0.6364 | 0.5824 | 0.7526 | 0.7169 | 0.5238 | 0.7097 | 0.6038 |
| 0.0065 | 29.0 | 3828 | 0.7435 | 0.6077 | 0.6912 | 0.6467 | 0.9612 | 0.6479 | 0.5674 | 0.7396 | 0.7088 | 0.5255 | 0.7333 | 0.6066 |
| 0.0065 | 30.0 | 3960 | 0.8305 | 0.6230 | 0.6857 | 0.6528 | 0.9613 | 0.6483 | 0.5650 | 0.7817 | 0.7341 | 0.4715 | 0.7174 | 0.5962 |
| 0.0051 | 31.0 | 4092 | 0.7180 | 0.5776 | 0.7088 | 0.6365 | 0.9583 | 0.6194 | 0.5825 | 0.7393 | 0.6874 | 0.4923 | 0.7021 | 0.5962 |
| 0.0051 | 32.0 | 4224 | 0.7526 | 0.5708 | 0.6857 | 0.6230 | 0.9585 | 0.64 | 0.5276 | 0.7246 | 0.7083 | 0.4627 | 0.6813 | 0.5922 |
| 0.0051 | 33.0 | 4356 | 0.7548 | 0.5582 | 0.7048 | 0.6230 | 0.9578 | 0.6225 | 0.5707 | 0.6796 | 0.6812 | 0.5039 | 0.7097 | 0.5776 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
uripper/HESS
|
uripper
| 2022-10-30T22:58:54Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"optimum_graphcore",
"bert",
"fill-mask",
"license:cc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-23T04:20:36Z |
---
license: cc
widget:
- text: "TimeControl: Blitz, BlackElo: [MASK], WhiteElo: 1320, Moves: 1.e4 c6 2.d4 d5 3.Nc3 dxe4 4.Nxe4 Bf5 5.Ng3 Bg6 6.h4 h6 7.Nf3 Nd7 8.h5 Bh7 9.Bd3 Bxd3 10.Qxd3 e6 11.Bf4 Ngf6 12.O-O-O Be7 13.Ne4 Qa5 14.Kb1 O-O 15.Nxf6+ Nxf6 16.Ne5 Rad8 17.Qe2 c5 18.Ng6 fxg6 19.Qxe6+ Kh8 20.hxg6 Ng8 21.Bxh6 gxh6 22.Rxh6+ Nxh6 23.Qxe7 Nf7 24.gxf7 Kg7 25.Rd3 Rd6 26.Rg3+ Rg6 27.Qe5+ Kxf7 28.Qf5+ Rf6 29.Qd7# 1-0"
example_title: "Game 1"
- text: "TimeControl: Classical, BlackElo: 2780, WhiteElo: 2625, Moves: 1.e4 c6 2.d4 d5 3.Nc3 dxe4 4.Nxe4 Bf5 5.[MASK] Bg6 6.h4 h6 7.Nf3 Nd7 8.h5 Bh7 9.Bd3 Bxd3 10.Qxd3 e6 11.Bf4 Ngf6 12.O-O-O Be7 13.Ne4 Qa5 14.Kb1 O-O 15.Nxf6+ Nxf6 16.Ne5 Rad8 17.Qe2 c5 18.Ng6 fxg6 19.Qxe6+ Kh8 20.hxg6 Ng8 21.Bxh6 gxh6 22.Rxh6+ Nxh6 23.Qxe7 Nf7 24.gxf7 Kg7 25.Rd3 Rd6 26.Rg3+ Rg6 27.Qe5+ Kxf7 28.Qf5+ Rf6 29.Qd7# 1-0"
example_title: "Game 2"
---
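The card ships only the widget examples above; here is a minimal sketch of querying the checkpoint with the 🤗 Transformers fill-mask pipeline using the same input format (the game string is shortened here for brevity):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="uripper/HESS")

game = (
    "TimeControl: Blitz, BlackElo: [MASK], WhiteElo: 1320, "
    "Moves: 1.e4 c6 2.d4 d5 3.Nc3 dxe4 4.Nxe4 Bf5 5.Ng3 Bg6 6.h4 h6 7.Nf3 Nd7 1-0"
)
for prediction in fill(game):
    print(prediction["token_str"], round(prediction["score"], 4))
```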
|
g30rv17ys/ddpm-hkuoct-exu-128-200ep
|
g30rv17ys
| 2022-10-30T22:44:36Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-10-30T19:50:01Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-hkuoct-exu-128-200ep
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
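Until the snippet above is filled in, here is a minimal sketch using the standard 🤗 Diffusers `DDPMPipeline` API (the repository id is taken from this card; the 128×128 sample size is inferred from the checkpoint name and is an assumption):

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("g30rv17ys/ddpm-hkuoct-exu-128-200ep")
image = pipeline().images[0]  # one unconditional sample (presumably 128x128)
image.save("sample.png")
```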
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-hkuoct-exu-128-200ep/tensorboard?#scalars)
|
kashing555/bert-base-multilingual-uncased-sentiment-finetuned-mnli
|
kashing555
| 2022-10-30T22:37:05Z | 103 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-28T08:04:50Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-uncased-sentiment-finetuned-mnli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-uncased-sentiment-finetuned-mnli
This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7086
- Accuracy: 0.7212
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6761 | 1.0 | 4500 | 0.6821 | 0.7091 |
| 0.531 | 2.0 | 9000 | 0.7086 | 0.7212 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
PraveenKishore/a2c-AntBulletEnv-v0
|
PraveenKishore
| 2022-10-30T21:44:28Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-30T21:43:20Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 940.00 +/- 81.09
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
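The block above is left as a TODO in the card; the sketch below shows one way it could be completed, assuming the policy is stored as a standard SB3 zip file (the filename is an assumption; check the repository's file list, and load any VecNormalize statistics if they are provided):

```python
import gym
import pybullet_envs  # registers AntBulletEnv-v0

from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is assumed; adjust to whatever the repository actually contains.
checkpoint = load_from_hub(
    repo_id="PraveenKishore/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",
)
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```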
|
Neverst/finetuning-sentiment-model-3000-samples
|
Neverst
| 2022-10-30T21:18:26Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-30T21:10:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
beautifulpichai/all-MiniLM-L12-v2-ledgar-full-contrastive
|
beautifulpichai
| 2022-10-30T20:54:29Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-30T20:51:07Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# beautifulpichai/all-MiniLM-L12-v2-ledgar-full-contrastive
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('beautifulpichai/all-MiniLM-L12-v2-ledgar-full-contrastive')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 15000 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 15000,
"warmup_steps": 1500,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
motmono/q-Taxi-v3
|
motmono
| 2022-10-30T19:40:19Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-30T19:40:11Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions defined in the
# Hugging Face Deep RL course notebook; they are not part of a pip package.
model = load_from_hub(repo_id="motmono/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])

evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
motmono/q-FrozenLake-v1-4x4-noSlippery
|
motmono
| 2022-10-30T19:24:32Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-30T19:24:24Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions defined in the
# Hugging Face Deep RL course notebook; they are not part of a pip package.
model = load_from_hub(repo_id="motmono/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])

evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
torayeff/distilbert-base-uncased-finetuned-imdb
|
torayeff
| 2022-10-30T18:51:28Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-10-30T18:34:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `TrainingArguments` mapping is sketched after the list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
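Illustrative only: a sketch of how these settings would map onto a 🤗 `Trainer` run for masked-language modelling on imdb. The base checkpoint and hyperparameters come from this card; the data preparation (plain truncation rather than chunking, and the exact subset of imdb used) is an assumption.

```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")

# Assumption: plain truncation; the original run may have chunked/grouped the texts.
dataset = load_dataset("imdb")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True),
    batched=True,
    remove_columns=["text", "label"],
)

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-imdb",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    fp16=True,  # "Native AMP"
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
# trainer.train()
```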
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4898 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
k4black/albert-offensive-lm-tapt
|
k4black
| 2022-10-30T18:48:35Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"albert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-10-30T18:28:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: albert-offensive-lm-tapt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-offensive-lm-tapt
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0001
- eval_runtime: 16.9087
- eval_samples_per_second: 59.141
- eval_steps_per_second: 1.893
- epoch: 0.39
- step: 600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.1
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1
|
victorbahlangene/deberta-v3-small-fine-Disaster-Tweets-Part2
|
victorbahlangene
| 2022-10-30T18:31:50Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-30T18:07:33Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: deberta-v3-small-fine-Disaster-Tweets-Part2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-small-fine-Disaster-Tweets-Part2
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4849
- Accuracy: 0.8275
- F1: 0.8278
## Model description
More information needed
## Intended uses & limitations
More information needed
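Pending a fuller description from the authors, here is a minimal usage sketch with the 🤗 Transformers text-classification pipeline (the model id is taken from this card; the label names depend on how the classification head was configured and may appear as generic LABEL_0/LABEL_1):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="victorbahlangene/deberta-v3-small-fine-Disaster-Tweets-Part2",
)
print(classifier("Forest fire near La Ronge Sask. Canada"))
```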
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 203 | 0.4670 | 0.8511 | 0.8503 |
| No log | 2.0 | 406 | 0.4381 | 0.8459 | 0.8455 |
| 0.4016 | 3.0 | 609 | 0.4096 | 0.8424 | 0.8413 |
| 0.4016 | 4.0 | 812 | 0.4849 | 0.8275 | 0.8278 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
BigSalmon/InformalToFormalLincoln87Paraphrase
|
BigSalmon
| 2022-10-30T18:28:04Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-30T03:53:34Z |
data: https://github.com/BigSalmon2/InformalToFormalDataset
Text Generation Informal Formal
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln87Paraphrase")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln87Paraphrase")
```
```
Demo:
https://huggingface.co/spaces/BigSalmon/FormalInformalConciseWordy
```
```
prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:"""
input_ids = tokenizer.encode(prompt, return_tensors='pt')
outputs = model.generate(input_ids=input_ids,
max_length=10 + len(prompt),
temperature=1.0,
top_k=50,
top_p=0.95,
do_sample=True,
num_return_sequences=5,
early_stopping=True)
for i in range(5):
print(tokenizer.decode(outputs[i]))
```
Most likely outputs (Disclaimer: I highly recommend using this over just generating):
```
prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:"""
text = tokenizer.encode(prompt)
myinput, past_key_values = torch.tensor([text]), None
myinput = myinput
myinput= myinput.to(device)
logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False)
logits = logits[0,-1]
probabilities = torch.nn.functional.softmax(logits)
best_logits, best_indices = logits.topk(250)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
text.append(best_indices[0].item())
best_probabilities = probabilities[best_indices].tolist()
words = []
print(best_words)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
original: microsoft word's [MASK] pricing invites competition.
Translated into the Style of Abraham Lincoln: microsoft word's unconscionable pricing invites competition.
***
original: the library’s quiet atmosphere encourages visitors to [blank] in their work.
Translated into the Style of Abraham Lincoln: the library’s quiet atmosphere encourages visitors to immerse themselves in their work.
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- nebraska
- unicamerical legislature
- different from federal house and senate
text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate.
***
- penny has practically no value
- should be taken out of circulation
- just as other coins have been in us history
- lost use
- value not enough
- to make environmental consequences worthy
text: all but valueless, the penny should be retired. as with other coins in american history, it has become defunct. too minute to warrant the environmental consequences of its production, it has outlived its usefulness.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence.
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```
```
original: had trouble deciding.
translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation.
***
original:
```
```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
```
```
first: ( was complicit in / was involved in ).
antonym: ( was blameless / was not an accomplice to / had no hand in / was uninvolved in ).
***
first: ( have no qualms about / see no issue with ).
antonym: ( are deeply troubled by / harbor grave reservations about / have a visceral aversion to / take ( umbrage at / exception to ) / are wary of ).
***
first: ( do not see eye to eye / disagree often ).
antonym: ( are in sync / are united / have excellent rapport / are like-minded / are in step / are of one mind / are in lockstep / operate in perfect harmony / march in lockstep ).
***
first:
```
```
stiff with competition, law school {A} is the launching pad for countless careers, {B} is a crowded field, {C} ranks among the most sought-after professional degrees, {D} is a professional proving ground.
***
languishing in viewership, saturday night live {A} is due for a creative renaissance, {B} is no longer a ratings juggernaut, {C} has been eclipsed by its imitators, {C} can still find its mojo.
***
dubbed the "manhattan of the south," atlanta {A} is a bustling metropolis, {B} is known for its vibrant downtown, {C} is a city of rich history, {D} is the pride of georgia.
***
embattled by scandal, harvard {A} is feeling the heat, {B} cannot escape the media glare, {C} is facing its most intense scrutiny yet, {D} is in the spotlight for all the wrong reasons.
```
Infill / Infilling / Masking / Phrase Masking (Works pretty decently actually, especially when you use logprobs code from above):
```
his contention [blank] by the evidence [sep] was refuted [answer]
***
few sights are as [blank] new york city as the colorful, flashing signage of its bodegas [sep] synonymous with [answer]
***
when rick won the lottery, all of his distant relatives [blank] his winnings [sep] clamored for [answer]
***
the library’s quiet atmosphere encourages visitors to [blank] in their work [sep] immerse themselves [answer]
***
the joy of sport is that no two games are alike. for every exhilarating experience, however, there is an interminable one. the national pastime, unfortunately, has a penchant for the latter. what begins as a summer evening at the ballpark can quickly devolve into a game of tedium. the primary culprit is the [blank] of play. from batters readjusting their gloves to fielders spitting on their mitts, the action is [blank] unnecessary interruptions. the sport's future is [blank] if these tendencies are not addressed [sep] plodding pace [answer] riddled with [answer] bleak [answer]
***
microsoft word's [blank] pricing [blank] competition [sep] unconscionable [answer] invites [answer]
***
```
```
original: microsoft word's [MASK] pricing invites competition.
Translated into the Style of Abraham Lincoln: microsoft word's unconscionable pricing invites competition.
***
original: the library’s quiet atmosphere encourages visitors to [blank] in their work.
Translated into the Style of Abraham Lincoln: the library’s quiet atmosphere encourages visitors to immerse themselves in their work.
```
Backwards
```
Essay Intro (National Parks):
text: tourists are at ease in the national parks, ( swept up in the beauty of their natural splendor ).
***
Essay Intro (D.C. Statehood):
washington, d.c. is a city of outsize significance, ( ground zero for the nation's political life / center stage for the nation's political machinations ).
```
```
topic: the Golden State Warriors.
characterization 1: the reigning kings of the NBA.
characterization 2: possessed of a remarkable cohesion.
characterization 3: helmed by superstar Stephen Curry.
characterization 4: perched atop the league’s hierarchy.
characterization 5: boasting a litany of hall-of-famers.
***
topic: emojis.
characterization 1: shorthand for a digital generation.
characterization 2: more versatile than words.
characterization 3: the latest frontier in language.
characterization 4: a form of self-expression.
characterization 5: quintessentially millennial.
characterization 6: reflective of a tech-centric world.
***
topic:
```
```
regular: illinois went against the census' population-loss prediction by getting more residents.
VBG: defying the census' prediction of population loss, illinois experienced growth.
***
regular: microsoft word’s high pricing increases the likelihood of competition.
VBG: extortionately priced, microsoft word is inviting competition.
***
regular:
```
```
source: badminton should be more popular in the US.
QUERY: Based on the given topic, can you develop a story outline?
target: (1) games played with racquets are popular, (2) just look at tennis and ping pong, (3) but badminton underappreciated, (4) fun, fast-paced, competitive, (5) needs to be marketed more
text: the sporting arena is dominated by games that are played with racquets. tennis and ping pong, in particular, are immensely popular. somewhat curiously, however, badminton is absent from this pantheon. exciting, fast-paced, and competitive, it is an underappreciated pastime. all that it lacks is more effective marketing.
***
source: movies in theaters should be free.
QUERY: Based on the given topic, can you develop a story outline?
target: (1) movies provide vital life lessons, (2) many venues charge admission, (3) those without much money
text: the lessons that movies impart are far from trivial. the vast catalogue of cinematic classics is replete with inspiring sagas of friendship, bravery, and tenacity. it is regrettable, then, that admission to theaters is not free. in their current form, the doors of this most vital of institutions are closed to those who lack the means to pay.
***
source:
```
```
in the private sector, { transparency } is vital to the business’s credibility. the { disclosure of information } can be the difference between success and failure.
***
the labor market is changing, with { remote work } now the norm. this { flexible employment } allows the individual to design their own schedule.
***
the { cubicle } is the locus of countless grievances. many complain that the { enclosed workspace } restricts their freedom of movement.
***
```
```
it would be natural to assume that americans, as a people whose ancestors { immigrated to this country }, would be sympathetic to those seeking to do likewise.
question: what does “do likewise” mean in the above context?
(a) make the same journey
(b) share in the promise of the american dream
(c) start anew in the land of opportunity
(d) make landfall on the united states
***
in the private sector, { transparency } is vital to the business’s credibility. this orientation can be the difference between success and failure.
question: what does “this orientation” mean in the above context?
(a) visible business practices
(b) candor with the public
(c) open, honest communication
(d) culture of accountability
```
```
example: suppose you are a teacher. further suppose you want to tell an accurate telling of history. then suppose a parent takes offense. they do so in the name of name of their kid. this happens a lot.
text: educators' responsibility to remain true to the historical record often clashes with the parent's desire to shelter their child from uncomfortable realities.
***
example: suppose you are a student at college. now suppose you have to buy textbooks. that is going to be worth hundreds of dollars. given how much you already spend on tuition, that is going to hard cost to bear.
text: the exorbitant cost of textbooks, which often reaches hundreds of dollars, imposes a sizable financial burden on the already-strapped college student.
```
```
<Prefix> the atlanta hawks may attribute <Prefix> <Suffix> trae young <Suffix> <Middle> their robust season to <Middle>
***
<Prefix> the nobel prize in literature <Prefix> <Suffix> honor <Suffix> <Middle> is a singularly prestigious <Middle>
```
```
accustomed to having its name uttered ______, harvard university is weathering a rare spell of reputational tumult
(a) in reverential tones
(b) with great affection
(c) in adulatory fashion
(d) in glowing terms
```
```
clarify: international ( {working together} / cooperation ) is called for when ( {issue go beyond lots of borders} / an issue transcends borders / a given matter has transnational implications ).
```
```
description: when someone thinks that their view is the only right one.
synonyms: intolerant, opinionated, narrow-minded, insular, self-righteous.
***
description: when you put something off.
synonyms: shelve, defer, table, postpone.
```
```
organic sentence: crowdfunding is about winner of best ideas and it can test an entrepreneur’s idea.
rewrite phrases: meritocratic, viability, vision
rewritten with phrases: the meritocratic nature of crowdfunding empowers entrepreneurs to test their vision's viability.
```
*Note* Of all the masking techniques, this one works the best.
```
<Prefix> the atlanta hawks may attribute <Prefix> <Suffix> trae young <Suffix> <Middle> their robust season to <Middle>
***
<Prefix> the nobel prize in literature <Prefix> <Suffix> honor <Suffix> <Middle> is a singularly prestigious <Middle>
```
```
essence: when someone's views are keeping within reasonable.
refine: the senator's voting record is ( moderate / centrist / pragmatic / balanced / fair-minded / even-handed ).
***
essence: when things are worked through in a petty way.
refine: the propensity of the u.s. congress to settle every dispute by way of ( mudslinging / bickering / demagoguery / name-calling / finger-pointing / vilification ) is appalling.
```
```
description: when someone thinks that their view is the only right one.
synonyms: intolerant, opinionated, narrow-minded, insular, self-righteous.
***
description: when you put something off.
synonyms: shelve, defer, table, postpone.
```
```
organic sentence: crowdfunding is about winner of best ideas and it can test an entrepreneur’s idea.
rewrite phrases: meritocratic, viability, vision
rewritten with phrases: the meritocratic nature of crowdfunding empowers entrepreneurs to test their vision's viability.
```
```
music before bedtime [makes for being able to relax] -> is a recipe for relaxation.
```
```
[people wanting entertainment love traveling new york city] -> travelers flock to new york city in droves, drawn to its iconic entertainment scene. [cannot blame them] -> one cannot fault them [broadway so fun] -> when it is home to such thrilling fare as Broadway.
```
```
in their ( ‖ when you are rushing because you want to get there on time ‖ / haste to arrive punctually / mad dash to be timely ), morning commuters are too rushed to whip up their own meal.
***
politicians prefer to author vague plans rather than ( ‖ when you can make a plan without many unknowns ‖ / actionable policies / concrete solutions ).
```
```
Q: What is whistleblower protection?
A: Whistleblower protection is a form of legal immunity granted to employees who expose the unethical practices of their employer.
Q: Why are whistleblower protections important?
A: Absent whistleblower protections, employees would be deterred from exposing their employer’s wrongdoing for fear of retribution.
Q: Why would an employer engage in retribution?
A: An employer who has acted unethically stands to suffer severe financial and reputational damage were their transgressions to become public. To safeguard themselves from these consequences, they might seek to dissuade employees from exposing their wrongdoing.
```
```
original: the meritocratic nature of crowdfunding [MASK] into their vision's viability.
infill: the meritocratic nature of crowdfunding [gives investors idea of how successful] -> ( offers entrepreneurs a window ) into their vision's viability.
```
|
api19750904/efeverde-5cat
|
api19750904
| 2022-10-30T18:05:34Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-30T18:05:22Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# api19750904/efeverde-5cat
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('api19750904/efeverde-5cat')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('api19750904/efeverde-5cat')
model = AutoModel.from_pretrained('api19750904/efeverde-5cat')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2500 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 2500,
"warmup_steps": 250,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
monteiro64/finetuning-sentiment-model-3000-samples
|
monteiro64
| 2022-10-30T17:50:30Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-29T17:39:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
- F1: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Tokenizers 0.13.1
|
debbiesoon/summarise_v8
|
debbiesoon
| 2022-10-30T16:29:08Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"led",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-18T01:40:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: summarise_v8
results: []
---

This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the SGH news articles and summaries dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8163
- Rouge2 Precision: 0.3628
- Rouge2 Recall: 0.3589
- Rouge2 Fmeasure: 0.3316
## Model description
This model was created to generate summaries of news articles.
## Intended uses & limitations
The model accepts articles of up to 768 tokens and generates summaries between 100 and 512 tokens long.
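A minimal usage sketch under those limits, assuming the checkpoint loads with the standard `summarization` pipeline (the length settings below simply mirror the limits stated above and are not baked into the checkpoint):
```python
from transformers import pipeline

# Load the fine-tuned LED summarizer from the Hub
summarizer = pipeline("summarization", model="debbiesoon/summarise_v8")

article = "..."  # replace with a news article (the card recommends at most 768 input tokens)

summary = summarizer(
    article,
    truncation=True,  # truncate over-long inputs
    max_length=512,   # maximum summary length stated above
    min_length=100,   # minimum summary length stated above
)
print(summary[0]["summary_text"])
```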
## Training and evaluation data
This model was trained on 100+ articles and summaries from SGH.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 1.5952 | 0.23 | 10 | 1.0414 | 0.2823 | 0.3908 | 0.3013 |
| 1.8116 | 0.47 | 20 | 0.9171 | 0.3728 | 0.273 | 0.3056 |
| 1.6289 | 0.7 | 30 | 0.8553 | 0.3284 | 0.2892 | 0.291 |
| 1.5074 | 0.93 | 40 | 0.8163 | 0.3628 | 0.3589 | 0.3316 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 1.2.1
- Tokenizers 0.12.1
|
debbiesoon/bart_large_summarise_v3
|
debbiesoon
| 2022-10-30T16:27:46Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:multi_news",
"arxiv:1906.01749",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-27T03:26:33Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- multi_news
metrics:
- rouge
model-index:
- name: bart_large_summarise_v3
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: multi_news
type: multi_news
config: default
split: train
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.3914
---

This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the multi_news dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1359
- Rouge1: 0.3914
- Rouge2: 0.1399
- Rougel: 0.2039
- Rougelsum: 0.3504
- Gen Len: 141.64
## Model description
This model was created to generate summaries of news articles.
## Intended uses & limitations
The model accepts articles of up to 1024 tokens and generates summaries of up to 512 tokens.
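A minimal usage sketch with explicit tokenization, assuming standard `AutoModelForSeq2SeqLM` loading (the beam size and length settings are illustrative, not taken from the original training setup):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "debbiesoon/bart_large_summarise_v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "..."  # replace with one or more concatenated news articles

# Truncate the input to the 1024-token limit mentioned above
inputs = tokenizer(article, max_length=1024, truncation=True, return_tensors="pt")

with torch.no_grad():
    summary_ids = model.generate(**inputs, max_length=512, num_beams=4)

print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```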
## Training and evaluation data
This model was trained on 1,000 articles and summaries from the [Multi-News dataset](https://arxiv.org/abs/1906.01749).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- label_smoothing_factor: 0.1
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
kehanlu/mandarin-wav2vec2-aishell1
|
kehanlu
| 2022-10-30T16:10:18Z | 53 | 2 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"speech",
"wav2vec2.0",
"audio",
"zh",
"dataset:AISHELL-1",
"arxiv:2210.06244",
"arxiv:1808.10583",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-10-30T11:34:24Z |
---
language:
- "zh"
thumbnail: "Mandarin-wav2vec2.0 fine-tuned on AISHELL-1 dataset"
tags:
- automatic-speech-recognition
- speech
- wav2vec2.0
- audio
datasets:
- AISHELL-1
metrics:
- cer
---
The Mandarin-wav2vec2.0 model is pre-trained on 1000 hours of the AISHELL-2 dataset. Pre-training details can be found at https://github.com/kehanlu/mandarin-wav2vec2. This model is fine-tuned on 178 hours of the AISHELL-1 dataset and is the baseline model in the paper "A context-aware knowledge transferring strategy for CTC-based ASR" ([preprint](https://arxiv.org/abs/2210.06244)).
## Results on AISHELL-1
|CER|dev|test|
| - | - | - |
|vanilla w2v2-CTC | 4.85 | 5.13|
## Usage
**Note:** the model was fine-tuned with the ESPnet toolkit and then converted to a Hugging Face model for easier use.
```python
import torch
import soundfile as sf
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
class ExtendedWav2Vec2ForCTC(Wav2Vec2ForCTC):
"""
In ESPNET there is a LayerNorm layer between encoder output and CTC classification head.
"""
def __init__(self, config):
super().__init__(config)
self.lm_head = torch.nn.Sequential(
torch.nn.LayerNorm(config.hidden_size),
self.lm_head
)
model = ExtendedWav2Vec2ForCTC.from_pretrained("kehanlu/mandarin-wav2vec2-aishell1")
processor = Wav2Vec2Processor.from_pretrained("kehanlu/mandarin-wav2vec2-aishell1")
audio_input, sample_rate = sf.read("/path/to/data_aishell/wav/dev/S0724/BAC009S0724W0121.wav")
inputs = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt")
with torch.no_grad():
model.eval()
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription[0])
# 广州市房地产中介协会分析
```
## Licence
The pre-training corpus, AISHELL-2, is supported by the AISHELL foundation. The resulting model also follows the AISHELL-2 licence: it is free to use for academic purposes and should not be used for any commercial purpose without permission from the AISHELL foundation (https://www.aishelltech.com/aishell_2).
```
@ARTICLE{aishell2,
author = {{Du}, J. and {Na}, X. and {Liu}, X. and {Bu}, H.},
title = "{AISHELL-2: Transforming Mandarin ASR Research Into Industrial Scale}",
journal = {ArXiv},
eprint = {1808.10583},
primaryClass = "cs.CL",
year = 2018,
month = Aug,
}
```
If you find this useful, please cite:
```
@article{lu2022context,
title={A context-aware knowledge transferring strategy for CTC-based ASR},
author={Lu, Ke-Han and Chen, Kuan-Yu},
journal={arXiv preprint arXiv:2210.06244},
year={2022}
}
```
|
kevinbror/xlmrobertaenepochz
|
kevinbror
| 2022-10-30T15:10:01Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"xlm-roberta",
"question-answering",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-10-30T15:09:31Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: xlmrobertaenepochz
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# xlmrobertaenepochz
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1485
- Train End Logits Accuracy: 0.6933
- Train Start Logits Accuracy: 0.6537
- Validation Loss: 0.9772
- Validation End Logits Accuracy: 0.7275
- Validation Start Logits Accuracy: 0.6976
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5599, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.1485 | 0.6933 | 0.6537 | 0.9772 | 0.7275 | 0.6976 | 0 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Saripudin/twitter-sentiment-analysis
|
Saripudin
| 2022-10-30T15:00:26Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-30T12:58:02Z |
This model performs Twitter sentiment analysis and is based on tweets from Pilkada DKI 2017 (the 2017 Jakarta gubernatorial election).
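A minimal usage sketch, assuming the checkpoint loads with the standard `text-classification` pipeline (the example tweet is made up, and the label names depend on how the classification head was configured):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Saripudin/twitter-sentiment-analysis")

# Example tweet in Indonesian (illustrative only)
print(classifier("Debat kandidat tadi malam sangat menarik"))
```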
|
huggingtweets/devxoid
|
huggingtweets
| 2022-10-30T13:38:47Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-30T13:30:16Z |
---
language: en
thumbnail: http://www.huggingtweets.com/devxoid/1667137099058/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1540116269164138496/34gb4zx1_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">dev ⛓️ HVW DONE!!</div>
<div style="text-align: center; font-size: 14px;">@devxoid</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from dev ⛓️ HVW DONE!!.
| Data | dev ⛓️ HVW DONE!! |
| --- | --- |
| Tweets downloaded | 3140 |
| Retweets | 1438 |
| Short tweets | 116 |
| Tweets kept | 1586 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/21ppn2hg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @devxoid's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/33scwhx4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/33scwhx4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/devxoid')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
PoptropicaSahil/indic-bert-finetuned-legal_try_with_muril_more_ft
|
PoptropicaSahil
| 2022-10-30T12:20:06Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-30T11:10:03Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: indic-bert-finetuned-legal_try_with_muril_more_ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indic-bert-finetuned-legal_try_with_muril_more_ft
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4640
- Accuracy: 0.7865
- F1: 0.7846
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4266 | 1.0 | 625 | 0.4652 | 0.7925 | 0.7892 |
| 0.4163 | 2.0 | 1250 | 0.4640 | 0.7905 | 0.7868 |
| 0.4085 | 3.0 | 1875 | 0.4640 | 0.788 | 0.7867 |
| 0.4043 | 4.0 | 2500 | 0.4640 | 0.7865 | 0.7846 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
wskhanh/Roberta-wwm-ext-large-qa
|
wskhanh
| 2022-10-30T11:54:45Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:cmrc2018",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-10-30T11:23:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cmrc2018
model-index:
- name: Roberta-wwm-ext-large-qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Roberta-wwm-ext-large-qa
This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext-large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large) on the cmrc2018 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1028
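A minimal usage sketch with the standard `question-answering` pipeline (the question and context below are illustrative, not taken from cmrc2018):
```python
from transformers import pipeline

# Extractive QA with the fine-tuned checkpoint
qa = pipeline("question-answering", model="wskhanh/Roberta-wwm-ext-large-qa")

result = qa(
    question="北京是哪个国家的首都?",
    context="北京是中华人民共和国的首都,也是全国的政治和文化中心。",
)
print(result["answer"], result["score"])
```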
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5755 | 1.0 | 600 | 1.1028 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
tau/bart-base-sled-summscreenfd
|
tau
| 2022-10-30T11:39:38Z | 6 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tau/sled",
"en",
"arxiv:2104.07091",
"arxiv:2208.00748",
"arxiv:1910.13461",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-10-30T11:20:50Z |
---
license: mit
language: en
---
# BART-SLED (SLiding-Encoder and Decoder, base-sized model)
SLED models use pretrained, short-range encoder-decoder models and apply them to long-text inputs by splitting the input into multiple overlapping chunks, encoding each chunk independently, and performing fusion-in-decoder.
## Model description
This SLED model is based on the BART model, which is described in its [model card](https://huggingface.co/facebook/bart-base).
BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works
well for comprehension tasks (e.g. text classification, question answering). When used as a BART-SLED model, it can be applied on long text tasks.
This model was fine-tuned on the [SummScreenFD](https://arxiv.org/abs/2104.07091) dataset.
## Intended uses & limitations
You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset.
### How to use
To use the model, you first need to install `py-sled` in your environment (or clone the code from the [official repository](https://github.com/Mivg/SLED/blob/main/README.md))
```
pip install py-sled
```
For more installation instructions, see [here](https://github.com/Mivg/SLED#Installation).
Once installed, SLED is fully compatible with HuggingFace's AutoClasses (AutoTokenizer, AutoConfig, AutoModel
and AutoModelForCausalLM) and can be loaded using the from_pretrained methods
```python
import sled  # *** required so that SledModels will be registered for the AutoClasses ***
from transformers import AutoModel

model = AutoModel.from_pretrained('tau/bart-base-sled')
```
Here is how to use this model in PyTorch:
```python
from sled import SledTokenizer, SledModel
tokenizer = SledTokenizer.from_pretrained('tau/bart-base-sled')
model = SledModel.from_pretrained('tau/bart-base-sled')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
You can also replace `SledModel` with `SledModelForConditionalGeneration` for seq2seq generation:
```python
model = SledModelForConditionalGeneration.from_pretrained('tau/bart-base-sled')
```
In case you wish to apply SLED to a task containing a prefix (e.g. a question) which should be given as context to every chunk, you can pass the `prefix_length` tensor input as well (a `LongTensor` of length equal to the batch size).
```python
import torch
import sled  # *** required so that SledModels will be registered for the AutoClasses ***
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('tau/bart-base-sled')
model = AutoModel.from_pretrained('tau/bart-base-sled')
document_input_ids = tokenizer("Dogs are great for you.", return_tensors="pt").input_ids
prefix_input_ids = tokenizer("Are dogs good for you?", return_tensors="pt").input_ids
input_ids = torch.cat((prefix_input_ids, document_input_ids), dim=-1)
attention_mask = torch.ones_like(input_ids)
prefix_length = torch.LongTensor([[prefix_input_ids.size(1)]])
outputs = model(input_ids=input_ids, attention_mask=attention_mask, prefix_length=prefix_length)
last_hidden_states = outputs.last_hidden_state
```
### BibTeX entry and citation info
Please cite both the SLED [paper](https://arxiv.org/abs/2208.00748.pdf) and the BART [paper](https://arxiv.org/abs/1910.13461) by Lewis et al., as well as SummScreenFD by Chen et al.
```bibtex
@inproceedings{Ivgi2022EfficientLU,
title={Efficient Long-Text Understanding with Short-Text Models},
author={Maor Ivgi and Uri Shaham and Jonathan Berant},
year={2022}
}
```
```bibtex
@article{DBLP:journals/corr/abs-1910-13461,
author = {Mike Lewis and
Yinhan Liu and
Naman Goyal and
Marjan Ghazvininejad and
Abdelrahman Mohamed and
Omer Levy and
Veselin Stoyanov and
Luke Zettlemoyer},
title = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language
Generation, Translation, and Comprehension},
journal = {CoRR},
volume = {abs/1910.13461},
year = {2019},
url = {http://arxiv.org/abs/1910.13461},
eprinttype = {arXiv},
eprint = {1910.13461},
timestamp = {Thu, 31 Oct 2019 14:02:26 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
```bibtex
@inproceedings{Chen2022SummScreenAD,
title={SummScreen: A Dataset for Abstractive Screenplay Summarization},
author={Mingda Chen and Zewei Chu and Sam Wiseman and Kevin Gimpel},
booktitle={ACL},
year={2022}
}
```
|
tlttl/tluo_xml_roberta_base_amazon_review_sentiment_v3
|
tlttl
| 2022-10-30T11:23:42Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-30T07:54:25Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: tluo_xml_roberta_base_amazon_review_sentiment_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tluo_xml_roberta_base_amazon_review_sentiment_v3
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9456
- Accuracy: 0.6023
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 123
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.056 | 0.33 | 5000 | 0.9885 | 0.5642 |
| 0.944 | 0.67 | 10000 | 0.9574 | 0.5913 |
| 0.9505 | 1.0 | 15000 | 0.9674 | 0.579 |
| 0.8902 | 1.33 | 20000 | 0.9660 | 0.5945 |
| 0.8851 | 1.67 | 25000 | 0.9470 | 0.5888 |
| 0.8714 | 2.0 | 30000 | 0.9456 | 0.6023 |
| 0.7967 | 2.33 | 35000 | 0.9662 | 0.5978 |
| 0.767 | 2.67 | 40000 | 0.9738 | 0.5987 |
| 0.7595 | 3.0 | 45000 | 0.9740 | 0.5988 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Tokenizers 0.13.1
|
Tolfx/neatmike
|
Tolfx
| 2022-10-30T10:47:39Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-10-30T10:29:44Z |
---
license: creativeml-openrail-m
---
## Neatmike Model
A model based on images of neatmike (messymike)
## Usage
```
neatmike person
```
|
api19750904/efeverde-cat
|
api19750904
| 2022-10-30T09:57:40Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-30T09:57:28Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1750 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 1750,
"warmup_steps": 175,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
NlpHUST/vi-word-segmentation
|
NlpHUST
| 2022-10-30T09:45:24Z | 140 | 4 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"token-classification",
"word segmentation",
"vi",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-30T04:48:30Z |
---
widget:
- text: "Phát biểu tại phiên thảo luận về tình hình kinh tế xã hội của Quốc hội sáng 28/10 , Bộ trưởng Bộ LĐ-TB&XH Đào Ngọc Dung khái quát , tại phiên khai mạc kỳ họp , lãnh đạo chính phủ đã báo cáo , đề cập tương đối rõ ràng về việc thực hiện các chính sách an sinh xã hội"
tags:
- word segmentation
language:
- vi
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: vi-word-segmentation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vi-word-segmentation
This model is a fine-tuned version of [NlpHUST/electra-base-vn](https://huggingface.co/NlpHUST/electra-base-vn) on the VLSP 2013 Vietnamese word segmentation dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0501
- Precision: 0.9833
- Recall: 0.9838
- F1: 0.9835
- Accuracy: 0.9911
## Model description
More information needed
## Intended uses & limitations
You can use this model with the Transformers *pipeline* for token classification, as shown below.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("NlpHUST/vi-word-segmentation")
model = AutoModelForTokenClassification.from_pretrained("NlpHUST/vi-word-segmentation")
nlp = pipeline("token-classification", model=model, tokenizer=tokenizer)
example = "Phát biểu tại phiên thảo luận về tình hình kinh tế xã hội của Quốc hội sáng 28/10 , Bộ trưởng Bộ LĐ-TB&XH Đào Ngọc Dung khái quát , tại phiên khai mạc kỳ họp , lãnh đạo chính phủ đã báo cáo , đề cập tương đối rõ ràng về việc thực hiện các chính sách an sinh xã hội"
ner_results = nlp(example)
example_tok = ""
for e in ner_results:
if "##" in e["word"]:
example_tok = example_tok + e["word"].replace("##","")
elif e["entity"] =="I":
example_tok = example_tok + "_" + e["word"]
else:
example_tok = example_tok + " " + e["word"]
print(example_tok)
# Output: Phát_biểu tại phiên thảo_luận về tình_hình kinh_tế xã_hội của Quốc_hội sáng 28 / 10 , Bộ_trưởng Bộ LĐ - TB [UNK] XH Đào_Ngọc_Dung khái_quát , tại phiên khai_mạc kỳ họp , lãnh_đạo chính_phủ đã báo_cáo , đề_cập tương_đối rõ_ràng về việc thực_hiện các chính_sách an_sinh xã_hội
```
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0168 | 1.0 | 4712 | 0.0284 | 0.9813 | 0.9825 | 0.9819 | 0.9904 |
| 0.0107 | 2.0 | 9424 | 0.0350 | 0.9789 | 0.9814 | 0.9802 | 0.9895 |
| 0.005 | 3.0 | 14136 | 0.0364 | 0.9826 | 0.9843 | 0.9835 | 0.9909 |
| 0.0033 | 4.0 | 18848 | 0.0434 | 0.9830 | 0.9831 | 0.9830 | 0.9908 |
| 0.0017 | 5.0 | 23560 | 0.0501 | 0.9833 | 0.9838 | 0.9835 | 0.9911 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
fumi13/q-Taxi-v3
|
fumi13
| 2022-10-30T09:40:15Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-30T09:40:07Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="fumi13/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
fumi13/q-FrozenLake-v1-4x4-noSlippery
|
fumi13
| 2022-10-30T09:27:39Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-30T09:27:30Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="fumi13/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
KaekingPhD/kaeking
|
KaekingPhD
| 2022-10-30T08:46:13Z | 0 | 0 | null |
[
"cecilio",
"angulo",
"esp",
"license:apache-2.0",
"region:us"
] | null | 2022-10-30T08:30:49Z |
---
language:
- esp
tags:
- cecilio
- angulo
license: apache-2.0
---
|
chanet/distilbert-base-uncased-finetuned-emotion
|
chanet
| 2022-10-30T08:40:42Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-30T08:18:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.923077442442047
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2133
- Accuracy: 0.923
- F1: 0.9231
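A minimal usage sketch with the standard `text-classification` pipeline (the input sentence is illustrative; the returned label names depend on the model config, with the underlying classes being the six emotions of the `emotion` dataset):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="chanet/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I can't wait to see you this weekend!"))
```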
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8302 | 1.0 | 250 | 0.3143 | 0.908 | 0.9059 |
| 0.2448 | 2.0 | 500 | 0.2133 | 0.923 | 0.9231 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Tritkoman/English2Sardinian
|
Tritkoman
| 2022-10-30T07:41:31Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"translation",
"en",
"it",
"dataset:Tritkoman/autotrain-data-gatvotva",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-10-30T07:31:37Z |
---
tags:
- autotrain
- translation
language:
- en
- it
datasets:
- Tritkoman/autotrain-data-gatvotva
co2_eq_emissions:
emissions: 14.908336657166226
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 1931765297
- CO2 Emissions (in grams): 14.9083
## Validation Metrics
- Loss: 2.666
- SacreBLEU: 17.990
- Gen len: 64.922
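A minimal usage sketch, assuming the AutoTrain checkpoint loads as a standard seq2seq translation model (the input sentence and generation length are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Tritkoman/English2Sardinian"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Good morning, how are you today?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```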
|
bharadwajkg/sample-beauty-cardiffnlp-twitter-roberta-base-sentiment
|
bharadwajkg
| 2022-10-30T05:01:44Z | 103 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-29T07:45:57Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
model-index:
- name: sample-beauty-cardiffnlp-twitter-roberta-base-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sample-beauty-cardiffnlp-twitter-roberta-base-sentiment
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3954
- Accuracy: 0.9
- F1: 0.6805
- Recall: 0.6647
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
vikas-movva/mnist
|
vikas-movva
| 2022-10-30T04:56:25Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-10-29T06:56:39Z |
The CSV files can be downloaded [here](https://www.kaggle.com/datasets/oddrationale/mnist-in-csv/download?datasetVersionNumber=2).
|
bguan/q-FrozenLake-v1-4x4-noSlippery
|
bguan
| 2022-10-30T04:47:47Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-25T03:44:32Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="bguan/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
huggingtweets/frankdegods
|
huggingtweets
| 2022-10-30T04:25:10Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-09-09T23:59:52Z |
---
language: en
thumbnail: http://www.huggingtweets.com/frankdegods/1667103905913/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1579021841775009792/hUaDp-eu_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Frank</div>
<div style="text-align: center; font-size: 14px;">@frankdegods</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Frank.
| Data | Frank |
| --- | --- |
| Tweets downloaded | 3234 |
| Retweets | 992 |
| Short tweets | 539 |
| Tweets kept | 1703 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3nov3knn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @frankdegods's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ld1rac8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ld1rac8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/frankdegods')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/615_btc
|
huggingtweets
| 2022-10-30T03:26:58Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-30T03:25:10Z |
---
language: en
thumbnail: http://www.huggingtweets.com/615_btc/1667100414400/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1351208276046848008/_AzTI7kK_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">6.15 BTC</div>
<div style="text-align: center; font-size: 14px;">@615_btc</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 6.15 BTC.
| Data | 6.15 BTC |
| --- | --- |
| Tweets downloaded | 716 |
| Retweets | 4 |
| Short tweets | 20 |
| Tweets kept | 692 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1pphmhv5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @615_btc's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3bku8eg4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3bku8eg4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/615_btc')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Ankit15nov/xlm-roberta-base-finetuned-panx-en
|
Ankit15nov
| 2022-10-30T03:26:30Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-30T03:24:50Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6984839977540707
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4085
- F1: 0.6985
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1067 | 1.0 | 50 | 0.6303 | 0.4922 |
| 0.5183 | 2.0 | 100 | 0.4321 | 0.6524 |
| 0.3688 | 3.0 | 150 | 0.4085 | 0.6985 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.5.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Ankit15nov/xlm-roberta-base-finetuned-panx-it
|
Ankit15nov
| 2022-10-30T03:24:38Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-30T03:22:50Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8199834847233691
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2484
- F1: 0.8200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7739 | 1.0 | 70 | 0.3264 | 0.7482 |
| 0.3054 | 2.0 | 140 | 0.2655 | 0.7881 |
| 0.1919 | 3.0 | 210 | 0.2484 | 0.8200 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.5.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
tlttl/tluo_xml_roberta_base_amazon_review_sentiment_v2
|
tlttl
| 2022-10-30T00:51:07Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-29T15:21:12Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: tluo_xml_roberta_base_amazon_review_sentiment_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tluo_xml_roberta_base_amazon_review_sentiment_v2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9630
- Accuracy: 0.6057
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 123
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0561 | 0.33 | 5000 | 0.9954 | 0.567 |
| 0.948 | 0.67 | 10000 | 0.9641 | 0.5862 |
| 0.9557 | 1.0 | 15000 | 0.9605 | 0.589 |
| 0.8891 | 1.33 | 20000 | 0.9420 | 0.5875 |
| 0.8889 | 1.67 | 25000 | 0.9397 | 0.592 |
| 0.8777 | 2.0 | 30000 | 0.9236 | 0.6042 |
| 0.778 | 2.33 | 35000 | 0.9612 | 0.5972 |
| 0.7589 | 2.67 | 40000 | 0.9728 | 0.5995 |
| 0.7593 | 3.0 | 45000 | 0.9630 | 0.6057 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Gorilla115/t5-shakespearify-lite
|
Gorilla115
| 2022-10-30T00:19:11Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-09T19:33:07Z |
---
tags:
- generated_from_trainer
model-index:
- name: t5-shakespearify-lite
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-shakespearify-lite
This model was trained from a T5 checkpoint on a custom dataset from [Shakescleare](https://www.litcharts.com/shakescleare/shakespeare-translations), a website where Shakespeare's works have been translated into modern English. The model treats style transfer as a translation task, using the original Shakespearean English as the target "translation". The dataset is available on [Kaggle](https://www.kaggle.com/datasets/garnavaurha/shakespearify).
## Model description
The model was trained for 5 epochs with a subset of the dataset. The subset was only about 10k examples long out of the over 50k examples in the raw dataset.
## Intended uses & limitations
This is a novelty project intended for playing around with. It has its limitations, since it is essentially translating English to English with some minor tweaks, such as changes in sentence structure or minor word substitutions. Unsurprisingly, it works best on story-based excerpts like the one below.
```
translate: Why have you come to Mr. Smith with this crap?
```
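A minimal sketch of running that prompt through the checkpoint (the pipeline task and generation length are assumptions for illustration):
```python
from transformers import pipeline

shakespearify = pipeline("text2text-generation", model="Gorilla115/t5-shakespearify-lite")

prompt = "translate: Why have you come to Mr. Smith with this crap?"
print(shakespearify(prompt, max_length=64)[0]["generated_text"])
```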
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Tokenizers 0.12.1
|
prakharz/DIAL-T0
|
prakharz
| 2022-10-29T23:39:24Z | 4 | 3 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"arxiv:2205.12673",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-29T23:35:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: DIAL_T0
results: []
widget:
- text: "Instruction: Edit the provided response into a response that is fluent and coherent to the dialogue context. \n\nInput: [CONTEXT] How may I help you? [ENDOFTURN] I left a suitcase on the train to London the other day. [RESPONSE] Can describe itit , sir ? It will help us find [ENDOFDIALOGUE] [QUESTION] Given this context and response provided, the edited response is"
- text: "Instruction: Generate a response that starts with the provided initial phrase. \n\nInput: [INITIAL_PHRASE] Please describe [CONTEXT] How may I help you? [ENDOFTURN] I left a suitcase on the train to London the other day. [ENDOFDIALOGUE] [QUESTION] A response with the provided initial phrase is"
- text: "Instruction: Generate a response that starts with the provided initial phrase and contains the provided keywords. \n\nInput: [INITIAL PHRASE] Please describe [KEYWORDS] color, any documents [CONTEXT] How may I help you? [ENDOFTURN] I left a suitcase on the train to London the other day. [ENDOFDIALOGUE] [QUESTION] A response with the provided initial phrase and keywords is"
- text: "Instruction: What is the intent of the response \n\nInput: [CONTEXT] How may I help you? [RESPONSE] I left a suitcase on the train to London the other day. [ENDOFDIALOGUE] [OPTIONS] booking, reservation change, checkout, lost&found, time information, security, schedules [QUESTION] The intent of the response is"
- text: "Instruction: Generate a summary for the following dialog context. \n\nInput: [CONTEXT] Ann: Wanna go out? [ENDOFTURN] Kate: Not really, I feel sick. [ENDOFTURN] Ann: Drink mint tea, they say it helps. Ok, so we'll meet up another time. Take care! [ENDOFTURN] Kate: Thanks! [ENDOFDIALOGUE] [QUESTION] For this dialogue, the summary is: "
- text: "Instruction: Consider the context of the conversation and a document and generate an answer accordingly \n\nInput: [CONTEXT] How may I help you? [ENDOFTURN] I left a suitcase on the train to London the other day. [ENDOFDIALOGUE] [QUESTION] What is the response of the following question: Where was the person going to?"
- text: "Instruction: Generate a response using the provided background knowledge. \n\nInput: [KNOWLEDGE] Emailid for cases related to lost and found is x@gmail.com [CONTEXT] How may I help you? [ENDOFTURN] I left a suitcase on the train to London the other day. [ENDOFDIALOGUE] [QUESTION] Generate a response using the information from the background knowledge."
---
# InstructDial
Instruction tuning is an emergent paradigm in NLP wherein natural language instructions are leveraged with language models to induce zero-shot performance on unseen tasks. Instructions have been shown to enable good performance on unseen tasks and datasets in both large and small language models. Dialogue is an especially interesting area to explore instruction tuning because dialogue systems perform multiple kinds of tasks related to language (e.g., natural language understanding and generation, domain-specific interaction), yet instruction tuning has not been systematically explored for dialogue-related tasks. We introduce InstructDial, an instruction tuning framework for dialogue, which consists of a repository of 48 diverse dialogue tasks in a unified text-to-text format created from 59 openly available dialogue datasets. Next, we explore cross-task generalization ability on models tuned on InstructDial across diverse dialogue tasks. Our analysis reveals that InstructDial enables good zero-shot performance on unseen datasets and tasks such as dialogue evaluation and intent detection, and even better performance in a few-shot setting. To ensure that models adhere to instructions, we introduce novel meta-tasks. We establish benchmark zero-shot and few-shot performance of models trained using the proposed framework on multiple dialogue tasks.
[Paper](https://arxiv.org/abs/2205.12673)
# Dial_T0
A T5-XL (3B-parameter) model trained on InstructDial tasks. This model is a fine-tuned version of the bigscience/T0_3B model.
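The sketch below shows one way to query the model using the instruction format from the widget examples above; the specific prompt is illustrative.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("prakharz/DIAL-T0")
model = AutoModelForSeq2SeqLM.from_pretrained("prakharz/DIAL-T0")

# Prompt follows the InstructDial format: Instruction, Input with special tokens, Question.
prompt = (
    "Instruction: Generate a summary for the following dialog context. \n\n"
    "Input: [CONTEXT] Ann: Wanna go out? [ENDOFTURN] "
    "Kate: Not really, I feel sick. [ENDOFDIALOGUE] "
    "[QUESTION] For this dialogue, the summary is: "
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```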
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
All tasks in InstructDial framework (including all dialogue eval tasks)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 9
- eval_batch_size: 9
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 72
- total_eval_batch_size: 72
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
prakharz/DIAL-BART0
|
prakharz
| 2022-10-29T23:29:10Z | 6 | 6 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"arxiv:2205.12673",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-29T22:39:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: DIAL_BART0
results: []
widget:
- text: "Instruction: Edit the provided response into a response that is fluent and coherent to the dialogue context. \n\nInput: [CONTEXT] How may I help you? [ENDOFTURN] I left a suitcase on the train to London the other day. [RESPONSE] Can describe itit , sir ? It will help us find [ENDOFDIALOGUE] [QUESTION] Given this context and response provided, the edited response is"
- text: "Instruction: Generate a response that starts with the provided initial phrase. \n\nInput: [INITIAL_PHRASE] Please describe [CONTEXT] How may I help you? [ENDOFTURN] I left a suitcase on the train to London the other day. [ENDOFDIALOGUE] [QUESTION] A response with the provided initial phrase is"
- text: "Instruction: Generate a response that starts with the provided initial phrase and contains the provided keywords. \n\nInput: [INITIAL PHRASE] Please describe [KEYWORDS] color, any documents [CONTEXT] How may I help you? [ENDOFTURN] I left a suitcase on the train to London the other day. [ENDOFDIALOGUE] [QUESTION] A response with the provided initial phrase and keywords is"
- text: "Instruction: What is the intent of the response \n\nInput: [CONTEXT] How may I help you? [RESPONSE] I left a suitcase on the train to London the other day. [ENDOFDIALOGUE] [OPTIONS] booking, reservation change, checkout, lost&found, time information, security, schedules [QUESTION] The intent of the response is"
- text: "Instruction: Generate a summary for the following dialog context. \n\nInput: [CONTEXT] Ann: Wanna go out? [ENDOFTURN] Kate: Not really, I feel sick. [ENDOFTURN] Ann: Drink mint tea, they say it helps. Ok, so we'll meet up another time. Take care! [ENDOFTURN] Kate: Thanks! [ENDOFDIALOGUE] [QUESTION] For this dialogue, the summary is: "
- text: "Instruction: Consider the context of the conversation and a document and generate an answer accordingly \n\nInput: [CONTEXT] How may I help you? [ENDOFTURN] I left a suitcase on the train to London the other day. [ENDOFDIALOGUE] [QUESTION] What is the response of the following question: Where was the person going to?"
- text: "Instruction: Generate a response using the provided background knowledge. \n\nInput: [KNOWLEDGE] Emailid for cases related to lost and found is x@gmail.com [CONTEXT] How may I help you? [ENDOFTURN] I left a suitcase on the train to London the other day. [ENDOFDIALOGUE] [QUESTION] Generate a response using the information from the background knowledge."
---
# InstructDial
Instruction tuning is an emergent paradigm in NLP wherein natural language instructions are leveraged with language models to induce zero-shot performance on unseen tasks. Instructions have been shown to enable good performance on unseen tasks and datasets in both large and small language models. Dialogue is an especially interesting area to explore instruction tuning because dialogue systems perform multiple kinds of tasks related to language (e.g., natural language understanding and generation, domain-specific interaction), yet instruction tuning has not been systematically explored for dialogue-related tasks. We introduce InstructDial, an instruction tuning framework for dialogue, which consists of a repository of 48 diverse dialogue tasks in a unified text-to-text format created from 59 openly available dialogue datasets. Next, we explore cross-task generalization ability on models tuned on InstructDial across diverse dialogue tasks. Our analysis reveals that InstructDial enables good zero-shot performance on unseen datasets and tasks such as dialogue evaluation and intent detection, and even better performance in a few-shot setting. To ensure that models adhere to instructions, we introduce novel meta-tasks. We establish benchmark zero-shot and few-shot performance of models trained using the proposed framework on multiple dialogue tasks.
[Paper](https://arxiv.org/abs/2205.12673)
# Dial_BART0
A BART-large model trained on InstructDial tasks. This model is a fine-tuned version of [yuchenlin/BART0pp](https://huggingface.co/yuchenlin/BART0pp) on the InstructDial datasets.
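As a rough sketch, the model can be queried through the text2text-generation pipeline with the same instruction format as the widget examples; the prompt below is illustrative.
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="prakharz/DIAL-BART0")

# Intent-detection prompt in the InstructDial format (illustrative).
prompt = (
    "Instruction: What is the intent of the response \n\n"
    "Input: [CONTEXT] How may I help you? "
    "[RESPONSE] I left a suitcase on the train to London the other day. [ENDOFDIALOGUE] "
    "[OPTIONS] booking, reservation change, checkout, lost&found, time information, security, schedules "
    "[QUESTION] The intent of the response is"
)
print(generator(prompt, max_new_tokens=16)[0]["generated_text"])
```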
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
All tasks in InstructDial framework (including all dialogue eval tasks)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 9
- eval_batch_size: 9
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 72
- total_eval_batch_size: 72
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
sd-concepts-library/edgerunners-style-v2
|
sd-concepts-library
| 2022-10-29T23:01:46Z | 0 | 6 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-29T23:01:35Z |
---
license: mit
---
### Edgerunners Style v2 on Stable Diffusion
This is the `<edgerunners-style-av-v2>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
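Alternatively, a minimal `diffusers` sketch is shown below; the base checkpoint is an assumption, and the concept embedding is loaded via `load_textual_inversion`.
```python
import torch
from diffusers import StableDiffusionPipeline

# Base checkpoint is an assumption; any Stable Diffusion v1.x pipeline should work.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/edgerunners-style-v2")

image = pipe("a neon city street in the style of <edgerunners-style-av-v2>").images[0]
image.save("edgerunners_style.png")
```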
Here is the new concept you will be able to use as a `style`:









|
beautifulpichai/all-MiniLM-L12-v2-ledgar-contrastive
|
beautifulpichai
| 2022-10-29T22:45:34Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-29T22:45:25Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('beautifulpichai/all-MiniLM-L12-v2-ledgar-contrastive')
embeddings = model.encode(sentences)
print(embeddings)
```
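The embeddings can then be compared for clause-level semantic search; the model name suggests it was tuned on LEDGAR legal clauses, and the example texts below are purely illustrative.
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("beautifulpichai/all-MiniLM-L12-v2-ledgar-contrastive")

# Illustrative legal-style clauses; cosine similarity ranks them against a query.
query = model.encode("Termination of the agreement", convert_to_tensor=True)
clauses = model.encode(
    [
        "Either party may terminate this Agreement upon thirty days written notice.",
        "The Borrower shall maintain insurance on all material assets.",
    ],
    convert_to_tensor=True,
)
print(util.cos_sim(query, clauses))
```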
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2451 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 2451,
"warmup_steps": 246,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
psdwizzard/Boredape
|
psdwizzard
| 2022-10-29T22:41:00Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-10-29T22:41:00Z |
---
license: creativeml-openrail-m
---
|
RafboOrg/ppo-LunarLander-v2
|
RafboOrg
| 2022-10-29T22:04:13Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-29T21:32:52Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 216.33 +/- 18.78
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch is shown below; the checkpoint filename inside the repository is an assumption.
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub (the .zip filename is assumed).
checkpoint = load_from_hub("RafboOrg/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
beautifulpichai/all-MiniLM-L6-v2-ledgar-contrastive
|
beautifulpichai
| 2022-10-29T21:15:08Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-29T21:14:59Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('beautifulpichai/all-MiniLM-L6-v2-ledgar-contrastive')
embeddings = model.encode(sentences)
print(embeddings)
```
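For semantic search over a small corpus, a sketch along these lines should work; the clause texts below are illustrative.
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("beautifulpichai/all-MiniLM-L6-v2-ledgar-contrastive")

# Illustrative corpus of clauses; semantic_search returns the closest match to the query.
corpus = [
    "This Agreement shall be governed by the laws of the State of New York.",
    "The Company shall indemnify the Purchaser against all losses.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode("Which law governs this contract?", convert_to_tensor=True)

hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=1)
print(corpus[hits[0][0]["corpus_id"]])
```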
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2451 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 2451,
"warmup_steps": 246,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
NikitaBaramiia/dqn-SpaceInvadersNoFrameskip-v4
|
NikitaBaramiia
| 2022-10-29T21:11:12Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-29T21:10:39Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 451.00 +/- 99.62
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga NikitaBaramiia -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga NikitaBaramiia -f logs/
rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga NikitaBaramiia
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 500000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Athithya/finetuning-sentiment-model-3000-samples
|
Athithya
| 2022-10-29T19:52:08Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-29T19:31:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
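A quick inference sketch is shown below; since the card does not document a label mapping, outputs may surface as generic `LABEL_0`/`LABEL_1` identifiers.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Athithya/finetuning-sentiment-model-3000-samples")

# Labels may appear as LABEL_0 / LABEL_1; the mapping to negative/positive is not documented here.
print(classifier("This movie was surprisingly good!"))
```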
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
ankur-gupta/dummy
|
ankur-gupta
| 2022-10-29T18:36:52Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"t5",
"feature-extraction",
"generated_from_keras_callback",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-10-27T21:35:24Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: dummy
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dummy
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.10.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Stancld/long-t5-tglobal-base
|
Stancld
| 2022-10-29T18:00:11Z | 13 | 0 |
transformers
|
[
"transformers",
"tf",
"longt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-29T17:58:22Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: long-t5-tglobal-base
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# long-t5-tglobal-base
This model is a fine-tuned version of [google/long-t5-tglobal-base](https://huggingface.co/google/long-t5-tglobal-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.24.0.dev0
- TensorFlow 2.9.0
- Datasets 2.2.2
- Tokenizers 0.11.6
|
ViktorDo/SciBERT-WIKI_Epiphyte_Finetuned
|
ViktorDo
| 2022-10-29T17:39:03Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-29T16:21:50Z |
---
tags:
- generated_from_trainer
model-index:
- name: SciBERT-WIKI_Epiphyte_Finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SciBERT-WIKI_Epiphyte_Finetuned
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0530
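A minimal inference sketch is shown below; the label mapping (e.g. epiphyte vs. non-epiphyte) is not documented on this card, so outputs may surface as generic `LABEL_*` identifiers.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ViktorDo/SciBERT-WIKI_Epiphyte_Finetuned")

# Example sentence is illustrative; label names are not documented on this card.
print(classifier("Tillandsia usneoides grows on the branches of host trees without rooting in soil."))
```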
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0782 | 1.0 | 2094 | 0.0624 |
| 0.0591 | 2.0 | 4188 | 0.0481 |
| 0.0278 | 3.0 | 6282 | 0.0530 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
JojoHovvey/borderlands-diffusion
|
JojoHovvey
| 2022-10-29T12:45:30Z | 0 | 2 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-10-29T09:08:20Z |
---
license: creativeml-openrail-m
---
**Borderlands Diffusion**
This is a fine-tuned Stable Diffusion model trained on screenshots from the video game series Borderlands.
Use the token **_brld_** in your prompts for the style effect.
#### Prompt and settings for portraits:
**brld harrison ford**
_Steps: 50, Sampler: Euler a, CFG scale: 7, Seed: 3940025417_
**brld morgan freeman**
_Steps: 50, Sampler: Euler a, CFG scale: 7, Seed: 3940025417_
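A minimal `diffusers` sketch is shown below, assuming the repository is published in diffusers format; the Euler a sampler from the settings above is approximated here by the pipeline's default scheduler.
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumes the repository is in diffusers format; a .ckpt-only repo would need conversion first.
pipe = StableDiffusionPipeline.from_pretrained(
    "JojoHovvey/borderlands-diffusion", torch_dtype=torch.float16
).to("cuda")

image = pipe("brld harrison ford", num_inference_steps=50, guidance_scale=7).images[0]
image.save("brld_portrait.png")
```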
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights to the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|