The dataset columns and their observed ranges are:

| Column | Type | Range / cardinality |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-09 06:31:45 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 550 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-09 06:31:30 |
| card | string | length 11 to 1.01M |

Each record below lists these fields in order, followed by the full model card.
NasimB/guten-rarity-all-2p5k-plus-wiki-syn-2-14k
|
NasimB
| 2023-07-19T02:08:21Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-19T00:11:11Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: guten-rarity-all-2p5k-plus-wiki-syn-2-14k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# guten-rarity-all-2p5k-plus-wiki-syn-2-14k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3262
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
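As a rough illustration, these settings map onto `transformers.TrainingArguments` roughly as in the sketch below; the `output_dir` and the surrounding `Trainer` wiring are assumptions, not details taken from the original run.
```python
from transformers import TrainingArguments

# Hedged sketch: the hyperparameters listed above expressed as TrainingArguments.
training_args = TrainingArguments(
    output_dir="guten-rarity-all-2p5k-plus-wiki-syn-2-14k",  # assumed output directory
    learning_rate=5e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=6,
    fp16=True,  # "Native AMP" mixed precision
)
```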
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7039 | 0.28 | 500 | 5.6292 |
| 5.317 | 0.56 | 1000 | 5.1996 |
| 4.965 | 0.84 | 1500 | 4.9486 |
| 4.6889 | 1.12 | 2000 | 4.8034 |
| 4.5184 | 1.4 | 2500 | 4.6831 |
| 4.4048 | 1.68 | 3000 | 4.5851 |
| 4.3153 | 1.96 | 3500 | 4.4936 |
| 4.0978 | 2.23 | 4000 | 4.4584 |
| 4.0527 | 2.51 | 4500 | 4.4060 |
| 4.0205 | 2.79 | 5000 | 4.3531 |
| 3.9215 | 3.07 | 5500 | 4.3385 |
| 3.7414 | 3.35 | 6000 | 4.3158 |
| 3.741 | 3.63 | 6500 | 4.2861 |
| 3.7226 | 3.91 | 7000 | 4.2584 |
| 3.5373 | 4.19 | 7500 | 4.2741 |
| 3.4612 | 4.47 | 8000 | 4.2627 |
| 3.462 | 4.75 | 8500 | 4.2509 |
| 3.4281 | 5.03 | 9000 | 4.2514 |
| 3.2731 | 5.31 | 9500 | 4.2551 |
| 3.2696 | 5.59 | 10000 | 4.2549 |
| 3.2714 | 5.87 | 10500 | 4.2542 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
leslyarun/bloom_ncbi_finetuned
|
leslyarun
| 2023-07-19T02:04:55Z | 160 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bloom",
"text-generation",
"en",
"dataset:ncbi_disease",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-01T06:25:29Z |
---
language: en
tags:
- text-generation
datasets:
- ncbi_disease
---
# Bloom model fine-tuned on the NCBI disease dataset to generate similar synthetic data
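A minimal usage sketch follows; the prompt is illustrative only, and it is an assumption that the standard `text-generation` pipeline is the intended way to sample synthetic sentences from this checkpoint.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="leslyarun/bloom_ncbi_finetuned")
# Illustrative prompt; adjust generation parameters to taste.
print(generator("The patient was diagnosed with", max_new_tokens=50)[0]["generated_text"])
```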
|
Falcinspire/q-FrozenLake-v1-4x4-noSlippery
|
Falcinspire
| 2023-07-19T01:53:45Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-19T01:53:43Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your setup

# load_from_hub is the helper defined in the Deep RL course notebook (not a library import)
model = load_from_hub(repo_id="Falcinspire/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
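Once the environment is created, rolling out the greedy policy might look like the sketch below; it assumes the pickled dict stores the table under a `qtable` key, as in the Deep RL course template, and the newer `gymnasium` reset/step API.
```python
import numpy as np

# Roll out one greedy episode with the downloaded Q-table (assumed "qtable" key).
state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```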
|
conorjudge/bert-finetuned-ner
|
conorjudge
| 2023-07-19T01:14:33Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-19T00:56:18Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9300791556728232
- name: Recall
type: recall
value: 0.9491753618310333
- name: F1
type: f1
value: 0.9395302348825587
- name: Accuracy
type: accuracy
value: 0.9856949431918526
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0576
- Precision: 0.9301
- Recall: 0.9492
- F1: 0.9395
- Accuracy: 0.9857
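For reference, a minimal, hedged usage sketch with the `token-classification` pipeline; the example sentence and aggregation strategy are illustrative, not taken from the card.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="conorjudge/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(ner("Hugging Face is based in New York City."))
```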
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0785 | 1.0 | 1756 | 0.0887 | 0.9087 | 0.9318 | 0.9201 | 0.9781 |
| 0.0406 | 2.0 | 3512 | 0.0554 | 0.9236 | 0.9460 | 0.9347 | 0.9856 |
| 0.0257 | 3.0 | 5268 | 0.0576 | 0.9301 | 0.9492 | 0.9395 | 0.9857 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NasimB/cbt-rarity-all-p8k-new-loop-2-pad
|
NasimB
| 2023-07-19T01:07:12Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-18T23:00:02Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt-rarity-all-p8k-new-loop-2-pad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt-rarity-all-p8k-new-loop-2-pad
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1006
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3461 | 0.29 | 500 | 5.3438 |
| 5.0317 | 0.58 | 1000 | 4.9286 |
| 4.7079 | 0.88 | 1500 | 4.6894 |
| 4.4388 | 1.17 | 2000 | 4.5454 |
| 4.2948 | 1.46 | 2500 | 4.4210 |
| 4.1904 | 1.75 | 3000 | 4.3216 |
| 4.073 | 2.04 | 3500 | 4.2445 |
| 3.8868 | 2.33 | 4000 | 4.2018 |
| 3.8634 | 2.63 | 4500 | 4.1489 |
| 3.8217 | 2.92 | 5000 | 4.0997 |
| 3.6311 | 3.21 | 5500 | 4.0929 |
| 3.5766 | 3.5 | 6000 | 4.0624 |
| 3.5658 | 3.79 | 6500 | 4.0332 |
| 3.483 | 4.08 | 7000 | 4.0303 |
| 3.3086 | 4.38 | 7500 | 4.0283 |
| 3.312 | 4.67 | 8000 | 4.0121 |
| 3.2964 | 4.96 | 8500 | 3.9992 |
| 3.1533 | 5.25 | 9000 | 4.0123 |
| 3.1289 | 5.54 | 9500 | 4.0107 |
| 3.1299 | 5.83 | 10000 | 4.0098 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
hoanghoavienvo/roberta-large-stage-one-v3
|
hoanghoavienvo
| 2023-07-19T00:37:02Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-18T22:32:21Z |
---
license: mit
base_model: roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-large-stage-one-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-stage-one-v3
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8643
- Accuracy: 0.718
- F1: 0.7870
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6224 | 1.0 | 1502 | 0.5340 | 0.74 | 0.7940 |
| 0.5996 | 2.0 | 3004 | 0.5983 | 0.732 | 0.7991 |
| 0.6033 | 3.0 | 4506 | 0.8643 | 0.718 | 0.7870 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
anonymous4chan/llama-2-70b
|
anonymous4chan
| 2023-07-19T00:17:37Z | 50 | 4 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-18T19:41:49Z |
---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our
license terms and acceptable use policy before submitting this form. Requests
will be processed in 1-2 days.
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 70B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific format needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double spaces). See our reference code on GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
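As an illustration of that layout, here is a hedged sketch of how a single-turn prompt for the chat variants is assembled; the system and user strings are placeholders, the tokenizer adds the `BOS`/`EOS` tokens, and multi-turn handling is omitted.
```python
# Hypothetical single-turn prompt assembly for the Llama-2-Chat format described above.
system_prompt = "You are a helpful assistant."
user_message = "Explain grouped-query attention in one sentence."

prompt = (
    "[INST] <<SYS>>\n"
    f"{system_prompt}\n"
    "<</SYS>>\n\n"
    f"{user_message.strip()} [/INST]"
)
print(prompt)
```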
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
|
StarRing2022/RWKV-4-World-1.5B-Lora
|
StarRing2022
| 2023-07-19T00:13:50Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-07-17T01:58:14Z |
---
license: apache-2.0
---
Using the HF interface, you can conveniently run LoRA incremental fine-tuning of RWKV on Alpaca-format datasets and deploy it as a service, based on Peft (note: version 0.2) or the RingPeft library https://github.com/StarRing2022/ChatGPTX-Uni
Base model: RWKV-4-World-1.5B (StarRing2022/RWKV-4-World-1.5B)
Dataset: test.json, for testing
Hardware: a single RTX 4090, 64 GB of RAM
Training epochs: 100
Training time: about 5 minutes
Git repository: https://github.com/StarRing2022/HF-For-RWKVWorld-LoraAlpaca/
|
jrad98/ppo-LunarLander-v2
|
jrad98
| 2023-07-19T00:13:30Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-19T00:13:11Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.28 +/- 23.13
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
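Since the card leaves the usage section as a TODO, here is a hedged sketch of one way to load the checkpoint with `huggingface_sb3`; the filename `ppo-LunarLander-v2.zip` and the `gymnasium` reset API are assumptions, not details from the card.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (assumed filename) and load it.
checkpoint = load_from_hub(repo_id="jrad98/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
action, _ = model.predict(obs, deterministic=True)
```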
|
dsmonk/falcon-7b-tuned-alpaca
|
dsmonk
| 2023-07-19T00:05:10Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"base_model:tiiuae/falcon-7b",
"base_model:finetune:tiiuae/falcon-7b",
"license:apache-2.0",
"region:us"
] | null | 2023-07-18T19:01:09Z |
---
license: apache-2.0
base_model: tiiuae/falcon-7b
tags:
- generated_from_trainer
model-index:
- name: falcon-7b-tuned-alpaca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-7b-tuned-alpaca
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.4.0
- Tokenizers 0.12.1
|
giocs2017/ppo-PyramisTraining
|
giocs2017
| 2023-07-19T00:03:03Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-07-19T00:02:57Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: giocs2017/ppo-PyramisTraining
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
takuoko/classic-anime-expressions-lora
|
takuoko
| 2023-07-18T23:55:18Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-07-18T23:39:17Z |
---
license: apache-2.0
---
# This is an example from [this PR](https://github.com/huggingface/diffusers/pull/3912)
# Dataset
I used the dataset from [civitai classic-anime-expressions](https://civitai.com/models/25613/classic-anime-expressions)
# Example result
#### prompt = '1girl, X X'
Note that the random seed has a large influence on the results.

#### prompt = '1girl, >_<'

#### prompt = '1girl, @_@'

|
Apocalypse-19/whisper-tiny-en
|
Apocalypse-19
| 2023-07-18T23:47:49Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-18T22:57:11Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-en
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train[450:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.34297520661157027
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-en
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7823
- Wer Ortho: 0.3492
- Wer: 0.3430
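For reference, a minimal, hedged usage sketch with the `automatic-speech-recognition` pipeline; the audio path is illustrative, and 16 kHz mono audio is assumed.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Apocalypse-19/whisper-tiny-en")
# "sample.wav" is a placeholder path to a local recording.
print(asr("sample.wav")["text"])
```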
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 1500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0016 | 17.86 | 500 | 0.7168 | 0.3504 | 0.3394 |
| 0.0004 | 35.71 | 1000 | 0.7683 | 0.3516 | 0.3442 |
| 0.0003 | 53.57 | 1500 | 0.7823 | 0.3492 | 0.3430 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
shivaneej/subset_model_flan_t5
|
shivaneej
| 2023-07-18T23:47:33Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-small",
"base_model:finetune:google/flan-t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-18T22:22:21Z |
---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: subset_model_flan_t5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# subset_model_flan_t5
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2878
- Rouge1: 0.2857
- Rouge2: 0.15
- Rougel: 0.2857
- Rougelsum: 0.2857
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 1 | 1.3451 | 0.2857 | 0.15 | 0.2857 | 0.2857 | 19.0 |
| No log | 2.0 | 2 | 1.3067 | 0.2857 | 0.15 | 0.2857 | 0.2857 | 19.0 |
| No log | 3.0 | 3 | 1.2925 | 0.2857 | 0.15 | 0.2857 | 0.2857 | 19.0 |
| No log | 4.0 | 4 | 1.2878 | 0.2857 | 0.15 | 0.2857 | 0.2857 | 19.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
aroot/eng-ind-simcse_nbrs_r
|
aroot
| 2023-07-18T23:32:45Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-18T20:39:45Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-ind-simcse_nbrs_r
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-ind-simcse_nbrs_r
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8254
- Bleu: 21.4198
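For reference, a hedged usage sketch assuming the fine-tune kept mBART-50's language codes (`en_XX` for English, `id_ID` for Indonesian) and the standard generate API; none of this is confirmed by the card.
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_id = "aroot/eng-ind-simcse_nbrs_r"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id)
model = MBartForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en_XX"
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["id_ID"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```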
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aidiary/distilhubert-finetuned-gtzan
|
aidiary
| 2023-07-18T23:29:34Z | 162 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:aidiary/distilhubert-finetuned-gtzan",
"base_model:finetune:aidiary/distilhubert-finetuned-gtzan",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-18T11:58:35Z |
---
base_model: aidiary/distilhubert-finetuned-gtzan
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [aidiary/distilhubert-finetuned-gtzan](https://huggingface.co/aidiary/distilhubert-finetuned-gtzan) on the GTZAN dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
aroot/eng-deu-simcse_nbrs_r
|
aroot
| 2023-07-18T23:19:01Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-18T20:26:06Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-deu-simcse_nbrs_r
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-deu-simcse_nbrs_r
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6855
- Bleu: 20.7310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-ind-simcse_nbrs_l
|
aroot
| 2023-07-18T23:15:24Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-18T19:58:42Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-ind-simcse_nbrs_l
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-ind-simcse_nbrs_l
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8605
- Bleu: 21.2163
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
giocs2017/ppo-SnowballTarget
|
giocs2017
| 2023-07-18T23:02:54Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-18T23:02:48Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: giocs2017/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
msomnath/bloomz_ft1b7_sm
|
msomnath
| 2023-07-18T23:00:01Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-18T22:59:55Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
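A hedged sketch of reloading this adapter with the 8-bit settings listed above is given below; the base model id `bigscience/bloomz-1b7` is inferred from the repo name and is not stated in the card.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Assumed base model; the adapter repo only records the quantization config.
base_id = "bigscience/bloomz-1b7"
bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)

base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "msomnath/bloomz_ft1b7_sm")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```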
### Framework versions
- PEFT 0.5.0.dev0
|
underactuated/opt-350m_ft_v4
|
underactuated
| 2023-07-18T22:58:41Z | 144 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-18T22:56:33Z |
---
tags:
- generated_from_trainer
model-index:
- name: opt-350m_ft_v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-350m_ft_v4
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
DavidFM43/distilhubert-finetuned-gtzan
|
DavidFM43
| 2023-07-18T22:45:40Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-14T19:38:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6925
- Accuracy: 0.83
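For reference, a minimal, hedged usage sketch with the `audio-classification` pipeline; the clip path is illustrative.
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="DavidFM43/distilhubert-finetuned-gtzan")
# "clip.wav" is a placeholder path to a music excerpt.
print(classifier("clip.wav"))
```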
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001115511981046745
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 1.278 | 1.0 | 112 | 0.57 | 1.3298 |
| 0.8315 | 2.0 | 225 | 0.73 | 0.9432 |
| 0.7709 | 3.0 | 337 | 0.72 | 0.9310 |
| 0.5427 | 4.0 | 450 | 0.72 | 0.8738 |
| 0.2645 | 4.98 | 560 | 0.79 | 0.6648 |
| 0.245 | 6.0 | 672 | 0.83 | 0.6147 |
| 0.1331 | 6.99 | 784 | 0.83 | 0.6305 |
| 0.1863 | 8.0 | 896 | 0.84 | 0.6356 |
| 0.0843 | 8.99 | 1008 | 0.83 | 0.6925 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
aroot/eng-kor-tok_budget_longest
|
aroot
| 2023-07-18T22:44:43Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-18T22:31:47Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-kor-tok_budget_longest
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-kor-tok_budget_longest
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2224
- Bleu: 5.0033
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jordyvl/18-tiny_tobacco3482_kd_NKD_t1.0_g1.5
|
jordyvl
| 2023-07-18T22:42:34Z | 166 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-18T22:07:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 18-tiny_tobacco3482_kd_NKD_t1.0_g1.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 18-tiny_tobacco3482_kd_NKD_t1.0_g1.5
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0957
- Accuracy: 0.805
- Brier Loss: 0.2927
- Nll: 1.1753
- F1 Micro: 0.805
- F1 Macro: 0.7833
- Ece: 0.1572
- Aurc: 0.0655
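For reference, a minimal, hedged usage sketch with the `image-classification` pipeline; the image path is illustrative.
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jordyvl/18-tiny_tobacco3482_kd_NKD_t1.0_g1.5")
# "document.png" is a placeholder path to a scanned document image.
print(classifier("document.png"))
```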
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 4.7898 | 0.1 | 1.0292 | 9.4902 | 0.1000 | 0.0772 | 0.3220 | 0.9001 |
| No log | 2.0 | 14 | 3.9970 | 0.1 | 0.9420 | 10.0981 | 0.1000 | 0.1071 | 0.2441 | 0.8581 |
| No log | 3.0 | 21 | 3.6641 | 0.075 | 0.8956 | 9.5324 | 0.075 | 0.0777 | 0.1896 | 0.9137 |
| No log | 4.0 | 28 | 3.6014 | 0.18 | 0.8691 | 9.6679 | 0.18 | 0.0781 | 0.2345 | 0.5824 |
| No log | 5.0 | 35 | 3.5833 | 0.23 | 0.8347 | 9.6569 | 0.23 | 0.1572 | 0.2618 | 0.5205 |
| No log | 6.0 | 42 | 3.5576 | 0.44 | 0.7860 | 5.9410 | 0.44 | 0.2946 | 0.3475 | 0.3232 |
| No log | 7.0 | 49 | 3.5400 | 0.575 | 0.7404 | 4.2387 | 0.575 | 0.4638 | 0.4007 | 0.2294 |
| No log | 8.0 | 56 | 3.5319 | 0.545 | 0.7181 | 4.5958 | 0.545 | 0.4482 | 0.3502 | 0.2374 |
| No log | 9.0 | 63 | 3.5405 | 0.52 | 0.7002 | 3.9862 | 0.52 | 0.4101 | 0.3148 | 0.2506 |
| No log | 10.0 | 70 | 3.5341 | 0.61 | 0.6897 | 3.2707 | 0.61 | 0.5118 | 0.3775 | 0.2235 |
| No log | 11.0 | 77 | 3.5259 | 0.66 | 0.6771 | 2.6882 | 0.66 | 0.5201 | 0.4365 | 0.1420 |
| No log | 12.0 | 84 | 3.5215 | 0.66 | 0.6463 | 2.4544 | 0.66 | 0.5387 | 0.3750 | 0.1664 |
| No log | 13.0 | 91 | 3.5363 | 0.58 | 0.6232 | 2.3149 | 0.58 | 0.5090 | 0.3285 | 0.1858 |
| No log | 14.0 | 98 | 3.5161 | 0.675 | 0.6008 | 2.6144 | 0.675 | 0.5411 | 0.3690 | 0.1237 |
| No log | 15.0 | 105 | 3.5073 | 0.67 | 0.5845 | 2.1229 | 0.67 | 0.5577 | 0.3405 | 0.1350 |
| No log | 16.0 | 112 | 3.5272 | 0.67 | 0.5338 | 2.4215 | 0.67 | 0.5603 | 0.3154 | 0.1325 |
| No log | 17.0 | 119 | 3.5332 | 0.695 | 0.5367 | 2.1675 | 0.695 | 0.6056 | 0.3140 | 0.1071 |
| No log | 18.0 | 126 | 3.5659 | 0.655 | 0.4841 | 1.9565 | 0.655 | 0.5559 | 0.2600 | 0.1365 |
| No log | 19.0 | 133 | 3.5438 | 0.69 | 0.4817 | 1.8201 | 0.69 | 0.5735 | 0.2574 | 0.1202 |
| No log | 20.0 | 140 | 3.5019 | 0.74 | 0.4725 | 1.6346 | 0.74 | 0.6486 | 0.2939 | 0.0931 |
| No log | 21.0 | 147 | 3.5236 | 0.755 | 0.4407 | 1.3134 | 0.755 | 0.6811 | 0.2762 | 0.0820 |
| No log | 22.0 | 154 | 3.5303 | 0.755 | 0.4143 | 1.2834 | 0.755 | 0.6843 | 0.2434 | 0.0806 |
| No log | 23.0 | 161 | 3.5541 | 0.77 | 0.4034 | 1.4417 | 0.7700 | 0.6891 | 0.2382 | 0.0842 |
| No log | 24.0 | 168 | 3.5675 | 0.765 | 0.3853 | 1.6692 | 0.765 | 0.7072 | 0.2309 | 0.0807 |
| No log | 25.0 | 175 | 3.5411 | 0.745 | 0.3914 | 1.2777 | 0.745 | 0.6720 | 0.2271 | 0.0784 |
| No log | 26.0 | 182 | 3.5877 | 0.75 | 0.3710 | 1.4838 | 0.75 | 0.6717 | 0.2082 | 0.0789 |
| No log | 27.0 | 189 | 3.6026 | 0.77 | 0.3483 | 1.4211 | 0.7700 | 0.7018 | 0.2089 | 0.0694 |
| No log | 28.0 | 196 | 3.6374 | 0.78 | 0.3365 | 1.3205 | 0.78 | 0.7181 | 0.1953 | 0.0694 |
| No log | 29.0 | 203 | 3.7319 | 0.775 | 0.3538 | 1.2749 | 0.775 | 0.7012 | 0.2149 | 0.0814 |
| No log | 30.0 | 210 | 3.6359 | 0.805 | 0.3291 | 1.3272 | 0.805 | 0.7761 | 0.1991 | 0.0637 |
| No log | 31.0 | 217 | 3.7160 | 0.785 | 0.3337 | 1.2632 | 0.785 | 0.7445 | 0.1727 | 0.0757 |
| No log | 32.0 | 224 | 3.6810 | 0.8 | 0.3234 | 1.3720 | 0.8000 | 0.7636 | 0.1999 | 0.0649 |
| No log | 33.0 | 231 | 3.7139 | 0.82 | 0.3221 | 1.2150 | 0.82 | 0.7919 | 0.2051 | 0.0677 |
| No log | 34.0 | 238 | 3.7286 | 0.795 | 0.3130 | 1.0622 | 0.795 | 0.7575 | 0.1919 | 0.0639 |
| No log | 35.0 | 245 | 3.7807 | 0.795 | 0.3154 | 1.0146 | 0.795 | 0.7672 | 0.1565 | 0.0714 |
| No log | 36.0 | 252 | 3.6802 | 0.815 | 0.3131 | 1.0083 | 0.815 | 0.7933 | 0.2051 | 0.0626 |
| No log | 37.0 | 259 | 3.7369 | 0.81 | 0.3168 | 1.0017 | 0.81 | 0.7862 | 0.1792 | 0.0690 |
| No log | 38.0 | 266 | 3.7638 | 0.82 | 0.2971 | 1.3357 | 0.82 | 0.7977 | 0.1913 | 0.0628 |
| No log | 39.0 | 273 | 3.7415 | 0.825 | 0.2954 | 1.0423 | 0.825 | 0.8072 | 0.1893 | 0.0599 |
| No log | 40.0 | 280 | 3.8005 | 0.785 | 0.3140 | 1.0817 | 0.785 | 0.7453 | 0.1694 | 0.0684 |
| No log | 41.0 | 287 | 3.7901 | 0.82 | 0.3127 | 1.0853 | 0.82 | 0.7993 | 0.1789 | 0.0673 |
| No log | 42.0 | 294 | 3.7811 | 0.825 | 0.3019 | 1.2712 | 0.825 | 0.8020 | 0.1644 | 0.0644 |
| No log | 43.0 | 301 | 3.7689 | 0.81 | 0.3110 | 0.8553 | 0.81 | 0.7932 | 0.1785 | 0.0645 |
| No log | 44.0 | 308 | 3.7796 | 0.82 | 0.2919 | 1.2589 | 0.82 | 0.7972 | 0.1875 | 0.0643 |
| No log | 45.0 | 315 | 3.8005 | 0.805 | 0.3036 | 1.1993 | 0.805 | 0.7789 | 0.1840 | 0.0660 |
| No log | 46.0 | 322 | 3.7811 | 0.82 | 0.2909 | 1.0962 | 0.82 | 0.8004 | 0.1735 | 0.0618 |
| No log | 47.0 | 329 | 3.8145 | 0.8 | 0.3040 | 1.1968 | 0.8000 | 0.7759 | 0.1795 | 0.0671 |
| No log | 48.0 | 336 | 3.7969 | 0.835 | 0.2816 | 1.1019 | 0.835 | 0.8118 | 0.1624 | 0.0603 |
| No log | 49.0 | 343 | 3.8020 | 0.815 | 0.2855 | 1.0383 | 0.815 | 0.7978 | 0.1556 | 0.0639 |
| No log | 50.0 | 350 | 3.8049 | 0.815 | 0.2884 | 1.1121 | 0.815 | 0.7935 | 0.1608 | 0.0616 |
| No log | 51.0 | 357 | 3.8048 | 0.81 | 0.2873 | 1.1173 | 0.81 | 0.7898 | 0.1574 | 0.0632 |
| No log | 52.0 | 364 | 3.8581 | 0.8 | 0.2923 | 1.1257 | 0.8000 | 0.7767 | 0.1436 | 0.0664 |
| No log | 53.0 | 371 | 3.8565 | 0.79 | 0.2984 | 1.0513 | 0.79 | 0.7670 | 0.1622 | 0.0668 |
| No log | 54.0 | 378 | 3.8787 | 0.805 | 0.2901 | 1.0619 | 0.805 | 0.7874 | 0.1335 | 0.0655 |
| No log | 55.0 | 385 | 3.8777 | 0.805 | 0.2940 | 1.0378 | 0.805 | 0.7883 | 0.1450 | 0.0647 |
| No log | 56.0 | 392 | 3.8743 | 0.805 | 0.2906 | 1.1702 | 0.805 | 0.7849 | 0.1610 | 0.0634 |
| No log | 57.0 | 399 | 3.9082 | 0.795 | 0.2959 | 1.0951 | 0.795 | 0.7711 | 0.1761 | 0.0662 |
| No log | 58.0 | 406 | 3.8894 | 0.8 | 0.2898 | 1.0979 | 0.8000 | 0.7816 | 0.1774 | 0.0638 |
| No log | 59.0 | 413 | 3.9005 | 0.825 | 0.2914 | 1.2358 | 0.825 | 0.8088 | 0.1687 | 0.0637 |
| No log | 60.0 | 420 | 3.9115 | 0.815 | 0.2863 | 1.0318 | 0.815 | 0.7928 | 0.1672 | 0.0640 |
| No log | 61.0 | 427 | 3.9172 | 0.805 | 0.2956 | 1.1397 | 0.805 | 0.7884 | 0.1646 | 0.0667 |
| No log | 62.0 | 434 | 3.8993 | 0.82 | 0.2862 | 1.2349 | 0.82 | 0.8001 | 0.1544 | 0.0645 |
| No log | 63.0 | 441 | 3.9334 | 0.825 | 0.2896 | 1.1718 | 0.825 | 0.8061 | 0.1662 | 0.0646 |
| No log | 64.0 | 448 | 3.9179 | 0.815 | 0.2861 | 1.1727 | 0.815 | 0.7966 | 0.1592 | 0.0650 |
| No log | 65.0 | 455 | 3.9489 | 0.8 | 0.2981 | 1.1681 | 0.8000 | 0.7805 | 0.1522 | 0.0674 |
| No log | 66.0 | 462 | 3.9372 | 0.81 | 0.2855 | 1.1041 | 0.81 | 0.7870 | 0.1709 | 0.0647 |
| No log | 67.0 | 469 | 3.9651 | 0.8 | 0.2935 | 1.1723 | 0.8000 | 0.7816 | 0.1492 | 0.0667 |
| No log | 68.0 | 476 | 3.9600 | 0.815 | 0.2903 | 1.1687 | 0.815 | 0.7950 | 0.1466 | 0.0650 |
| No log | 69.0 | 483 | 3.9695 | 0.82 | 0.2908 | 1.1251 | 0.82 | 0.8026 | 0.1532 | 0.0654 |
| No log | 70.0 | 490 | 3.9817 | 0.805 | 0.2915 | 1.1879 | 0.805 | 0.7861 | 0.1537 | 0.0657 |
| No log | 71.0 | 497 | 3.9838 | 0.81 | 0.2899 | 1.1688 | 0.81 | 0.7892 | 0.1538 | 0.0648 |
| 3.4085 | 72.0 | 504 | 3.9960 | 0.805 | 0.2910 | 1.1702 | 0.805 | 0.7904 | 0.1568 | 0.0657 |
| 3.4085 | 73.0 | 511 | 4.0046 | 0.8 | 0.2931 | 1.1743 | 0.8000 | 0.7800 | 0.1529 | 0.0658 |
| 3.4085 | 74.0 | 518 | 4.0115 | 0.815 | 0.2917 | 1.1718 | 0.815 | 0.7968 | 0.1589 | 0.0647 |
| 3.4085 | 75.0 | 525 | 4.0205 | 0.805 | 0.2920 | 1.1719 | 0.805 | 0.7833 | 0.1575 | 0.0654 |
| 3.4085 | 76.0 | 532 | 4.0272 | 0.805 | 0.2919 | 1.1725 | 0.805 | 0.7833 | 0.1547 | 0.0659 |
| 3.4085 | 77.0 | 539 | 4.0323 | 0.81 | 0.2923 | 1.1720 | 0.81 | 0.7892 | 0.1547 | 0.0653 |
| 3.4085 | 78.0 | 546 | 4.0364 | 0.81 | 0.2907 | 1.1715 | 0.81 | 0.7892 | 0.1607 | 0.0650 |
| 3.4085 | 79.0 | 553 | 4.0405 | 0.81 | 0.2910 | 1.1716 | 0.81 | 0.7892 | 0.1451 | 0.0650 |
| 3.4085 | 80.0 | 560 | 4.0476 | 0.81 | 0.2917 | 1.1743 | 0.81 | 0.7892 | 0.1453 | 0.0650 |
| 3.4085 | 81.0 | 567 | 4.0529 | 0.805 | 0.2921 | 1.1736 | 0.805 | 0.7833 | 0.1573 | 0.0654 |
| 3.4085 | 82.0 | 574 | 4.0570 | 0.805 | 0.2919 | 1.1741 | 0.805 | 0.7861 | 0.1717 | 0.0655 |
| 3.4085 | 83.0 | 581 | 4.0601 | 0.81 | 0.2918 | 1.1727 | 0.81 | 0.7892 | 0.1508 | 0.0650 |
| 3.4085 | 84.0 | 588 | 4.0643 | 0.81 | 0.2919 | 1.1743 | 0.81 | 0.7892 | 0.1507 | 0.0652 |
| 3.4085 | 85.0 | 595 | 4.0678 | 0.81 | 0.2922 | 1.1744 | 0.81 | 0.7892 | 0.1552 | 0.0651 |
| 3.4085 | 86.0 | 602 | 4.0743 | 0.81 | 0.2925 | 1.1746 | 0.81 | 0.7892 | 0.1526 | 0.0651 |
| 3.4085 | 87.0 | 609 | 4.0758 | 0.805 | 0.2924 | 1.1753 | 0.805 | 0.7833 | 0.1718 | 0.0653 |
| 3.4085 | 88.0 | 616 | 4.0796 | 0.805 | 0.2924 | 1.1758 | 0.805 | 0.7833 | 0.1567 | 0.0654 |
| 3.4085 | 89.0 | 623 | 4.0803 | 0.81 | 0.2920 | 1.1742 | 0.81 | 0.7892 | 0.1587 | 0.0650 |
| 3.4085 | 90.0 | 630 | 4.0842 | 0.81 | 0.2925 | 1.1744 | 0.81 | 0.7892 | 0.1529 | 0.0651 |
| 3.4085 | 91.0 | 637 | 4.0864 | 0.805 | 0.2926 | 1.1752 | 0.805 | 0.7833 | 0.1568 | 0.0654 |
| 3.4085 | 92.0 | 644 | 4.0880 | 0.81 | 0.2925 | 1.1757 | 0.81 | 0.7892 | 0.1526 | 0.0651 |
| 3.4085 | 93.0 | 651 | 4.0903 | 0.805 | 0.2927 | 1.1752 | 0.805 | 0.7833 | 0.1567 | 0.0654 |
| 3.4085 | 94.0 | 658 | 4.0918 | 0.805 | 0.2927 | 1.1750 | 0.805 | 0.7833 | 0.1572 | 0.0655 |
| 3.4085 | 95.0 | 665 | 4.0927 | 0.805 | 0.2926 | 1.1750 | 0.805 | 0.7833 | 0.1570 | 0.0655 |
| 3.4085 | 96.0 | 672 | 4.0937 | 0.805 | 0.2927 | 1.1751 | 0.805 | 0.7833 | 0.1572 | 0.0655 |
| 3.4085 | 97.0 | 679 | 4.0946 | 0.805 | 0.2926 | 1.1750 | 0.805 | 0.7833 | 0.1573 | 0.0655 |
| 3.4085 | 98.0 | 686 | 4.0950 | 0.805 | 0.2926 | 1.1752 | 0.805 | 0.7833 | 0.1572 | 0.0655 |
| 3.4085 | 99.0 | 693 | 4.0955 | 0.805 | 0.2927 | 1.1753 | 0.805 | 0.7833 | 0.1572 | 0.0655 |
| 3.4085 | 100.0 | 700 | 4.0957 | 0.805 | 0.2927 | 1.1753 | 0.805 | 0.7833 | 0.1572 | 0.0655 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
il18/ppo-Huggy
|
il18
| 2023-07-18T22:41:27Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-18T22:41:18Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: il18/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Falcinspire/ppo-LunarLander-v2
|
Falcinspire
| 2023-07-18T22:39:33Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-18T22:13:05Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 282.08 +/- 17.86
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption and may differ in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (filename assumed)
checkpoint = load_from_hub("Falcinspire/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
coreml-projects/Llama-2-7b-chat-coreml
|
coreml-projects
| 2023-07-18T22:34:22Z | 4,217 | 135 |
transformers
|
[
"transformers",
"coreml",
"llama",
"text-generation",
"meta",
"llama-2",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-18T19:20:20Z |
---
license: other
tags:
- meta
- coreml
- llama
- llama-2
---
# **Core ML version of Llama 2**
This is a Core ML version of [`meta-llama/Llama-2-7b-chat-hf`](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf). For [license](LICENSE.txt) information, model details and acceptable [use policy](USE_POLICY.md), please refer to [the original model card](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf).
This conversion was performed in `float16` mode with a fixed sequence length of `64`, and is intended for evaluation and testing purposes. Please open a conversation in the `Community` tab if you have questions or want to report an issue.
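As a quick sanity check, the converted package can be opened from Python with `coremltools` before wiring it into a Swift app. This is a minimal sketch; the `.mlpackage` filename is an assumption and should be adjusted to the file actually shipped in this repo.
```python
import coremltools as ct

# Load the converted Core ML package (filename assumed; check the repo file listing)
model = ct.models.MLModel("Llama-2-7b-chat.mlpackage")

# Inspect the expected inputs/outputs (remember: the sequence length is fixed at 64)
print(model.get_spec().description)
```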
|
karinthommen/whisper-V4-small-2
|
karinthommen
| 2023-07-18T22:28:22Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-18T17:28:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: whisper-V4-small-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-V4-small-2
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset.
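The card provides no usage details, so here is a minimal transcription sketch using the `transformers` pipeline; the audio file path is a placeholder and the language of the fine-tuning data is unknown.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an automatic-speech-recognition pipeline
asr = pipeline("automatic-speech-recognition", model="karinthommen/whisper-V4-small-2")

# Transcribe a local audio file (path is a placeholder)
print(asr("sample.wav")["text"])
```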
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
aroot/eng-mya-simcse_nbrs_l
|
aroot
| 2023-07-18T22:27:01Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-18T19:57:14Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse_nbrs_l
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-simcse_nbrs_l
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9966
- Bleu: 3.9919
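No usage snippet is given in the card; a minimal sketch for English-to-Burmese translation is shown below. The mBART-50 language codes `en_XX` and `my_MM` are assumptions carried over from the base model and may not match the fine-tuning setup exactly.
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("aroot/eng-mya-simcse_nbrs_l")
tokenizer = MBart50TokenizerFast.from_pretrained("aroot/eng-mya-simcse_nbrs_l")

# Source language is English; force the decoder to start with the Burmese language code
tokenizer.src_lang = "en_XX"
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["my_MM"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```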
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-deu-tok_budget_random
|
aroot
| 2023-07-18T22:25:59Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-18T22:12:17Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-deu-tok_budget_random
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-deu-tok_budget_random
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6856
- Bleu: 20.4422
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-deu-tok_budget_longest
|
aroot
| 2023-07-18T22:17:23Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-18T22:04:22Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-deu-tok_budget_longest
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-deu-tok_budget_longest
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7355
- Bleu: 19.3790
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
shamikbose89/mt5-small-finetuned-arxiv-cs
|
shamikbose89
| 2023-07-18T22:12:15Z | 126 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"summarization",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
- summarization
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-arxiv-cs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-arxiv-cs
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on a subset of the arxiv dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6922
- Rouge1: 0.7734
- Rouge2: 0.2865
- Rougel: 0.6665
- Rougelsum: 0.6743
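No usage snippet is given in the card; a minimal summarization sketch is shown below, where the input abstract and generation lengths are placeholders.
```python
from transformers import pipeline

# Load the fine-tuned mT5 summarizer
summarizer = pipeline("summarization", model="shamikbose89/mt5-small-finetuned-arxiv-cs")

abstract = "We present a method for efficient training of large language models on scientific text."  # placeholder arXiv abstract
print(summarizer(abstract, max_length=64, min_length=8)[0]["summary_text"])
```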
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 14.0947 | 1.0 | 500 | 2.7666 | 1.2101 | 0.459 | 1.1426 | 1.1385 |
| 2.8524 | 2.0 | 1000 | 1.8208 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.2623 | 3.0 | 1500 | 1.6922 | 0.7734 | 0.2865 | 0.6665 | 0.6743 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
aroot/eng-guj-simcse_nbrs_l
|
aroot
| 2023-07-18T22:10:10Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-18T19:31:14Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_nbrs_l
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse_nbrs_l
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4395
- Bleu: 2.3465
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-mya-tok_budget_longest
|
aroot
| 2023-07-18T22:03:54Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-18T21:47:41Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-tok_budget_longest
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-tok_budget_longest
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1903
- Bleu: 2.8970
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ailabturkiye/Jhin
|
ailabturkiye
| 2023-07-18T21:59:12Z | 0 | 0 | null |
[
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-18T21:55:29Z |
---
license: openrail
language:
- tr
---
Jhin (League of Legends), 310 epochs
This model belongs entirely to me. If you share it on any platform without providing our Discord link, your video will be taken down.
|
vishalkm/medalpaca-7b
|
vishalkm
| 2023-07-18T21:54:12Z | 43 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"medical",
"en",
"arxiv:2303.14070",
"license:cc",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-18T06:57:12Z |
---
license: cc
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- medical
---
# MedAlpaca 7b
## Table of Contents
[Model Description](#model-description)
- [Architecture](#architecture)
- [Training Data](#training-data)
[Model Usage](#model-usage)
[Limitations](#limitations)
## Model Description
### Architecture
`medalpaca-7b` is a large language model specifically fine-tuned for medical domain tasks.
It is based on LLaMA (Large Language Model Meta AI) and contains 7 billion parameters.
The primary goal of this model is to improve question-answering and medical dialogue tasks.
### Training Data
The training data for this project was sourced from various resources.
Firstly, we used Anki flashcards to automatically generate questions
from the front of the cards and answers from the back of the cards.
Secondly, we generated medical question-answer pairs from [Wikidoc](https://www.wikidoc.org/index.php/Main_Page).
We extracted paragraphs with relevant headings, and used Chat-GPT 3.5
to generate questions from the headings, using the corresponding paragraphs
as answers. This dataset is still under development, and we believe
that approximately 70% of these question-answer pairs are factually correct.
Thirdly, we used StackExchange to extract question-answer pairs, taking the
top-rated question from five categories: Academia, Bioinformatics, Biology,
Fitness, and Health. Additionally, we used a dataset from [ChatDoctor](https://arxiv.org/abs/2303.14070)
consisting of 200,000 question-answer pairs, available at https://github.com/Kent0n-Li/ChatDoctor.
| Source | n items |
|------------------------------|--------|
| ChatDoc large | 200000 |
| wikidoc | 67704 |
| Stackexchange academia | 40865 |
| Anki flashcards | 33955 |
| Stackexchange biology | 27887 |
| Stackexchange fitness | 9833 |
| Stackexchange health | 7721 |
| Wikidoc patient information | 5942 |
| Stackexchange bioinformatics | 5407 |
## Model Usage
To evaluate the performance of the model on a specific dataset, you can use the Hugging Face Transformers library's built-in evaluation scripts. Please refer to the evaluation guide for more information.
### Inference
You can use the model for inference tasks like question-answering and medical dialogues using the Hugging Face Transformers library. Here's an example of how to use the model for a question-answering task:
```python
from transformers import pipeline
pl = pipeline("text-generation", model="medalpaca/medalpaca-7b", tokenizer="medalpaca/medalpaca-7b")
question = "What are the symptoms of diabetes?"
context = "Diabetes is a metabolic disease that causes high blood sugar. The symptoms include increased thirst, frequent urination, and unexplained weight loss."
answer = pl(f"Context: {context}\n\nQuestion: {question}\n\nAnswer: ")
print(answer)
```
## Limitations
The model may not perform effectively outside the scope of the medical domain.
The training data primarily targets the knowledge level of medical students,
which may result in limitations when addressing the needs of board-certified physicians.
The model has not been tested in real-world applications, so its efficacy and accuracy are currently unknown.
It should never be used as a substitute for a doctor's opinion and must be treated as a research tool only.
|
aroot/eng-guj-tok_budget_random
|
aroot
| 2023-07-18T21:50:16Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-18T21:29:09Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-tok_budget_random
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-tok_budget_random
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3275
- Bleu: 2.7936
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-guj-tok_budget_longest
|
aroot
| 2023-07-18T21:47:13Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-18T21:28:39Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-tok_budget_longest
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-tok_budget_longest
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6616
- Bleu: 1.6642
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
acdg1214/ppo-LunarLander-v2
|
acdg1214
| 2023-07-18T21:46:09Z | 5 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-18T21:45:48Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 249.77 +/- 14.25
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption and may differ in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (filename assumed)
checkpoint = load_from_hub("acdg1214/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
shivaneej/subset_model_t5
|
shivaneej
| 2023-07-18T21:34:31Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-18T21:24:26Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: subset_model_t5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# subset_model_t5
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7052
- Rouge1: 0.1
- Rouge2: 0.0
- Rougel: 0.1
- Rougelsum: 0.1
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 1 | 1.8253 | 0.1 | 0.0 | 0.1 | 0.1 | 19.0 |
| No log | 2.0 | 2 | 1.7629 | 0.1 | 0.0 | 0.1 | 0.1 | 19.0 |
| No log | 3.0 | 3 | 1.7243 | 0.1 | 0.0 | 0.1 | 0.1 | 19.0 |
| No log | 4.0 | 4 | 1.7052 | 0.1 | 0.0 | 0.1 | 0.1 | 19.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
aroot/eng-fra-simcse_nbrs_l
|
aroot
| 2023-07-18T21:32:09Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-18T19:30:29Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_nbrs_l
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_nbrs_l
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1733
- Bleu: 31.8818
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jordyvl/18-tiny_tobacco3482_kd_CEKD_t2.5_a0.5
|
jordyvl
| 2023-07-18T21:31:20Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-18T20:53:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 18-tiny_tobacco3482_kd_CEKD_t2.5_a0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 18-tiny_tobacco3482_kd_CEKD_t2.5_a0.5
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6385
- Accuracy: 0.795
- Brier Loss: 0.4484
- Nll: 0.9250
- F1 Micro: 0.795
- F1 Macro: 0.7709
- Ece: 0.4225
- Aurc: 0.0567
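No usage snippet is given in the card; a minimal inference sketch for the distilled ViT-tiny document classifier is shown below, with the image path as a placeholder.
```python
from transformers import pipeline

# Load the fine-tuned ViT-tiny checkpoint as an image-classification pipeline
classifier = pipeline("image-classification", model="jordyvl/18-tiny_tobacco3482_kd_CEKD_t2.5_a0.5")

# Classify a scanned document image (path is a placeholder)
print(classifier("document_scan.png"))
```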
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 1.8736 | 0.105 | 1.0144 | 8.6059 | 0.1050 | 0.0844 | 0.3169 | 0.8967 |
| No log | 2.0 | 14 | 1.2559 | 0.155 | 0.8899 | 7.1587 | 0.155 | 0.1259 | 0.2459 | 0.7824 |
| No log | 3.0 | 21 | 1.0441 | 0.33 | 0.8123 | 5.3633 | 0.33 | 0.2575 | 0.2995 | 0.5173 |
| No log | 4.0 | 28 | 0.9169 | 0.525 | 0.6852 | 3.4671 | 0.525 | 0.4253 | 0.3387 | 0.2892 |
| No log | 5.0 | 35 | 0.8589 | 0.615 | 0.6269 | 3.1119 | 0.615 | 0.5500 | 0.3683 | 0.2124 |
| No log | 6.0 | 42 | 0.7954 | 0.675 | 0.5756 | 2.2578 | 0.675 | 0.5752 | 0.3626 | 0.1550 |
| No log | 7.0 | 49 | 0.7664 | 0.685 | 0.5143 | 1.8811 | 0.685 | 0.6073 | 0.3254 | 0.1390 |
| No log | 8.0 | 56 | 0.7305 | 0.76 | 0.4895 | 1.5449 | 0.76 | 0.6768 | 0.3695 | 0.1016 |
| No log | 9.0 | 63 | 0.7056 | 0.765 | 0.4721 | 1.3575 | 0.765 | 0.6991 | 0.3828 | 0.0927 |
| No log | 10.0 | 70 | 0.6961 | 0.77 | 0.4380 | 1.2662 | 0.7700 | 0.7509 | 0.3549 | 0.0803 |
| No log | 11.0 | 77 | 0.6772 | 0.81 | 0.4508 | 1.3169 | 0.81 | 0.7915 | 0.4175 | 0.0629 |
| No log | 12.0 | 84 | 0.6766 | 0.785 | 0.4491 | 1.2979 | 0.785 | 0.7650 | 0.3839 | 0.0800 |
| No log | 13.0 | 91 | 0.6754 | 0.785 | 0.4382 | 1.2395 | 0.785 | 0.7794 | 0.3609 | 0.0689 |
| No log | 14.0 | 98 | 0.6768 | 0.8 | 0.4472 | 1.2218 | 0.8000 | 0.7837 | 0.3910 | 0.0640 |
| No log | 15.0 | 105 | 0.6793 | 0.81 | 0.4663 | 1.2698 | 0.81 | 0.7856 | 0.4293 | 0.0672 |
| No log | 16.0 | 112 | 0.6784 | 0.795 | 0.4726 | 1.3043 | 0.795 | 0.7728 | 0.4232 | 0.0669 |
| No log | 17.0 | 119 | 0.6638 | 0.805 | 0.4372 | 1.2746 | 0.805 | 0.7747 | 0.3956 | 0.0677 |
| No log | 18.0 | 126 | 0.6588 | 0.8 | 0.4297 | 1.4466 | 0.8000 | 0.7762 | 0.3866 | 0.0686 |
| No log | 19.0 | 133 | 0.6588 | 0.81 | 0.4588 | 1.2093 | 0.81 | 0.7912 | 0.4029 | 0.0702 |
| No log | 20.0 | 140 | 0.6587 | 0.81 | 0.4534 | 1.0697 | 0.81 | 0.7980 | 0.4197 | 0.0641 |
| No log | 21.0 | 147 | 0.6527 | 0.815 | 0.4529 | 1.1527 | 0.815 | 0.7942 | 0.4196 | 0.0598 |
| No log | 22.0 | 154 | 0.6608 | 0.78 | 0.4559 | 1.2039 | 0.78 | 0.7581 | 0.3612 | 0.0725 |
| No log | 23.0 | 161 | 0.6558 | 0.8 | 0.4547 | 1.0687 | 0.8000 | 0.7644 | 0.3964 | 0.0584 |
| No log | 24.0 | 168 | 0.6584 | 0.8 | 0.4491 | 1.2869 | 0.8000 | 0.7735 | 0.3810 | 0.0687 |
| No log | 25.0 | 175 | 0.6493 | 0.805 | 0.4497 | 0.9981 | 0.805 | 0.7887 | 0.4162 | 0.0570 |
| No log | 26.0 | 182 | 0.6425 | 0.795 | 0.4424 | 1.1317 | 0.795 | 0.7790 | 0.3974 | 0.0596 |
| No log | 27.0 | 189 | 0.6518 | 0.8 | 0.4552 | 0.9743 | 0.8000 | 0.7715 | 0.4122 | 0.0592 |
| No log | 28.0 | 196 | 0.6526 | 0.805 | 0.4630 | 1.1343 | 0.805 | 0.7941 | 0.4171 | 0.0672 |
| No log | 29.0 | 203 | 0.6515 | 0.8 | 0.4531 | 1.0062 | 0.8000 | 0.7681 | 0.3970 | 0.0566 |
| No log | 30.0 | 210 | 0.6459 | 0.795 | 0.4534 | 1.0893 | 0.795 | 0.7853 | 0.3972 | 0.0600 |
| No log | 31.0 | 217 | 0.6423 | 0.81 | 0.4483 | 0.9035 | 0.81 | 0.7927 | 0.4297 | 0.0536 |
| No log | 32.0 | 224 | 0.6454 | 0.8 | 0.4517 | 1.1025 | 0.8000 | 0.7688 | 0.3923 | 0.0599 |
| No log | 33.0 | 231 | 0.6417 | 0.805 | 0.4476 | 0.9658 | 0.805 | 0.7767 | 0.4136 | 0.0563 |
| No log | 34.0 | 238 | 0.6399 | 0.815 | 0.4462 | 0.8565 | 0.815 | 0.7940 | 0.4234 | 0.0550 |
| No log | 35.0 | 245 | 0.6430 | 0.81 | 0.4505 | 1.0491 | 0.81 | 0.7855 | 0.4279 | 0.0629 |
| No log | 36.0 | 252 | 0.6440 | 0.815 | 0.4481 | 1.0288 | 0.815 | 0.7813 | 0.4132 | 0.0539 |
| No log | 37.0 | 259 | 0.6396 | 0.82 | 0.4493 | 0.9477 | 0.82 | 0.8125 | 0.4266 | 0.0525 |
| No log | 38.0 | 266 | 0.6410 | 0.815 | 0.4462 | 1.0462 | 0.815 | 0.7971 | 0.4157 | 0.0522 |
| No log | 39.0 | 273 | 0.6360 | 0.8 | 0.4399 | 0.9645 | 0.8000 | 0.7779 | 0.3974 | 0.0566 |
| No log | 40.0 | 280 | 0.6376 | 0.805 | 0.4412 | 0.8777 | 0.805 | 0.7772 | 0.4104 | 0.0544 |
| No log | 41.0 | 287 | 0.6411 | 0.795 | 0.4475 | 0.9240 | 0.795 | 0.7780 | 0.4062 | 0.0583 |
| No log | 42.0 | 294 | 0.6398 | 0.795 | 0.4509 | 0.9279 | 0.795 | 0.7650 | 0.4068 | 0.0577 |
| No log | 43.0 | 301 | 0.6430 | 0.79 | 0.4567 | 0.9279 | 0.79 | 0.7683 | 0.4073 | 0.0590 |
| No log | 44.0 | 308 | 0.6401 | 0.8 | 0.4495 | 0.9915 | 0.8000 | 0.7744 | 0.4200 | 0.0565 |
| No log | 45.0 | 315 | 0.6364 | 0.795 | 0.4448 | 0.9245 | 0.795 | 0.7729 | 0.4115 | 0.0568 |
| No log | 46.0 | 322 | 0.6391 | 0.79 | 0.4472 | 1.0060 | 0.79 | 0.7633 | 0.4044 | 0.0561 |
| No log | 47.0 | 329 | 0.6376 | 0.795 | 0.4470 | 0.9530 | 0.795 | 0.7693 | 0.3989 | 0.0578 |
| No log | 48.0 | 336 | 0.6383 | 0.8 | 0.4476 | 0.9992 | 0.8000 | 0.7804 | 0.4084 | 0.0579 |
| No log | 49.0 | 343 | 0.6353 | 0.8 | 0.4424 | 0.8500 | 0.8000 | 0.7756 | 0.4055 | 0.0546 |
| No log | 50.0 | 350 | 0.6381 | 0.795 | 0.4470 | 0.9931 | 0.795 | 0.7691 | 0.4170 | 0.0573 |
| No log | 51.0 | 357 | 0.6374 | 0.795 | 0.4477 | 0.9729 | 0.795 | 0.7630 | 0.4076 | 0.0563 |
| No log | 52.0 | 364 | 0.6377 | 0.8 | 0.4481 | 0.9846 | 0.8000 | 0.7759 | 0.4212 | 0.0555 |
| No log | 53.0 | 371 | 0.6378 | 0.795 | 0.4485 | 0.9379 | 0.795 | 0.7733 | 0.4052 | 0.0565 |
| No log | 54.0 | 378 | 0.6385 | 0.79 | 0.4477 | 0.9900 | 0.79 | 0.7684 | 0.4165 | 0.0571 |
| No log | 55.0 | 385 | 0.6371 | 0.81 | 0.4466 | 0.9178 | 0.81 | 0.7867 | 0.4149 | 0.0546 |
| No log | 56.0 | 392 | 0.6373 | 0.795 | 0.4460 | 0.9254 | 0.795 | 0.7692 | 0.4081 | 0.0568 |
| No log | 57.0 | 399 | 0.6376 | 0.79 | 0.4476 | 0.9194 | 0.79 | 0.7596 | 0.3996 | 0.0568 |
| No log | 58.0 | 406 | 0.6380 | 0.79 | 0.4477 | 0.9259 | 0.79 | 0.7619 | 0.4024 | 0.0575 |
| No log | 59.0 | 413 | 0.6377 | 0.8 | 0.4474 | 0.9100 | 0.8000 | 0.7806 | 0.4096 | 0.0569 |
| No log | 60.0 | 420 | 0.6378 | 0.8 | 0.4481 | 0.9189 | 0.8000 | 0.7806 | 0.4076 | 0.0566 |
| No log | 61.0 | 427 | 0.6378 | 0.795 | 0.4478 | 0.9860 | 0.795 | 0.7709 | 0.3994 | 0.0566 |
| No log | 62.0 | 434 | 0.6380 | 0.795 | 0.4480 | 0.9189 | 0.795 | 0.7692 | 0.4070 | 0.0564 |
| No log | 63.0 | 441 | 0.6381 | 0.8 | 0.4482 | 0.9195 | 0.8000 | 0.7806 | 0.4047 | 0.0568 |
| No log | 64.0 | 448 | 0.6379 | 0.8 | 0.4480 | 0.9223 | 0.8000 | 0.7806 | 0.4224 | 0.0563 |
| No log | 65.0 | 455 | 0.6382 | 0.8 | 0.4481 | 0.9196 | 0.8000 | 0.7806 | 0.4113 | 0.0569 |
| No log | 66.0 | 462 | 0.6381 | 0.8 | 0.4484 | 0.9200 | 0.8000 | 0.7806 | 0.4308 | 0.0566 |
| No log | 67.0 | 469 | 0.6379 | 0.8 | 0.4479 | 0.9198 | 0.8000 | 0.7806 | 0.4186 | 0.0566 |
| No log | 68.0 | 476 | 0.6378 | 0.8 | 0.4476 | 0.9167 | 0.8000 | 0.7806 | 0.4166 | 0.0569 |
| No log | 69.0 | 483 | 0.6380 | 0.8 | 0.4481 | 0.9179 | 0.8000 | 0.7806 | 0.4254 | 0.0566 |
| No log | 70.0 | 490 | 0.6384 | 0.795 | 0.4486 | 0.9225 | 0.795 | 0.7709 | 0.4158 | 0.0566 |
| No log | 71.0 | 497 | 0.6380 | 0.795 | 0.4476 | 0.9211 | 0.795 | 0.7709 | 0.4215 | 0.0568 |
| 0.5133 | 72.0 | 504 | 0.6381 | 0.795 | 0.4480 | 0.9232 | 0.795 | 0.7709 | 0.4151 | 0.0566 |
| 0.5133 | 73.0 | 511 | 0.6380 | 0.795 | 0.4479 | 0.9242 | 0.795 | 0.7709 | 0.4218 | 0.0564 |
| 0.5133 | 74.0 | 518 | 0.6380 | 0.795 | 0.4478 | 0.9231 | 0.795 | 0.7709 | 0.4151 | 0.0566 |
| 0.5133 | 75.0 | 525 | 0.6382 | 0.795 | 0.4484 | 0.9245 | 0.795 | 0.7709 | 0.4156 | 0.0565 |
| 0.5133 | 76.0 | 532 | 0.6382 | 0.795 | 0.4481 | 0.9216 | 0.795 | 0.7709 | 0.4153 | 0.0567 |
| 0.5133 | 77.0 | 539 | 0.6382 | 0.795 | 0.4481 | 0.9231 | 0.795 | 0.7709 | 0.4222 | 0.0567 |
| 0.5133 | 78.0 | 546 | 0.6382 | 0.795 | 0.4481 | 0.9210 | 0.795 | 0.7709 | 0.4220 | 0.0565 |
| 0.5133 | 79.0 | 553 | 0.6382 | 0.795 | 0.4480 | 0.9220 | 0.795 | 0.7709 | 0.4220 | 0.0565 |
| 0.5133 | 80.0 | 560 | 0.6384 | 0.795 | 0.4484 | 0.9220 | 0.795 | 0.7709 | 0.4224 | 0.0567 |
| 0.5133 | 81.0 | 567 | 0.6383 | 0.795 | 0.4483 | 0.9218 | 0.795 | 0.7709 | 0.4224 | 0.0567 |
| 0.5133 | 82.0 | 574 | 0.6382 | 0.795 | 0.4480 | 0.9220 | 0.795 | 0.7709 | 0.4221 | 0.0568 |
| 0.5133 | 83.0 | 581 | 0.6384 | 0.795 | 0.4484 | 0.9240 | 0.795 | 0.7709 | 0.4157 | 0.0566 |
| 0.5133 | 84.0 | 588 | 0.6384 | 0.795 | 0.4484 | 0.9262 | 0.795 | 0.7709 | 0.4224 | 0.0566 |
| 0.5133 | 85.0 | 595 | 0.6382 | 0.795 | 0.4481 | 0.9235 | 0.795 | 0.7709 | 0.4221 | 0.0566 |
| 0.5133 | 86.0 | 602 | 0.6384 | 0.795 | 0.4484 | 0.9236 | 0.795 | 0.7709 | 0.4225 | 0.0566 |
| 0.5133 | 87.0 | 609 | 0.6384 | 0.795 | 0.4484 | 0.9235 | 0.795 | 0.7709 | 0.4225 | 0.0567 |
| 0.5133 | 88.0 | 616 | 0.6384 | 0.795 | 0.4483 | 0.9250 | 0.795 | 0.7709 | 0.4224 | 0.0566 |
| 0.5133 | 89.0 | 623 | 0.6384 | 0.795 | 0.4483 | 0.9244 | 0.795 | 0.7709 | 0.4223 | 0.0567 |
| 0.5133 | 90.0 | 630 | 0.6384 | 0.795 | 0.4483 | 0.9251 | 0.795 | 0.7709 | 0.4224 | 0.0567 |
| 0.5133 | 91.0 | 637 | 0.6384 | 0.795 | 0.4484 | 0.9246 | 0.795 | 0.7709 | 0.4224 | 0.0567 |
| 0.5133 | 92.0 | 644 | 0.6384 | 0.795 | 0.4484 | 0.9256 | 0.795 | 0.7709 | 0.4224 | 0.0567 |
| 0.5133 | 93.0 | 651 | 0.6385 | 0.795 | 0.4484 | 0.9252 | 0.795 | 0.7709 | 0.4224 | 0.0567 |
| 0.5133 | 94.0 | 658 | 0.6384 | 0.795 | 0.4484 | 0.9245 | 0.795 | 0.7709 | 0.4223 | 0.0565 |
| 0.5133 | 95.0 | 665 | 0.6385 | 0.795 | 0.4484 | 0.9254 | 0.795 | 0.7709 | 0.4224 | 0.0567 |
| 0.5133 | 96.0 | 672 | 0.6384 | 0.795 | 0.4484 | 0.9242 | 0.795 | 0.7709 | 0.4225 | 0.0566 |
| 0.5133 | 97.0 | 679 | 0.6384 | 0.795 | 0.4484 | 0.9242 | 0.795 | 0.7709 | 0.4224 | 0.0567 |
| 0.5133 | 98.0 | 686 | 0.6385 | 0.795 | 0.4484 | 0.9249 | 0.795 | 0.7709 | 0.4224 | 0.0567 |
| 0.5133 | 99.0 | 693 | 0.6385 | 0.795 | 0.4484 | 0.9252 | 0.795 | 0.7709 | 0.4224 | 0.0566 |
| 0.5133 | 100.0 | 700 | 0.6385 | 0.795 | 0.4484 | 0.9250 | 0.795 | 0.7709 | 0.4225 | 0.0567 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
crcdng/a2c-AntBulletEnv-v0
|
crcdng
| 2023-07-18T21:29:32Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-18T21:22:58Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1859.70 +/- 599.36
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption and may differ in this repo):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (filename assumed)
checkpoint = load_from_hub("crcdng/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
aroot/eng-fra-tok_budget_random
|
aroot
| 2023-07-18T21:28:39Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-18T21:08:48Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-tok_budget_random
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-tok_budget_random
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1528
- Bleu: 32.1323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Ocean3/SuperMix
|
Ocean3
| 2023-07-18T21:26:15Z | 0 | 5 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"stable-diffusion-diffusers",
"safetensors",
"art",
"en",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-15T23:40:59Z |
---
language:
- en
thumbnail: "https://huggingface.co/Ocean3/SuperMix/resolve/main/img/img1.png"
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- safetensors
- art
license: creativeml-openrail-m
---
# 🍍 SuperMix

<div align="center">
<a href="https://huggingface.co/Ocean3/SuperMix/tree/main/1)%20Versions">Models</a> | <a href="./SuperMix#previews">Previews</a> | <a href="https://huggingface.co/Ocean3/SuperMix/tree/main/3)%20Alternate%20Versions">Alt Versions</a> | <a href="https://civitai.com/models/89213?modelVersionId=94946" target="_blank">CivitAI Page</a></div>
**SuperMix** is an anime-focused text-to-image diffusion model capable of bringing out semi-realistic tones through detailing, lighting, textures, and other aspects of the composition. At the same time, this merged model is very versatile in the range of styles, forms, and mediums you can generate depending on your chosen prompts and parameters. SuperMix is great with:
* Portraits
* Anime
* Semi-Realism
* Scenery
* Concept Art
* Detailed Textures
* Detailed Backgrounds
* Vehicles, Architecture, Food
* & More!
This mix started out as a spontaneous combination of various anime-focused models. I took note of some of the details the merge excelled at, then decided to create a mix highlighting those aspects and continued from there. After some iterations and branch tests, I decided this mix was decent enough to share with others as is, without going too far with variations.
I still consider myself fairly new to generated art in general, so if you see anything that should be corrected or improved upon, let me know 👌
I would love to see what people create with the outputs of this model; feel free to use the tag **#SuperMix** on various platforms if you decide to post anything!
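For those working with the `diffusers` library rather than a webUI, a minimal sketch is below. It assumes diffusers-format weights are available at the repo root; otherwise, load one of the checkpoints from the Versions folder into your UI of choice.
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumes diffusers-format weights exist in this repo; adjust if only .safetensors checkpoints are provided
pipe = StableDiffusionPipeline.from_pretrained("Ocean3/SuperMix", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("1girl, portrait, detailed background, masterpiece, best quality").images[0]
image.save("supermix_sample.png")
```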
<div align="center"><a href="https://civitai.com/models/89213?modelVersionId=94946" target="_blank">CivitAI Page</a></div>
<br><div align="center"><p style="font-size:90%; background-color:#f5f6ff; color:#173978;">Note</p></div>
<p style="font-size:90%;">SuperMix1 is an older rough-merged model mixed at the end of 2022 from various models known at the time. As such, this model and merge-components are fairly dated and may be harder to manage at times with current webUI updates etc. There are many great models available now with similar styles and flexibility that may be easier to use depending on your style preference. If this model receives any future updates, any new version will be geared at ironing out any prevalent issues in this version, removing any license limitations, and finetuning to a better standard.</p>
---
# Previews
<img src="https://huggingface.co/Ocean3/SuperMix/resolve/main/img/img2.png" title=previews>
Below are some preview images with various configurations and prompt styles, ranging from simple to more complex prompts and parameter ranges. SuperMix can be a very powerful model capable of many different styles, so don't be afraid to use these models the way you find best. You can view more examples over on the <a href="https://civitai.com/models/89213?modelVersionId=94946" target="_blank">CivitAI</a> pages as well.
<br>Click to expand each category.
---
<details><summary style="margin-top: -5%; margin-bottom: -5%; cursor: pointer;">Anime</summary>
<div align="center">
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Anime/a1.png" title="Aspiring Heights">
<figcaption><i>Aspiring Heights - hires upscale, img2img upscale, prompt via tarabm246</i></figcaption>
<small>
```
furry raccoon girl, 1girl, solo, multicolored eyes, raccoon ears, two-tone hair,
(high quality, best quality), body fur, animal nose, sunset, horizon, mountain edge,
long hair, gray coat, from behind, tail, upper body, snow, winter, smile
```
```
(worst quality, low quality:1.4), looking at viewer
```
```
Steps: 20, Sampler: Euler a, CFG scale: 5.5, Seed: 41866449, Size: 512x512,
Denoising strength: 0.58, Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Anime/a2.png" title="Cosmic Flowers">
<figcaption><i>Cosmic Flowers - hires upscale, img2img upscale, initially sourced prompt</i></figcaption>
<small>
```
extreme quality, cg, (bright colors:0.8), ultra-detailed, illustration, impasto,
painting, 1girl, large white jacket, long jacket, short legs, short, forest, mystery,
mysterious forest, girl investigator, tall boots, red flowers, starry sky, stars,
nebula, white hair, walking, walking through the forest, relaxed expression, night,
nebula sky, planets, ((red flowers)), solo, anime wallpaper, high quality wallpaper,
official wallpaper, masterpiece, best quality, 8k
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, (blurry), (cropped), (nsfw:1.6), (cleavage:1.3), ugly face
```
```
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 757843492, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.44, Hires upscale: 2,
Hires upscaler: R-ESRGAN 4x+ Anime6B, Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Anime/a3.png" title="">
<figcaption><i>Stars Align - hires upscale, img2img upscale, light touch-up, self-prompted</i></figcaption>
<small>
```
extreme quality, cg, detailed face+eyes, (bright colors:0.9), (light pastel
colors:1.4), photo of a curious girl, (ancient), (tan skin), fashion,
light dust, patio, (depth of field:0.76), (fog), medium hair, long hair,
white hair, masterpiece, 8k, tone mapping, hyper focus, white, blue eyes,
upper body:0.8), natural body, limited palette, (detailed hair:1.13),
dynamic angle, (pastel drawing:0.7), (black outlines), (pastel background),
soft lighting, (fox girl), solo, clarity, (by Antonio Maria Panni:1.6),
(raised eyebrows:0.8), hero attire, (plants, modern:1.2), colorful, bold,
vivid, (creative), (starry sky), (random:1.4)
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.4),
(worst quality:1.4), ugly, old, deformed, amateur drawing, odd, fat,
cell shading, lowres, bad anatomy, text, error, cropped, low quality,
normal quality, jpeg artifacts, watermark, username, blurry, out of focus,
watercolor, (nsfw:1.6), (cleavage:1.6)
```
```
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 8, Seed: 638426066, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.58,
Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B,
Discard penultimate sigma: True, Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Anime/a4.png" title="">
<figcaption><i>Untitled - hires upscale, self-prompted</i></figcaption>
<small>
```
extreme quality, cg, detailed face+eyes, (bright colors:0.9), a cute girl,
(dark skin), colored outlines, curly hair, red hair, orange, masterpiece,
8k, (tone mapping, hyper focus:0.7), aqua, (random:1.4)
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.4),
(worst quality:1.4), ugly, old, deformed, amateur drawing, odd,
fat, cell shading, lowres, bad anatomy, text, error, cropped,
low quality, normal quality, jpeg artifacts, watermark, username,
blurry, out of focus, watercolor, (nsfw:1.6), (cleavage:1.6)
```
```
Steps: 20, Sampler: Euler a, CFG scale: 8, Seed: 3599973939,
Size: 512x768, Model hash: 1504f30200, Model: SuperMix1,
Denoising strength: 0.58, Hires upscale: 2, Hires upscaler:
R-ESRGAN 4x+ Anime6B, Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Anime/a5.png" title="">
<figcaption><i>Untitled - hires upscale, img2img upscale, initially sourced prompt</i></figcaption>
<small>
```
(masterpiece:1.1), (highest quality:1.1), (HDR:1.3), absurdres, highres,
best quality, 1girl, victorian, outdoors, bush, foliage, scenery, dusk,
colorful clouds, dark, stars, reflection, (iridescent:1.5), meteor,
multicolored hair, :3, full body, swirling clouds, arms out stretched,
(from behind:1.1), glowing hair, silhouette, arms up, silver dress, conductor
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.4),
(worst quality:1.4), ugly, old, deformed, amateur drawing, odd, fat, cell shading,
lowres, bad anatomy, text, error, cropped, low quality, normal quality,
jpeg artifacts, watermark, username, blurry, out of focus, watercolor,
(nsfw:1.4), (cleavage:1.4)
```
```
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3071813954, Size: 512x640,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.4,
Hires upscale: 2, Hires upscaler: 4x-UltraSharp, Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Anime/a6.png" title="">
<figcaption><i>Untitled - method, hires upscale, self-prompted</i></figcaption>
<small>
```
a photo of a cute girl in an utopian city, brown hair, short hair, brown eyes,
messy hair, tan skin, (detailed texture), picturesque, day, dappled sunlight,
outdoors, masterpiece, 8k, (tone mapping, hyper focus:0.5), limited palette,
serious, (varied depth of field:0.8), complimentary colors, (wizard),
wizard robes,
wizard hat, magic, purple, (cat girl)
```
```
ugly, old, deformed, amateur drawing, odd, fat, cell shading, cel shading, lowres,
bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits,
cropped, worst quality, low quality, normal quality, jpeg artifacts, blurry,
out of focus, watercolor, (worst quality, low quality:1.4), blurry, earmuffs, text,
lowres, error, bad anatomy, bad hands, missing fingers, extra digit, fewer digits,
(cropped:1.2), normal quality, watermark, username, (signature:1.4), (text), (author),
deformed, amateur drawing, long neck, extra fingers, by bad-artist, missing fingers,
image sample, jpeg artifacts, gif artifacts, wallpaper forced, lossy-lossless,
lossless-lossy, corrupted file, duplicate, redrawn, screenshot, game screenshot,
bad art, amateur drawing, odd, ((merged limbs)), ((conjoined limbs)),
(poorly drawn:1.3), poorly drawn hands, poorly drawn face, deformities, conjoined,
stretched torso, (heterochromia), (disproportioned), bad face, (bad details), sloppy,
sitting, (tanlines), (staff:1.5), (wand:1.5), (weapon:1.5)
```
```
Steps: 18, Sampler: DPM++ 2M Karras, CFG scale: 9, Seed: 2879140632, Size: 512x576,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.58,
Hires resize: 832x1024, Hires steps: 18, Hires upscaler: R-ESRGAN 4x+ Anime6B,
Discard penultimate sigma: True, Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Anime/a7.png" title="">
<figcaption><i>Untitled - method, hires upscale, initially sourced prompt</i></figcaption>
<small>
```
girl in jungle, epic, intricate, smirk, from above, muscular, standing,
(thunder rain storm, aura:1.1), blonde hair, tiger ears, messy hair,
slit pupils, red eyes, black jacket, high collar
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, (blurry), (cropped), (nsfw:1.6), (cleavage:1.3), ugly face
```
```
Steps: 20, Sampler: Euler a, CFG scale: 10, Seed: 2044019025, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.44, Hires upscale: 2,
Hires upscaler: R-ESRGAN 4x+ Anime6B, Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Anime/a8.png" title="">
<figcaption><i>Untitled - hires upscale, img2img upscale, initially sourced prompt</i></figcaption>
<small>
```
detailed background, superb, 1girl, long hair light purple hair, curly hair, cute,
eyelashes, sitting, white dress, pink ribbon around her waist, pink flats,
white thighhighs, beautiful 8k wallpaper, outdoors, nature, tree, bush, flower,
rustic, extremely detailed, intricate
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.4), (worst quality:1.4),
ugly, old, deformed, amateur drawing, odd, fat, cell shading, lowres, bad anatomy,
text, error, cropped, low quality, normal quality, jpeg artifacts, watermark,
username, blurry, out of focus, watercolor, (nsfw, cleavage:1.3)
```
```
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 2568503293, Size: 512x704,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.39, Hires upscale: 2,
Hires upscaler: 4x-UltraSharp, Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Anime/a9.png" title="">
<figcaption><i>Untitled - hires upscale, img2img upscale, self-prompted</i></figcaption>
<small>
```
(masterpiece:1.0), (highest quality:1.12), (HDR:1.0), 1girl, solo, flat colors,
colorful, animal ear fluff, solo, plants, (coral), gradient background, smooth lighting,
(splash art:0.8), portrait, (upper body:0.85), (random:1.4)
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, (blurry), (cropped), (cleavage)
```
```
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3693442341, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.44,
Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B, Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Anime/a10.png" title="">
<figcaption><i>Untitled - hires upscale, img2img upscale, LoRa, initially sourced prompt</i></figcaption>
<small>
```
no metadata
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Anime/a11.png" title="">
<figcaption><i>Untitled - hires upscale, majority self-prompted</i></figcaption>
<small>
```
photo of cute army girl, detailed face+eyes, tactical clothing, (white fox girl,
animal ear fluff, (fluffy hair), medium hair, attractive, yellow eyes, picturesque,
sporty, (dark skin:1.2), (tactical mask), upper body, dynamic angle, mad,
by Jeremy Lipking, by Antonio J Manzanedo, (by Alphonse Mucha:0.5), masterpiece, (pov),
metal, foggy snowy jungle, varied depth of field, captain
```
```
censorship, ugly, old, deformed, amateur drawing, odd, fat, tall, anime, cell shading,
lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits,
cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark,
username, blurry, out of focus, cell shading, anime, watercolor, (gun:1.5), (rifle:1.5)
```
```
Steps: 20, Sampler: DDIM, CFG scale: 16, Seed: 1312696386, Size: 512x576,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.58, Hires resize: 768x896,
Hires steps: 18, Hires upscaler: R-ESRGAN 4x+ Anime6B, Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Anime/a12.png" title="">
<figcaption><i>Untitled - hires upscale, self-prompted</i></figcaption>
<small>
```
(masterpiece:1.0), (highest quality:1.12), (HDR:1.0), 1girl, solo, flat colors, colorful,
animal ear fluff, solo, plants, (tan), gradient background, smooth lighting,
(splash art:0.8), portrait, (upper body:0.85), (random:1.4)
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, (blurry), (cropped)
```
```
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 149160650, Size: 512x640,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.44, Hires upscale: 2,
Hires upscaler: R-ESRGAN 4x+ Anime6B, Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Anime/a13.png" title="">
<figcaption><i>Untitled - hires upscale, initially sourced prompt</i></figcaption>
<small>
```
(masterpiece:1.1), (highest quality:1.1), (HDR:1.0), 1girl, yellow eyes, baseball cap,
lue hair, closed mouth, (shoulder armor:1.2), black background, hoop earrings, jewelry,
looking at viewer, shirt, long hair, (simple background:0.8), (abstract background:1.2),
solo, upper body, purple shirt, gold trim, (luxury:0.8)
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, (blurry)
```
```
Steps: 25, Sampler: Euler a, CFG scale: 8, Seed: 1100956050, Size: 512x640,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.36, Hires upscale: 2,
Hires upscaler: 4x-UltraSharp, Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Anime/a14.png" title="">
<figcaption><i>Untitled - hires upscale, self-prompted</i></figcaption>
<small>
```
(masterpiece:1.1), (highest quality:1.1), (HDR:1.0), high quality, imp girl,
special ambience, (flat color:0.8), (limited palette), high contrast, cg unity wallpaper,
hyper focus, tone mapping, depth mapping, above clouds, starry sky, plants, tropic,
1girl, golden eyes, looking away, portrait, parted lips, (ethereal), indigo skin,
shorts, wave, pretty face, fantasy, (magical:0.7)
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, (blurry)
```
```
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 3428515668, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.44, Hires upscale: 2,
Hires upscaler: R-ESRGAN 4x+ Anime6B, Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Anime/a15.png" title="">
<figcaption><i>Untitled - hires upscale, self-prompted</i></figcaption>
<small>
```
extreme quality, cg, detailed face+eyes, (colorful), (light pastel colors:1.4),
photo of a righteous girl, (detailed background), (tan skin:1.2), fashion,
light dust, (field), long hair, hair up, silver hair, masterpiece, 8k, tone mapping,
hyper focus, yellow, hawk eyes, (upper body:0.8), natural body, limited palette,
(detailed hair:1.13), (Ufotable aesthetic:1.3), (pastel drawing:0.7), (black outlines),
(pastel background), soft lighting, (cat girl:1.3), solo, clarity, (by Vladimir Makovsky:1.3),
(by Sam Haskins:1.3)
```
```
(hands), (long arms), nsfw, censorship, ugly, old, deformed, amateur drawing, odd, fat,
cell shading, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit,
fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature,
watermark, username, blurry, out of focus, watercolor, (worst quality, low quality:1.4),
heterochromia, asymmetrical eyes, tears, (tanlines:1.3), (denim), (brush), (vibrant),
(hdr), (shiny skin), (expressionless:0.76), (bold colors), (ufo:1.5)
```
```
Steps: 20, Sampler: Euler a, CFG scale: 8.5, Seed: 2964409537, Size: 512x704,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.39, Hires upscale: 2,
Hires upscaler: R-ESRGAN 4x+ Anime6B, Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Anime/a16.png" title="">
<figcaption><i>Untitled - hires upscale, initially sourced prompt</i></figcaption>
<small>
```
1girl, night city, rain, coat, hands in pockets, white hair, long hair, (fox ears),
fluff, (dark skin:1.2), full lips, pretty face, anime illustration, purple eyes
```
```
(worst quality:1.6), (low quality:1.6), EasyNegative
```
```
Steps: 29, Sampler: Euler a, CFG scale: 7, Seed: 2865357824, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.4, Clip skip: 2,
Hires upscale: 2, Hires upscaler: 4x-UltraSharp, Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Anime/a17.png" title="">
<figcaption><i>Untitled - hires upscale, self-prompted</i></figcaption>
<small>
```
realistic photo of an anime girl, (outdoors), detailed face+eyes, (detailed texture),
picturesque, day, dappled sunlight, attractive, full lips, short hair, wavy hair, parted hair,
parted bangs, forehead, hair intakes, blonde hair, hawk eyes, white eyes, masterpiece,
varied depth of field, limited palette, (cute), bandana, landscape, (tan skin),
bracelet, orange shirt, sleeveless, whisker markings, (varied depth of field:0.8), looking,
orange, ambient lighting
```
```
nsfw, censored, ugly, old, deformed, amateur drawing, odd, fat, tall, cel shading,
lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits,
cropped, worst quality, low quality, normal quality, jpeg artifacts, blurry, out of focus,
watercolor, (worst quality, low quality:1.4), blurry, earmuffs, text, lowres, error,
bad anatomy, bad hands, missing fingers, extra digit, fewer digits, (cropped:1.2),
normal quality, watermark, username, (signature:1.4), (text), (author), deformed,
amateur drawing, long neck, extra fingers, by bad-artist, missing fingers, image sample,
jpeg artifacts, gif artifacts, wallpaper forced, lossy-lossless, lossless-lossy,
corrupted file, duplicate, redrawn, screenshot, game screenshot, bad art, amateur drawing,
odd, ((merged limbs)), ((conjoined limbs)), (poorly drawn:1.3), poorly drawn hands,
poorly drawn face, deformities, conjoined, stretched torso, (heterochromia),
(disproportioned), bad face, (bad details), sloppy, anime, facepaint, (wings), (tail),
(animal), (cleavage:1.3)
```
```
Steps: 20, Sampler: DDIM, CFG scale: 13, Seed: 2210442777, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.58, Hires upscale: 1.8,
Hires upscaler: 4x-UltraSharp, Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Anime/a18.png" title="">
<figcaption><i>Untitled - hires upscale, self-prompted</i></figcaption>
<small>
```
(masterpiece:1.0), (highest quality:1.12), (HDR:1.0), a girl, illustration, cover art,
(black:1.2), (portrait), coral background, splash, (animal ear fluff:0.7)
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, (blurry), (nsfw, cleavage:1.3)
```
```
Steps: 25, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 826826098, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.36, Clip skip: 2,
Hires upscale: 2, Hires upscaler: 4x-UltraSharp, Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Anime/a19.png" title="">
<figcaption><i>Untitled - hires upscale, initially sourced prompt</i></figcaption>
<small>
```
extreme quality, cg, detailed face+eyes, (bright colors:0.8), (anime girl), 1girl,
pink hair, hair bobbles, dark theme, soothing tones, muted colors, elf ears,
high contrast, (natural skin texture, hyperrealism, soft light, sharp), exposure blend,
medium shot, bokeh, (hdr:1.3), high contrast, (cinematic,teal and red:0.85),
(muted colors, dim colors, soothing tones:1.3), low saturation, (hyperdetailed:1.2),
(noir:0.4), two horns, dress, all white eyes, masterpiece, top tier, extravagant, 8k
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, (blurry), (cropped), (nsfw:1.6), (cleavage:1.3), , (blue eyes:1.2), ugly face
```
```
Steps: 20, Sampler: Euler a, CFG scale: 9, Seed: 3969600209, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.39,
Hires upscale: 2, Hires steps: 18, Hires upscaler: R-ESRGAN 4x+ Anime6B,
Discard penultimate sigma: True, Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Anime/a20.png" title="">
<figcaption><i>Untitled - hires upscale, sourced prompt</i></figcaption>
<small>
```
Girl with multicoloured hair, black hair, red hair, heavy rain, bad weather,
black clouds, moonlight, sad, rain drops, flower field, (masterpiece:1.4),
(highres), wet hair, looking_at_viewer, eye_contact, (extremely detailed background:1.2),
hair_flower
```
```
(worst quality, low quality:1.4), bad anatomy, extra fingers, extra hand,
crooked fingers, badly sized fingers, cropped
```
```
Steps: 35, Sampler: Euler a, CFG scale: 7, Seed: 2939377891, Size: 768x512,
Denoising strength: 0.54, Clip skip: 2, Hires upscale: 2, Hires steps: 18,
Hires upscaler: R-ESRGAN 4x+ Anime6B
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Anime/a21.png" title="">
<figcaption><i>Untitled - hires upscale, self-prompted</i></figcaption>
<small>
```
extreme quality, cg, detailed face+eyes, (bright colors), (anime girl),
1girl, impact, (winter), blizzard, time stop, sci fi, (tribal cat), eskimo,
animal ear fluff, fur trim, winter hat, angry, clouds, tan skin, (feather headdress),
cloth, masterpiece, top tier, extravagant, 8k, unity wallpaper, unreal engine 5,
ray tracing, 8k, cinematic, depth of field, octane render, intricate details, elegant,
one mapping, hyper focus, parted lips, (violet), dappled sunlight, (snowing), nature,
winter coat, upper body, (morning glow), lighthouse, (gold eyes), horizon
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, (blurry), (cropped), (nsfw:1.6), (cleavage:1.3), , (blue eyes:1.2), ugly face
```
```
Steps: 20, Sampler: Euler a, CFG scale: 7.6, Seed: 168479386, Size: 768x512,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.58, Hires upscale: 2,
Hires steps: 18, Hires upscaler: R-ESRGAN 4x+ Anime6B, Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
</div></details>
---
<details><summary style="margin-top: -5%; margin-bottom: -5%; cursor: pointer;">General</summary>
<div align="center">
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/General/g1.png" title="">
<figcaption><i>Untitled - hires upscale, img2img upscale, self-prompted</i></figcaption>
<small>
```
no metadata
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/General/g2.png" title="">
<figcaption><i>Untitled - hires upscale, initially sourced prompt</i></figcaption>
<small>
```
(masterpiece:1.1), (highest quality:1.1), abstract 1998 african white hair hiphop girl
by sachin teng x supreme, attractive, stylish, designer, coral, asymmetrical,
geometric shapes, graffiti, street art
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, text, (blurry)
```
```
Steps: 25, Sampler: Euler a, CFG scale: 7, Seed: 1031384908, Size: 512x704,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.36, Hires upscale: 2,
Hires upscaler: 4x-UltraSharp, LoRA: Contrast_LowRA(0.15), Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/General/g3.png" title="">
<figcaption><i>Untitled - hires upscale, sourced prompt</i></figcaption>
<small>
```
olpntng style, Closeup of a black leopard, ferns, surrealistic, dreamlike,
intricate details, pastel colors, dramatic intricate environment, butterfly,
lumen reflections, highly detailed digital painting, smooth, sharp focus,
Esao Andrews – Ernst Haeckel, digital art, oil painting, heavy strokes, paint dripping,
8k, fur texture, oil painting, heavy strokes, paint dripping
```
```
blurry, (out of frame), (signature), (signatures), watermark, out of focus, poorly drawing,
bad drawing, blur haze, cropped, cropping, extra features, extra rows of teeth, deformities,
weird eyes
```
```
Steps: 25, Sampler: DPM++ 2M Karras, CFG scale: 9, Seed: 1900795000, Size: 512x768,
Denoising strength: 0.58, Hires upscale: 2, Hires steps: 18, Hires upscaler: R-ESRGAN 4x+
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/General/g4.png" title="">
<figcaption><i>Untitled - hires upscale, img2img upscale, initially sourced prompt</i></figcaption>
<small>
```
no metadata
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/General/g5.png" title="">
<figcaption><i>Untitled - older hires upscale, img2img upscale, sourced prompt</i></figcaption>
<small>
```
an epic fantastic realism comic book style portrait painting of a japanese robotic geisha
with USSR tattoos and decals, apex legends, octane render, intricate detail, 4 k hd,
unreal engine 5, ex machina, irobot, gerald brom, photorealistic, modelshoot style, kuvshinov,
nvinkpunk
```
```
disfigured, kitsch, ugly, oversaturated, grain, low-res, Deformed, blurry, bad anatomy,
disfigured, poorly drawn face, mutation, mutated, extra limb, ugly, poorly drawn hands,
missing limb, blurry, floating limbs, disconnected limbs, malformed hands, blur, out of focus,
long neck, long body, ugly, disgusting, poorly drawn, childish, mutilated, , mangled, old,
surreal, text
```
```
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3086446090, Size: 640x960,
Denoising strength: 0.58, First pass size: 0x0
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/General/g6.png" title="">
<figcaption><i>Untitled - hires upscale, majority sourced prompt</i></figcaption>
<small>
```
(masterpiece:1.1), (highest quality:1.1), (HDR:1.0), closeup of a rusted android in a
corner of a basement, looking down, desolated, sad, sitting, concept art, character design,
Unreal engine, vray, volumetric fog, sunbeam, insanely detailed, weathered, corroded,
oxidized, rusted, decayed, flaking paint, vignette
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, (blurry), (nsfw, cleavage:1.3)
```
```
Steps: 25, Sampler: Euler a, CFG scale: 7, Seed: 619942128, Size: 512x640,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.36, Hires upscale: 2,
Hires upscaler: 4x-UltraSharp, Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/General/g7.png" title="">
<figcaption><i>Untitled - hires upscale, LoRa, majority sourced prompt</i></figcaption>
<small>
```
(masterpiece:1.0), (highest quality:1.12), (HDR:1.0), 1boy , (close-up:1.5),
look at side, beard, blue, suit jacket, card background, (white background:1.5),
[(background:1.4)::5], illustration, colorfantasystyle, SMM
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, (blurry), (cropped), (nsfw:1.6), (cleavage:1.3)
```
```
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 2919182440, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.44, Hires upscale: 2,
Hires upscaler: R-ESRGAN 4x+ Anime6B, LoRa: Cool and Stylish(0.25), Color Fantasy(0.4),
Add_detail(0.15), Discard penultimate sigma: True, Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/General/g8.png" title="">
<figcaption><i>Untitled - older hires upscale, initially sourced prompt</i></figcaption>
<small>
```
photorealistic, ,best quality,masterpiece,highly detailed,ultra-detailed,a futuristic
muscle car in a cyberpunk city at night with neon lights and rain. by josan gonzalez
splash art graphic design color scheme minimalism ultra realistic unreal engine 5 hd
8k resolution trending on deviantart pinterest dslr highly rendered 4K imax
hyperrealistic full colour cinematic, metal, top tier, extravagant, 8k, unity wallpaper,
unreal engine 5, (ray tracing), 8k, depth of field, octane render, intricate details,
elegant, tone mapping, hyper focus, shine, reflective surface
```
```
(tatoo, things on face :1.2),(watermark:1.2),(bored photo:1.2),no color, blurry, ugly,
poor quality, deformed hands, deformed face, deformed body, extra limbs, low quality,
normal quality, text, errors, bad anatomy, mutation, deformed fingers, missing fingers,
low res, bad hands, cropped, deformed hands, (deformed legs:1.2), (deformed arms:1.2),
(multiple arms:1.2), (signature:1.2),bad_bad,, (long body :1.3), bad anatomy , liquid body,
malformed, mutated,anatomical nonsense ,bad proportions,
uncoordinated body, unnatural body, disfigured, ugly, gross proportions ,mutation,
disfigured, deformed, (mutation, poorlydrawn :1.2), (nsfw:1.2) ,lowres,bad anatomy,
bad hands,text,error,missing fingers,extra digit,fewer digits,cropped,worst quality,
low quality,normal quality,jpeg artifacts,signature,watermark,username,blurry,missing arms,
long neck,Humpbacked
```
```
Steps: 40, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 637736715, Size: 960x640,
Denoising strength: 0.58, First pass size: 0x0
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/General/g9.png" title="">
<figcaption><i>Untitled - older hires upscale, textual inversion, self-prompted</i></figcaption>
<small>
```
realistic photo of a (lotus bloom), profile picture, icon, logo, simple background,
extreme quality, masterpiece, 8k, depth of field, intricate details, __artist*__
```
```
censorship, ugly, old, deformed, amateur drawing, odd, fat, lowres, bad anatomy,
bad hands, error, missing fingers, extra digit, fewer digits, cropped, worst quality,
low quality, normal quality, jpeg artifacts, signature, watermark, username, ((blurry)),
((out of focus)), watercolor, (worst quality, low quality:1.4), (seeds), grain, (hand)
```
```
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7.6, Seed: 166478748, Size: 1024x1024,
Denoising strength: 0.58, First pass size: 0x0
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/General/g10.png" title="">
<figcaption><i>Untitled - older hires upscale, majority sourced prompt</i></figcaption>
<small>
```
((a potion bottle filled with magical elements)), magical, (workshop background with
lots of other bottles and tools:1.1), intricate detail, hyper detailed, ultra realistic,
sharp focus, octane render, volumetric, ray tracing, artstation trending, cgsociety,
sense of awe, mystical, 4k, High Saturation Clarity Contrast, deep levels, sharp, retouched,
color graded, top tier, extravagant, 8k, unity wallpaper, unreal engine 5, ray tracing, 8k,
octane render, intricate details, elegant, tone mapping, hyper focus, close up,
varied depth of field
```
```
3d, digital art, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit,
fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature,
watermark, username, blurry, artist name, veil, scales, comic panels, gore, blood,
black and white, nsfw, pattern, patterns
```
```
Steps: 20, Sampler: DDIM, CFG scale: 10.5, Seed: 3209935207, Size: 640x960,
Denoising strength: 0.58, First pass size: 0x0
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/General/g11.png" title="">
<figcaption><i>Untitled - hires upscale, LoRa, initially sourced prompt</i></figcaption>
<small>
```
photorealistic, best quality, masterpiece, highly detailed, ultra-detailed, a
futuristic sports car in a cyberpunk city at night with neon lights and rain.
by josan gonzalez splash art graphic design color scheme minimalism ultra realistic
unreal engine 5 hd 8k resolution trending on deviantart pinterest dslr highly rendered
4K imax hyperrealistic full colour cinematic, metal, top tier, extravagant, 8k,
unity wallpaper, unreal engine 5, ray tracing, 8k, depth of field, octane render,
intricate details, elegant, tone mapping, hyper focus, sheen, nijimecha, SMM, fantasy00d
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, text, (blurry), (nsfw, cleavage:1.3)
```
```
Steps: 25, Sampler: Euler a, CFG scale: 7, Seed: 4010446135, Size: 640x512,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.36,
Hires upscale: 2, Hires upscaler: 4x-UltraSharp, LoRa: NijiMecha(0.5),
Cool and Stylish(0.35), Add_detail(0.25), fantasy00d(0.15),
Discard penultimate sigma: True, Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/General/g12.png" title="">
<figcaption><i>Untitled - hires upscale, self-prompted</i></figcaption>
<small>
```
no metadata
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/General/g13.png" title="">
<figcaption><i>Untitled - hires upscale, majority sourced prompt</i></figcaption>
<small>
```
(masterpiece:1.1), (highest quality:1.1), (HDR:1.0), Cake, tiramisu, flowers, fruit,
cream, intricate detail, dark background, HD Photography
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, text, (blurry), (nsfw, cleavage:1.3)
```
```
Steps: 25, Sampler: Euler a, CFG scale: 7, Seed: 3909418144, Size: 512x640,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.36,
Hires upscale: 2, Hires upscaler: 4x-UltraSharp, Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/General/g14.png" title="">
<figcaption><i>Untitled - hires upscale, LoRa, majority sourced prompt</i></figcaption>
<small>
```
(masterpiece:1.1), (highest quality:1.1), (HDR:1.0), (ultra-detailed), cupcake,
rainbow sprinkles, photograph, decorated, cherry-on-top, pink chocolate drizzle,
food photography
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, text, (blurry)
```
```
Steps: 25, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 279206068, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.36, Clip skip: 2,
Hires upscale: 2, Hires upscaler: 4x-UltraSharp, LoRa: Contrast_LowRA(0.15),
Discard penultimate sigma: True, Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/General/g15.png" title="">
<figcaption><i>Untitled - hires upscale, self-prompted</i></figcaption>
<small>
```
extreme quality, cg, detailed face+eyes, (bright colors), anime man, barbarian, fit,
short beard, glowing eyes, perpetual, impact, gladiator glory, throne, time stop,
space age, (powerful), (holy halo), joyful, shape background, warrior, clouds,
(fantasy:0.8), tan skin, helm, cape, aura, glass walkway, upper body, metal,
masterpiece, top tier, extravagant, 8k, unity wallpaper, unreal engine 5, ray tracing,
8k, cinematic, depth of field, octane render, intricate details, elegant, tone mapping,
hyper focus, close up, upper body, (blue), dappled sunlight, small gold particles,
short hair
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
censorship, ugly, old, deformed, amateur drawing, odd, fat, lowres, bad anatomy, bad hands,
text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality,
normal quality, jpeg artifacts, signature, watermark, username, ((blurry)), ((out of focus)),
watercolor, (worst quality, low quality:1.4), blurry, text, (heterochromia:1.3), (feminine),
(shirtless:1.3)
```
```
Steps: 28, Sampler: Euler a, CFG scale: 7.6, Seed: 2144691385, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.36, Hires upscale: 2,
Hires upscaler: R-ESRGAN 4x+ Anime6B, Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/General/g16.png" title="">
<figcaption><i>Untitled - hires upscale, LoRa, initially sourced prompt</i></figcaption>
<small>
```
best quality,masterpiece,highly detailed,ultra-detailed, RAW, analog style,
( 1 futuristic sports car no humans:1.2), high detailed skin, skin details, sharp focus,
volumetric fog, 8k uhd, dslr, high quality, film grain, Fujifilm XT3 metal, top tier,
extravagant, 8k, unity wallpaper, unreal engine 5, ray tracing, 8k, depth of field,
octane render, intricate details, elegant, tone mapping, hyper focus, shine, 111cine8matic55
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, signature, copyright, (blurry), (cropped), (nsfw:1.6), (cleavage:1.3)
```
```
Steps: 40, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 694693295, Size: 768x448,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.58, Hires upscale: 2,
Hires upscaler: R-ESRGAN 4x+ Anime6B, LoRa: CinematicStyle(0.5), Add_detail(0.6),
Discard penultimate sigma: True, Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
</div></details>
---
<details><summary style="margin-top: -5%; margin-bottom: -5%; cursor: pointer;">LoRa</summary>
<div align="center">
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/LoRa/l1.png" title="Unbreakable">
<figcaption><i>Unbreakable - hires upscale, img2img upscale, LoRa, initially sourced prompt</i></figcaption>
<small>
```
extreme quality, cg, detailed face, (bright colors:1.0), (anime), 1girl, (fox ears,
animal ear fluff, (fluffy hair), white hair, medium hair, gold lens sunglasses,
(sporty:0.8):1.2), (dark skin:1.2), solo, floating hair, looking at viewer, cute serious,
smirk, glowing, animated, jacket, glitch, cinematic lighting, strong contrast, high level
of detail, (flat color:0.6), masterpiece, best quality, 8k, white background, broken glass,
explosion), tactical clothing, wearing sunglasses, bj_Fault art, SMM
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, (blurry), (cropped), (nsfw:1.6), (cleavage:1.3), (fire:1.3)
```
```
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 907543725, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.44, Clip skip: 2,
Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B, LoRa: Cool and Stylish(0.4),
Fault Art(0.55), FilmVelvia2(-0.15), Add_detail(0.25), Niji Default Style_v2(0.15),
Discard penultimate sigma: True, Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/LoRa/l2.png" title="">
<figcaption><i>Untitled - hires upscale, LoRa, initially sourced prompt</i></figcaption>
<small>
```
masterpiece, best quality, 1girl, closed eyes, upper body, splashing, abstract, psychedelic,
neon, (honeycomb pattern), (creative:1.3), sy3, SMM, fantasy00d
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, (blurry), (nsfw, cleavage:1.3)
```
```
Steps: 25, Sampler: Euler a, CFG scale: 7, Seed: 2160912965, Size: 512x640,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.36, Hires upscale: 2,
Hires upscaler: 4x-UltraSharp, LoRa: Bubble Drip(0.45), Cool and Stylish(0.45),
Add_detail(0.15), fantasy00d(0.15), Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/LoRa/l3.png" title="">
<figcaption><i>Untitled - hires upscale, LoRa, self-prompted</i></figcaption>
<small>
```
(masterpiece:1.1), (highest quality:1.1), (HDR:1.0), detailed face+eyes, (1girl), solo,
wearing tribal headdress, tribal aesthetic, ultra-detailed, highres, absurdres, (hair flaps),
(gamma:1.3), (creative:1.3), negative space, starlit path, long hair, (explosion wave:1.2),
sound barrier, time stop, extreme quality, cg unity wallpaper, anime, (marroon palette), SMM,
bj_fault art
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, (blurry), (cropped), (nsfw:1.6), (cleavage:1.3), center line, split,
vertical line, (fire)
```
```
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 2504153053, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.44, Hires upscale: 2,
Hires upscaler: R-ESRGAN 4x+ Anime6B, LoRa: Cool and Stylish(0.65), Fault Art(0.5),
FilmVelvia2(0.15), Add_detail(0.15), Niji Default Style_v2(0.2),
Discard penultimate sigma: True, Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/LoRa/l4.png" title="">
<figcaption><i>Untitled - hires upscale, LoRa, self-prompted</i></figcaption>
<small>
```
(masterpiece:1.1), (highest quality:1.1), (HDR:1.0), a cute kitten, animal, fluffy, solo,
(adorable), natural lighting, teal and yellow, (expressive cartoon), expressive face,
(synthwave:1.2)
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, (blurry), (cropped), (nsfw:1.6), (cleavage:1.3)
```
```
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 3312163897, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.44, Hires upscale: 2,
Hires upscaler: R-ESRGAN 4x+ Anime6B, LoRa: FilmVelvia2(-0.25), Add_detail(-0.05),
Discard penultimate sigma: True, Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/LoRa/l5.png" title="">
<figcaption><i>Untitled - hires upscale, LoRa, self-prompted</i></figcaption>
<small>
```
extreme quality, cg, detailed face+eyes, (bright colors), (anime), 1girl, impact, (winter),
blizzard, time stop, sci fi, (tribal cat), (eskimo), animal ear fluff, fur trim, clouds,
tan skin, (feather headdress), masterpiece, top tier, extravagant, 8k, unity wallpaper,
unreal engine 5, ray tracing, 8k, cinematic, varied depth of field, octane render,
elegant, tone mapping, hyper focus, parted lips, (indigo), dappled sunlight, (snowing),
nature, winter coat, upper body, (morning glow), lighthouse, gold eyes, horizon,
picturesque scenery, mountain, forest, looking at viewer, (tundra), SMM, bj_fault art
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, (blurry), (cropped), (nsfw:1.6), (cleavage:1.3)
```
```
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1921118488, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.44, Hires upscale: 2,
Hires upscaler: R-ESRGAN 4x+ Anime6B, LoRa: Cool and Stylish_SMM(0.25), Fault Art(0.4),
Add_detail(0.15), FilmVelvia2(0.1), Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/LoRa/l6.png" title="">
<figcaption><i>Untitled - hires upscale, LoRa, majority sourced prompt</i></figcaption>
<small>
```
(masterpiece:1.1), (highest quality:1.1), 1girl, mecha, robot, armor, bodysuit,
mechanical arms, mysterious expression, magical, magical effects like sparkles or energy,
flowing robes, mystical background, rim lighting, side lighting, cinematic light,
ultra high res, 8k uhd, film grain, best shadow, delicate, RAW, light particles,
detailed skin texture, detailed cloth texture, beautiful detailed face, intricate details,
ultra detailed, mecha musume, mechanical arms, headgear, bodysuit, (plants:1.3), gold,
luxury, (purple), (looking at viewer), nijimecha, SMM, fantasy00d
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, text, (blurry), (nsfw, cleavage:1.3)
```
```
Steps: 25, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3882651620, Size: 512x640,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.36, Clip skip: 2,
Hires upscale: 2, Hires upscaler: 4x-UltraSharp, LoRa: NijiMecha(0.65),
Cool and Stylish_SMM(0.35), Add_detail(0.25), fantasy00d(0.15),
Discard penultimate sigma: True, Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/LoRa/l7.png" title="">
<figcaption><i>Untitled - hires upscale, LoRa, self-prompted</i></figcaption>
<small>
```
extreme quality, cg, (bright colors:0.8), high quality, a beautiful girl with tiger ears,
flat color, (limited palette), high contrast, golden eyes, looking up at viewer, upper body,
portrait, ethereal, (blue skin), crop hoodie, pretty face, natural sunlight, masterpiece,
best quality, 8k, 111cine8matic55
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, (blurry), (cropped), (nsfw:1.6), (shirtless:1.3), ugly face
```
```
Steps: 18, Sampler: Euler a, CFG scale: 7, Seed: 1387472521, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.44, Clip skip: 2,
Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B, LoRa: CinematicStyle(0.65),
Add_detail(0.15), Discard penultimate sigma: True, Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/LoRa/l8.png" title="">
<figcaption><i>Untitled - hires upscale, LoRa, img2img upscale, self-prompted</i></figcaption>
<small>
```
(masterpiece:1.1), (highest quality:1.1), (HDR:1.0), extreme quality, cg, (negative space),
detailed face+eyes, 1girl, fox ears, animal ear fluff, (plants:1.18), (fractal art),
(bright colors), splashes of color background, colors mashing, paint splatter,
complimentary colors, neon, (thunder tiger), compassionate, electric, limited palette,
synthwave, fine art, tan skin, upper body, (green and orange:1.2), time stop, sy3, SMM
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, (blurry), (cropped), (nsfw:1.6), (cleavage:1.3)
```
```
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 4079573538, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.44, Hires upscale: 2,
Hires upscaler: R-ESRGAN 4x+ Anime6B, LoRa: Cool and Stylish(0.45), Bubble Drip(0.45),
Add_detail(0.15), Discard penultimate sigma: True, Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/LoRa/l9.png" title="">
<figcaption><i>Untitled - older hires upscale, textual inversion, sourced prompt</i></figcaption>
<small>
```
(nvinkpunk:1.2) (snthwve style:0.8) lion, anthro, lightwave, sunset, intricate,
highly detailed
```
```
cartoon, 3d, ((disfigured)), ((bad art)), ((deformed)), ((poorly drawn)),
((extra limbs)), ((close up)), ((b&w)), weird colors, blurry
```
```
Steps: 20, Sampler: Euler a, CFG scale: 9, Seed: 890485019, Size: 768x1024,
Denoising strength: 0.58, First pass size: 0x0
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/LoRa/l10.png" title="">
<figcaption><i>Untitled - hires upscale, img2img upscale, LoRa, self-prompted</i></figcaption>
<small>
```
(masterpiece:1.1), (highest quality:1.1), (HDR:1.0), extreme quality, cg, (negative space),
detailed face+eyes, 1girl, fox ears, animal ear fluff, (plants:1.18), (fractal art),
(bright colors), splashes of color background, colors mashing, paint splatter, complimentary
colors, neon, (thunder tiger), compassionate, electric, limited palette, synthwave, fine art,
tan skin, upper body, (teal and white:1.2), time stop, colorfantasystyle, SMM
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, (blurry), (cropped), (nsfw:1.6), (cleavage:1.3)
```
```
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 3467711840, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.44, Clip skip: 2,
Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B, LoRa: Cool and Stylish(0.45),
Color Fantasy(0.55), Add_detail(0.2), Splash_v1.1(0.3), Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/LoRa/l11.png" title="">
<figcaption><i>Untitled - hires upscale, img2img upscale, light manual touch-up, LoRa, majority sourced prompt</i></figcaption>
<small>
```
(masterpiece:1.1), (highest quality:1.1), 1girl, mecha, robot, armor, bodysuit,
mechanical arms, mysterious expression, magical, magical effects like sparkles or energy,
flowing robes, mystical background, rim lighting, side lighting, cinematic light,
ultra high res, 8k uhd, film grain, best shadow, delicate, RAW, light particles,
detailed skin texture, detailed cloth texture, beautiful detailed face, intricate details,
ultra detailed, mecha musume, mechanical arms, headgear, bodysuit, (plants:1.3), gold,
luxury, (violet), (looking at viewer), nijimecha, SMM, fantasy00d
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, text, (blurry), (nsfw, cleavage:1.3)
```
```
Steps: 25, Sampler: Euler a, CFG scale: 7, Seed: 2446927677, Size: 512x640,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.36, Hires upscale: 2,
Hires upscaler: 4x-UltraSharp, LoRa: NijiMecha(0.65), Cool and Stylish(0.35),
Add_detail(0.25), fantasy00d(0.15), Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/LoRa/l12.png" title="">
<figcaption><i>Untitled - hires upscale, LoRa, self-prompted</i></figcaption>
<small>
```
(masterpiece:1.1), (highest quality:1.1), (HDR:1.0), extreme quality, cg, (negative space),
detailed face+eyes, 1girl, fox ears, animal ear fluff, (plants:1.18), (fractal art),
(bright colors), splashes of color background, colors mashing, paint splatter,
complimentary colors, neon, (thunder tiger), compassionate, electric, limited palette,
synthwave, fine art, tan skin, upper body, (green and orange:1.2), time stop, sy3, SMM
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, (blurry), (cropped), (nsfw:1.6), (cleavage:1.3)
```
```
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3438019576, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.44, Clip skip: 2,
Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B, LoRa: Cool and Stylish(0.45),
Bubble Drip(0.45), Add_detail(0.15), Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/LoRa/l13.png" title="">
<figcaption><i>Untitled - hires upscale, LoRa, self-prompted</i></figcaption>
<small>
```
(masterpiece:1.1), (highest quality:1.1), (HDR:1.0), extreme quality, cg, (negative space),
detailed face+eyes, 1girl, fox ears, animal ear fluff, (plants:1.18), (fractal art),
(bright colors), splashes of color background, colors mashing, paint splatter,
complimentary colors, neon, (thunder tiger), compassionate, electric, limited palette,
synthwave, fine art, tan skin, upper body, (green and orange:1.2), time stop, sy3, SMM
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, (blurry), (cropped), (nsfw:1.6), (cleavage:1.3)
```
```
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 4240446306, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.44, Clip skip: 2,
Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B, LoRa: Cool and Stylish(0.45),
Bubble Drip(0.45), Add_detail(0.15), Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/LoRa/l14.png" title="">
<figcaption><i>Untitled - hires upscale, LoRa, sourced prompt</i></figcaption>
<small>
```
masterpiece, best quality, 8K, highly detailed, 4k, very long hair, (hair flaps),
(shiny hair), flipped hair, grin, ((monochrome)), yellow eyes, close-up, straw hat,
(shaded face), white sundress, slit pupils, (anime) by WLOP, trending on ArtStation,
bj_Fault art, SMM
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, (blurry), (cropped), (nsfw:1.6), (cleavage:1.3), (fire:1.3)
```
```
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 283841059, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.44, Hires upscale: 2,
Hires upscaler: R-ESRGAN 4x+ Anime6B, LoRa: Cool and Stylish(0.45), Fault Art(0.55),
FilmVelvia2(0.1), Add_detail(0.15), Niji Default Style_v2(0.2),
Discard penultimate sigma: True, Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/LoRa/l15.png" title="">
<figcaption><i>Untitled - hires upscale, LoRa, sourced prompt</i></figcaption>
<small>
```
(masterpiece:1.1), (highest quality:1.1), (HDR:1.0), wallpaper, (highly detailed),
[street, wall:(1girl), (solo), pale skin, [black eyes|red eyes], (hollow eyes), black hair,
long hair, (liquid hair:1.2), floating hair, bangs, expressionless, (black goo:1.4),
(white dress:1.2), (white skirt), white, intricated filigree, (stained clothes:1.2):0.25],
(black goo:1.4), (black dripping), (black splashing:0.85), (tentacles:0.85), shiny,
[:face focus, upper body, (cowboy shot), lateral view, dutch angle, dynamic:0.25],
[white background|black goo], volumetric lighting, (high contrast:0.85),
(limited palette:0.65), colorfantasystyle, SMM
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, (blurry), (cropped), (nsfw:1.6), (cleavage:1.3)
```
```
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 1835800510, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.44, Hires upscale: 2,
Hires upscaler: R-ESRGAN 4x+ Anime6B, LoRa: Cool and Stylish(0.45), Color Fantasy(0.55),
Add_detail(0.2), Splash_v1.1(0.3), Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/LoRa/l16.png" title="">
<figcaption><i>Untitled - hires upscale, LoRa, initially sourced prompt</i></figcaption>
<small>
```
masterpiece, best quality, 1girl, closed eyes, upper body, splashing, abstract, psychedelic,
neon, (honeycomb pattern), (creative:1.3), sy3, SMM, fantasy00d
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, text, (blurry), (nsfw, cleavage:1.3)
```
```
Steps: 25, Sampler: Euler a, CFG scale: 7, Seed: 3121169266, Size: 512x640,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.36, Hires upscale: 2,
Hires upscaler: 4x-UltraSharp, LoRa: Bubble Drip(0.45), Cool and Stylish(0.45),
Add_detail(0.35), fantasy00d(0.25), Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
</div></details>
---
<details><summary style="margin-top: -5%; margin-bottom: -5%; cursor: pointer;">Scenery</summary>
<div align="center">
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Scenery/s1.png" title="Wayward Insight">
<figcaption><i>Wayward Insight - hires upscale, img2img upscale, self-prompted</i></figcaption>
<small>
```
no metadata
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Scenery/s2.png" title="">
<figcaption><i>Untitled - hires upscale, self-prompted</i></figcaption>
<small>
```
extreme quality, cg, photorealistic, (bright colors:0.9), light, elemental,
water magic, blue, water, magical, righteous, (outdoors), masterpiece, 8k,
(tone mapping, hyper focus:0.5), limited palette, (dappled sunlight:0.7),
reflective surface, orange, wholesome, (varied depth of field:0.8),
complimentary colors, particle dust, green embers, (ancient), (pyramid:0.7),
flood, (vegetation), plants, (destiny 2:1.25), (no humans:1.5),
(neon cyber technology:1.27), (architecture:0.8), magical, ruins,
(catacomb:0.7), ripples, granite, detailed texture, flower, marble, vine
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3),
(worst quality:1.3), censorship, ugly, old, deformed, amateur drawing, odd,
fat, cell shading, lowres, bad anatomy, bad hands, text, error,
missing fingers, extra digit, fewer digits, cropped, normal quality,
jpeg artifacts, signature, watermark, username, blurry, out of focus,
cell shading, watercolor, (low quality:1.4), asymmetrical eyes, metal,
multicolored hair, red eyeliner, (multicolored hair:1.5), off center,
dragon horns, bull horns, goat horns, single horn, pointy ears,
(tanlines:1.5), lowres, error, bad anatomy, bad hands, missing fingers,
extra digit, fewer digits, (cropped:1.2), watermark, username, (signature:1.4),
(text), (author), blurry, out of focus, deformed, amateur drawing, long neck,
extra fingers, by bad-artist, missing fingers, image sample, jpeg artifacts,
gif artifacts, wallpaper forced, lossy-lossless, lossless-lossy, corrupted file,
duplicate, redrawn, screenshot, game screenshot, bad art, amateur drawing, odd,
((merged limbs)), ((conjoined limbs)), (poorly drawn:1.3), poorly drawn hands,
poorly drawn face, deformities, conjoined, stretched torso, (heterochromia),
cel shading, (disproportioned), bad face, (bad details), sloppy, (underwater,
gun, weapon, tall, metal, (city), blurry foreground
```
```
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 8, Seed: 58025803,
Size: 512x768, Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.44,
Clip skip: 2, Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B,
Discard penultimate sigma: True, Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Scenery/s3.png" title="">
<figcaption><i>Untitled - hires upscale, self-prompted</i></figcaption>
<small>
```
extreme quality, cg, (colorful), (a realistic photo of a mountain
aesthetic scene:1.2), visually appealing, ,, (varied depth of field:0.76),
,, masterpiece, 8k, tone mapping, hyper focus, indigo, limited palette,
Ufotable aesthetic, (clarity), (Diarmuid Byron O'Connor), (smug:0.8),
(no humans:1.6), picturesque scenery, landscape, plant, horizon, sky, ,,
epic, nature
```
```
nsfw, censorship, deformed, amateur drawing, odd, cell shading, lowres,
bad anatomy, bad hands, text, error, missing fingers, extra digit,
fewer digits, cropped, worst quality, low quality, normal quality,
jpeg artifacts, signature, watermark, username, blurry, out of focus,
cell shading, watercolor, (worst quality, low quality:1.4), (humans:1.5),
(girl:1.5), (1girl:1.6), (1boy:1.5), (creature:1.5)
```
```
Steps: 20, Sampler: DDIM, CFG scale: 13, Seed: 2983311593, Size: 960x576,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.36,
Clip skip: 2, Hires upscale: 2, Hires steps: 18,
Hires upscaler: R-ESRGAN 4x+ Anime6B
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Scenery/s4.png" title="Zen Garden">
<figcaption><i>Zen Garden - older hires upscale, sourced prompt</i></figcaption>
<small>
```
photo of a beautiful zen garden in the moutains, golden ratio,
cinematic lighting, intricate details, 8k detail post processing,
hyperealistic, professional photograph, soft focus, f2.8, postprocessing
```
```
3d, digital art, lowres, bad anatomy, bad hands, text, error,
missing fingers, extra digit, fewer digits, cropped, worst quality,
low quality, normal quality, jpeg artifacts,signature, watermark, username,
blurry, artist name, pattern, patterns, black and white
```
```
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10.5, Seed: 1997751605,
Size: 640x960, Denoising strength: 0.58, First pass size: 0x0
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Scenery/s5.png" title="">
<figcaption><i>Untitled - hires upscale, self-prompted</i></figcaption>
<small>
```
realistic photo of reflective marble flooring, nature, luxury, (anime throne),
plants, supreme, pillar, extreme quality, masterpiece, 8k, depth of field,
intricate details, mirrorless
```
```
censorship, ugly, old, deformed, amateur drawing, odd, fat, lowres, bad anatomy,
bad hands, error, missing fingers, extra digit, fewer digits, cropped,
worst quality, low quality, normal quality, jpeg artifacts, signature,
watermark, username, ((blurry)), ((out of focus)), watercolor,
(worst quality, low quality:1.4), (seeds), grain, (hand)
```
```
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7.6, Seed: 854718657,
Size: 1024x1024, Denoising strength: 0.58, First pass size: 0x0
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Scenery/s6.png" title="Cloud">
<figcaption><i>Cloud - hires upscale, self-prompted</i></figcaption>
<small>
```
extreme quality, cg, solo, (colorful:0.7),
(a photo of a futuristic aesthetic scene:1.2), visually appealing, depth of field,
fog, masterpiece, 8k, tone mapping, hyper focus, white, (limited palette:0.85),
(clarity), (by Jarrod Castaing:1.45), curious, picturesque scenery, landscape,
above clouds,sand, epic, nature, adorable, (anime wallpaper), wallpaper engine,
photorealistic, dappled sunlight, (alone)
```
```
nsfw, censorship, deformed, amateur drawing, odd, cell shading, lowres, bad anatomy,
bad hands, text, error, missing fingers, extra digit, fewer digits, cropped,
worst quality, low quality, normal quality, jpeg artifacts, signature, watermark,
username, blurry, out of focus, cell shading, watercolor,
(worst quality, low quality:1.4), (1boy:1.5), (creature:1.5), (ufo:1.5), (halo:1.5),
(engine), stretched image, (book), (reading), deformed body, deformed figure,
mutated body, mutated legs
```
```
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7.5, Seed: 97890808,
Size: 832x512, Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.58,
Clip skip: 4, Hires upscale: 2, Hires steps: 18, Hires upscaler: R-ESRGAN 4x+ Anime6B
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Scenery/s7.png" title="Foresight">
<figcaption><i>Foresight - hires upscale, img2img upscale, LoRa, initially sourced prompt</i></figcaption>
<small>
```
extreme quality, cg, (bright colors:0.8), A up close photo of of the backside of a
woman standing on a cliff overlooking a vast, serene lake. She is looking away from
the camera out at the sunset. The mountains in the distance are reflected in the water,
and the golden hues of the setting sun paint the sky in breathtaking colors.,
masterpiece, best quality, 8k, 111cine8matic55
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, (blurry), (cropped), (nsfw:1.6), (shirtless:1.3)
```
```
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 1825374680, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.44, Hires upscale: 2,
Hires upscaler: R-ESRGAN 4x+ Anime6B, LoRA: CinematicStyle(0.65), Add_Detail(0.15),
Discard penultimate sigma: True, Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Scenery/s8.png" title="">
<figcaption><i>Untitled - hires upscale, LoRa, initially sourced prompt</i></figcaption>
<small>
```
(masterpiece:1.1), (highest quality:1.1), (HDR:1.0), (cinematic masterpiece),
(cinematic spotlight), ((caustic)), ultra wide shot, super detail, cinematic lighting,
HDR, impressive, ultra resolution photo of an imaginative and otherworldly scene of an
ocean filled with planets, stars, and nebulas, hyperrealistic surrealism, award winning
masterpiece with incredible details, epic stunning, (natural skin texture, hyperrealism,
soft light, sharp), fantasy00d
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, text, (blurry)
```
```
Steps: 25, Sampler: Euler a, CFG scale: 7, Seed: 1074882576, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.36, Hires upscale: 2,
Hires upscaler: 4x-UltraSharp, LoRA: Add_Detail(0.35), Fantasy00d(0.3),
Discard penultimate sigma: True, Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Scenery/s9.png" title="">
<figcaption><i>Untitled - hires upscale, self-prompted</i></figcaption>
<small>
```
extreme quality, cg, solo, (colorful:0.85), (a photo of a jungle aesthetic scene:1.2),
visually appealing, depth of field, ,, masterpiece, 8k, tone mapping, hyper focus, black,
(limited palette:0.85), (clarity), (by Mala Breuer:1.45), longing, picturesque scenery,
landscape, coast,island,sky, epic, nature, adorable, (anime wallpaper), wallpaper engine,
photorealistic, dappled sunlight, (alone), mood lighting, best shadow, high fantasy
```
```
nsfw, censorship, deformed, amateur drawing, odd, cell shading, lowres, bad anatomy,
bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality,
low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry,
out of focus, cell shading, watercolor, (worst quality, low quality:1.4), (1boy:1.5),
(creature:1.5), (ufo:1.5), (halo:1.5), (engine), stretched image, (book), (reading),
deformed body, deformed figure, mutated body, mutated legs
```
```
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7.5, Seed: 2919981875, Size: 832x512,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.58, Clip skip: 3,
Hires upscale: 2, Hires steps: 18, Hires upscaler: R-ESRGAN 4x+ Anime6B
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Scenery/s10.png" title="">
<figcaption><i>Untitled - hires upscale, initially sourced prompt</i></figcaption>
<small>
```
(masterpiece:1.1), (highest quality:1.1), (HDR:1.3), Architectural Digest photo of a
maximalist blue (vaporwave/steampunk/solarpunk) living room with lots of flowers and
plants, golden light, hyperrealistic surrealism, award winning masterpiece with
incredible details, epic stunning, (bedroom aesthetic)
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, ((blurry),) (cropped), ((out of focus)), watercolor, ugly, grain
```
```
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 8, Seed: 1756742325, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.44, Clip skip: 2,
Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B, Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Scenery/s11.png" title="">
<figcaption><i>Untitled - hires upscale, img2img upscale, light manual touch-up, initially sourced prompt</i></figcaption>
<small>
```
night, scenery, (mountanious_horizon), horizon, sunset, city, river, city faraway, sky,
cloudy_sky, night, dark, starry_sky, fantasy, fantasy_city, fantasy world, (((medieval))),
((mountain, mountains))
```
```
(worst quality, low quality:1.4), bad anatomy, extra fingers, extra hand, crooked fingers,
badly sized fingers, cropped
```
```
Steps: 21, Sampler: DPM++ SDE Karras, CFG scale: 6.5, Seed: 502209660, Size: 832x512,
Denoising strength: 0.58, Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Scenery/s12.png" title="">
<figcaption><i>Untitled - hires upscale, self-prompted</i></figcaption>
<small>
```
extreme quality, cg, photorealistic, (bright colors:0.9), desert oasis scene, sand,
(green oasis:1.3), (oasis water:1.34), outdoors, masterpiece, 8k, (tone mapping,
hyper focus:0.5), limited palette, red, scared, (varied depth of field:0.8),
complimentary colors, particle dust, (ancient), (pyramid:0.8), loose desert,
(destiny 2:1.25), (no humans:1.5), (neon cyber technology:1.27), (architecture:0.8),
magical, ruins, (catacomb:0.7), activation
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
censorship, ugly, old, deformed, amateur drawing, odd, fat, cell shading, lowres,
bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits,
cropped, normal quality, jpeg artifacts, signature, watermark, username, blurry,
out of focus, cell shading, watercolor, (low quality:1.4), asymmetrical eyes,
metal, multicolored hair, red eyeliner, (multicolored hair:1.5), off center,
dragon horns, bull horns, goat horns, single horn, pointy ears, (tanlines:1.5),
lowres, error, bad anatomy, bad hands, missing fingers, extra digit, fewer digits,
(cropped:1.2), watermark, username, (signature:1.4), (text), (author), blurry,
out of focus, deformed, amateur drawing, long neck, extra fingers, by bad-artist,
missing fingers, image sample, jpeg artifacts, gif artifacts, wallpaper forced,
lossy-lossless, lossless-lossy, corrupted file, duplicate, redrawn, screenshot,
game screenshot, bad art, amateur drawing, odd, ((merged limbs)),
((conjoined limbs)), (poorly drawn:1.3), poorly drawn hands, poorly drawn face,
deformities, conjoined, stretched torso, (heterochromia), cel shading, (disproportioned),
bad face, (bad details), sloppy, (underwater, gun, weapon, tall, metal, (city),
blurry foreground
```
```
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 8, Seed: 1266378264, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.44, Clip skip: 2,
Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B, Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Scenery/s13.png" title="">
<figcaption><i>Untitled - hires upscale, self-prompted</i></figcaption>
<small>
```
extreme quality, cg, (colorful), (a realistic photo of a jungle aesthetic scene:1.2),
visually appealing, ,, (varied depth of field:0.76), fog, masterpiece, 8k, tone mapping,
hyper focus, magenta, limited palette, Ufotable aesthetic, (clarity), (Jef Murray),
(loving:0.8), (no humans:1.6), picturesque scenery, landscape, plant, horizon, sky, ,,
epic, nature
```
```
nsfw, censorship, deformed, amateur drawing, odd, cell shading, lowres, bad anatomy,
bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality,
low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry,
out of focus, cell shading, watercolor, (worst quality, low quality:1.4), (humans:1.5),
(girl:1.5), (1girl:1.6), (1boy:1.5), (creature:1.5)
```
```
Steps: 20, Sampler: DDIM, CFG scale: 13, Seed: 680413774, Size: 960x576,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.36, Hires upscale: 2,
Hires steps: 18, Hires upscaler: R-ESRGAN 4x+ Anime6B
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Scenery/s14.png" title="">
<figcaption><i>Untitled - hires upscale, self-prompted</i></figcaption>
<small>
```
extreme quality, cg, solo, (colorful:0.85), (a photo of a ancient chinese aesthetic
scene:1.2), visually appealing, depth of field, ,, masterpiece, 8k, tone mapping,
hyper focus, violet, (limited palette:0.85), (clarity), (by Otto Mengelberg:1.45),
begging, picturesque scenery, landscape, sand,desert, epic, nature, adorable,
(anime wallpaper), wallpaper engine, photorealistic, dappled sunlight, (alone),
mood lighting, best shadow, high fantasy
```
```
nsfw, censorship, deformed, amateur drawing, odd, cell shading, lowres, bad anatomy,
bad hands, text, error, missing fingers, extra digit, fewer digits, cropped,
worst quality, low quality, normal quality, jpeg artifacts, signature, watermark,
username, blurry, out of focus, cell shading, watercolor, (worst quality, low quality:1.4),
(1boy:1.5), (creature:1.5), (ufo:1.5), (halo:1.5), (engine), stretched image, (book),
(reading), deformed body, deformed figure, mutated body, mutated legs
```
```
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7.5, Seed: 2430179145, Size: 832x512,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.58, Clip skip: 4,
Hires upscale: 2, Hires steps: 18, Hires upscaler: R-ESRGAN 4x+ Anime6B
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Scenery/s15.png" title="">
<figcaption><i>Untitled - hires upscale, img2img upscale, self-prompted</i></figcaption>
<small>
```
no metadata
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Scenery/s16.png" title="">
<figcaption><i>Untitled - hires upscale, majority sourced prompt</i></figcaption>
<small>
```
(masterpiece:1.0), (highest quality:1.12), (HDR:1.0), (dark shot:1.22), old, (RAW photo),
water, trending on ArtStation, alien landscape and vegetation, adhesives, middle ground,
(tilt shift photography:1.2)
```
```
EasyNegative, (badv2:0.8), (badhandv4:1.18), (bad quality:1.3), (worst quality:1.3),
watermark, (blurry)
```
```
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1498514055, Size: 512x768,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.44, Hires upscale: 2,
Hires upscaler: R-ESRGAN 4x+ Anime6B, Discard penultimate sigma: True,
Version: v1.0.0-pre-1578-g394ffa7b
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Scenery/s17.png" title="">
<figcaption><i>Untitled - hires upscale, sourced prompt</i></figcaption>
<small>
```
masterpiece, best quality, wide shot of autumn forest scenery, sunset, sunbeams
```
```
(worst quality, low quality:1.4), pixelated, film grain
```
```
Steps: 20, Sampler: DPM++ 2S a Karras, CFG scale: 7, Seed: 1275953479, Size: 768x576,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.58, Clip skip: 3,
Hires upscale: 2, Hires steps: 18, Hires upscaler: R-ESRGAN 4x+ Anime6B,
Discard penultimate sigma: True
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Scenery/s18.png" title="">
<figcaption><i>Untitled - no upscale, self-prompted</i></figcaption>
<small>
```
extreme quality, cg, rural prairie, tall grass, mountain, horizon, dappled sunlight,
bush, 8k, (no humans:1.5)
```
```
badv2, (worst quality, low quality:1.4)
```
```
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7.6, Seed: 3318385742, Size: 768x768
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Scenery/s19.png" title="">
<figcaption><i>Untitled - hires upscale, self-prompted</i></figcaption>
<small>
```
extreme quality, cg, (colorful:0.7), (a realistic photo of a winter aesthetic scene:1.2),
visually appealing, depth of field, ,, masterpiece, 8k, tone mapping, hyper focus, red,
(limited palette:0.85), Ufotable aesthetic, (clarity), (by Robin Wood:1.45), serious,
(no humans:1.6), picturesque scenery, landscape, plant, horizon, sky, water, epic, nature
```
```
nsfw, censorship, deformed, amateur drawing, odd, cell shading, lowres, bad anatomy,
bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality,
low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry,
out of focus, cell shading, watercolor, (worst quality, low quality:1.4), (humans:1.5),
(girl:1.5), (1girl:1.6), (1boy:1.5), (creature:1.5), (ufo:1.5)
```
```
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7.6, Seed: 896865056, Size: 960x576,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.58, Clip skip: 2,
Hires upscale: 2, Hires steps: 18, Hires upscaler: R-ESRGAN 4x+ Anime6B
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Scenery/s20.png" title="">
<figcaption><i>Untitled - hires upscale, self-prompted</i></figcaption>
<small>
```
extreme quality, cg, solo, (colorful:0.85), (a photo of a tribal aesthetic scene:1.2),
visually appealing, depth of field, petals, masterpiece, 8k, tone mapping, hyper focus,
yellow, (limited palette:0.85), (clarity), (by Tinus van Doorn:1.45), crying,
picturesque scenery, landscape, desert,plant, epic, nature, adorable, (anime wallpaper),
wallpaper engine, photorealistic, dappled sunlight, (alone), mood lighting, best shadow,
high fantasy
```
```
nsfw, censorship, deformed, amateur drawing, odd, cell shading, lowres, bad anatomy,
bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality,
low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry,
out of focus, cell shading, watercolor, (worst quality, low quality:1.4), (1boy:1.5),
(creature:1.5), (ufo:1.5), (halo:1.5), (engine), stretched image, (book), (reading),
deformed body, deformed figure, mutated body, mutated legs
```
```
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7.5, Seed: 680247863, Size: 832x512,
Model hash: 1504f30200, Model: SuperMix1, Denoising strength: 0.58, Clip skip: 4,
Hires upscale: 2, Hires steps: 18, Hires upscaler: R-ESRGAN 4x+ Anime6B
```
</small>
<img style="margin-top: 7%;" src="https://huggingface.co/Ocean3/SuperMix/resolve/main/2)%20Previews/Scenery/s21.png" title="">
<figcaption><i>Untitled - older hires upscale, self-prompted</i></figcaption>
<small>
```
realistic photo of a (seedling emerging through dirt:1.3), extreme quality, masterpiece,
8k, depth of field, intricate details, Zhang Kechun
```
```
censorship, ugly, old, deformed, amateur drawing, odd, fat, lowres, bad anatomy, bad hands,
error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality,
normal quality, jpeg artifacts, signature, watermark, username, ((blurry)),
((out of focus)), watercolor, (worst quality, low quality:1.4), (seeds), grain
```
```
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7.6, Seed: 2434833613, Size: 1024x1024,
Denoising strength: 0.58, First pass size: 0x0
```
</small>
</div></details>
---
<div align="center" style="margin-top: -4%;"><a href="https://huggingface.co/Ocean3/SuperMix/tree/main/Previews" target="_blank">View More</a></div>
<div align="center"><p style="font-size:90%; background-color:#f5f6ff; color:#173978;">Note</p></div>
<p style="font-size:90%;">SuperMix1 was originally merged and tested on a much <a href="https://github.com/AUTOMATIC1111/stable-diffusion-webui/tree/4b3c5bc24bffdf429c463a465763b3077fe55eb8">older version</a> of Automatic1111 WebUI.
Due to this, I suggest enabling the -> <i>settings/compatibility/</i><b>use old karras scheduler sigmas (0.1 to 10)</b> compatibility setting when using Karras samplers or when trying to recreate some of the example images.
This is completely optional and shouldn't be needed - I have not personally tested enough with this setting turned off on the newer webUI versions.</p>
---
# General Use
<img src="https://huggingface.co/Ocean3/SuperMix/resolve/main/img/img3.png" title=general-use>
This model is fairly versatile when it comes to general use configurations and parameters.
<br>In short, I would suggest starting simple and experimenting with what works best for you and your prompt at the time. Perhaps try some older prompts and configurations, one of the examples, or start from scratch and go from there.
SuperMix1 really shines with a good prompt. You may experience some messy anatomy/hands until you find a good configuration + prompt - you'll know when you do. Keep in mind this model is geared more toward portrait style generations.
There are many different examples of various configurations used in the <a href="./SuperMix#previews">Previews</a> section and <a href="https://civitai.com/models/89213?modelVersionId=94946" target="_blank">CivitAI</a> pages - feel free to explore your own styles.
An additional img2img upscale at lower denoising values can bring a clean polish to output images. Keep in mind you may lose some very fine detailing depending on your parameters, though you can also merge two upscales together for the best of both ranges.
---
<details><summary style="margin-top: -5%; margin-bottom: -5%; cursor: pointer;">Prompts</summary>
<div style="margin-top: 7%;">
SuperMix can excel with both simple and complex prompt styles. Start simple and small, then expand from there. 👑 Prompts are king in my opinion - one of the largest factors in a generation. Be mindful of what you're using and how you're using it; what may conflict with something else; and how everything plays together with your other parameters
<br><i>(i.e. sampler, steps, scale, clip skip, seed, lora, etc.)</i>
<br><br><b>Note:</b> artist tokens can hold a lot of weight in outputs, use at your own discretion.
* **Positive Prompts:** Simple prompts go a long way as a starting point, but you can really direct the model's style with some added structure. Try anything you find that works well with your other parameters. Here are a few starting points.
<small>
```
(masterpiece:1.1), (highest quality:1.1), (HDR:1.0)
```
```
extreme quality, cg, detailed face+eyes, (colorful:0.8), <content>, masterpiece, 8k,
tone mapping, hyper focus
```
</small>
* **Negative Prompts:** This model can do well with a simple negative prompt or a negative embedding(s), but it can also do really well with some structure in the negative prompt for styling direction, undesired qualities, etc. Keep an eye out for tokens that conflict with your positive prompt, and avoid going overly complex - but try anything that works!
<small>
```
(bad quality:1.3), (worst quality:1.3)
```
```
EasyNegative, (bad_prompt_version2:0.8), (badhandv4:1.18), (bad quality:1.3),
(worst quality:1.3), watermark, (blurry), (cropped), (nsfw:1.3), (cleavage:1.3)
```
</small>
You can check the <a href="./SuperMix#previews">Previews</a> for more examples.</div></details>
---
<details><summary style="margin-top: -5%; margin-bottom: -5%; cursor: pointer;">Hires Fix</summary>
<div style="margin-top: 7%;">
* **Hires Denoising:** I tend to stay in a range between **~0.3-0.6**, though I haven't really tried much else so far. Experiment to see what works best for your parameters and prompts at the time.
* **Hires Upscaler:** Upscalers seem to produce slightly different results between themselves - though I find any of them seem to work. I'm not sure what is typically used, though I mainly use **R-ESRGAN 4x+ Anime6B** or **4x-UltraSharp**. Use what you think is best as always.</div></details>
---
<details><summary style="margin-top: -5%; margin-bottom: -5%; cursor: pointer;">Sampling Steps</summary>
<div style="margin-top: 7%;">
I suggest starting with **~18-30** step values; you can go lower or higher and see what works well with your prompt, sampler and other parameters.</div></details>
---
<details><summary style="margin-top: -5%; margin-bottom: -5%; cursor: pointer;">Samplers</summary>
<div style="margin-top: 7%;">
Most of my tests with this model were using samplers:
* **Euler a**
* **DPM++ 2M Karras**
* **DPM++ SDE Karras**
* **DDIM**
I also tried a bit of **DPM++ 2S a Karras**, and **PLMS** samplers.
<br>I am unsure about the rest. Each sampler has its own styling and plays differently with your prompt and other parameters at the time.
<br><br>I suggest trying out what you typically use, then try out some of the others and see how they play with your other configurations and prompt.
<br><br>Do note that some samplers may make use of certain terms/tokens of your prompt and other parameters differently than others. You may find better results with one sampler and "prompt a", then better results with another sampler and "prompt b" etc.</div></details>
---
<details><summary style="margin-top: -5%; margin-bottom: -5%; cursor: pointer;">Scale</summary>
<div style="margin-top: 7%;">
**CFG Scale** may largely be dependent on your prompt, sampler, etc. Though, I generally suggest starting at default **7** and adjusting from there -> **~6.5-10**
<br><br>I have had good results with higher scales **~13-16** on samplers such as DDIM, depending on the prompt and other factors used. This is not to say lower values don't work well too. The same can be said for other samplers and value ranges.
<br>Experiment and see what works best for you and your prompt 👍</div></details>
---
<details><summary style="margin-top: -5%; margin-bottom: -5%; cursor: pointer;">Clip Skip</summary>
<div style="margin-top: 7%;">
* **Clip Skip 1** - great with most samplers, especially Euler a in my experience.
* **Clip Skip 2** - also great with most samplers, tends to be more 'literal' with various tokens in a prompt depending on sampler and other parameters.
<br><br>Both work great and will each produce different styles and results - this is part of the reason I didn't go with some of the other test model variations, due to the imbalance of quality between the two clip skip settings. I suggest trying both, or even comparing them side by side in the same generation with the built-in X/Y/Z plot script.
<br><br>You can always try higher as well, I have seen some good results with **Clip Skip 3-6**.</div></details>
---
<details><summary style="margin-top: -5%; margin-bottom: -5%; cursor: pointer;">VAE</summary>
<div style="margin-top: 7%;">
Use any VAE you prefer. I typically use **vae-ft-ema-560000-ema**.
* **"SuperMix_A.vae"** (renamed SD vae-ft-ema-560000-ema.vae)
<br>Recommended - bright vivid/bold colors
* **"SuperMix_B.vae"** (renamed kl-f8-anime2.vae)
<br>Very Similar - different details at times
* **"SuperMix_C.vae"** (renamed Anything_v3.vae)
<br>Another option - moderate colors/saturation in comparison
<br>**vae-ft-mse-840000-ema** and [ClearVAE_V2.3](https://civitai.com/models/22354/clearvae_) can also be good options.
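Outside the webUI, swapping VAEs can also be done in code. This is not part of the original card - just a minimal `diffusers` sketch, where the `stabilityai/sd-vae-ft-ema` repo (the published ft-EMA VAE) and the local checkpoint path are assumptions:
```python
# Not from the original card - a minimal diffusers sketch; the repo/paths are assumptions.
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

# vae-ft-ema-560000-ema corresponds to Stability AI's published "sd-vae-ft-ema" weights
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-ema", torch_dtype=torch.float16)

pipe = StableDiffusionPipeline.from_single_file(
    "SuperMix1.safetensors", torch_dtype=torch.float16
)
pipe.vae = vae  # swap in the preferred VAE before generating
pipe.to("cuda")

image = pipe("extreme quality, cg, detailed face+eyes, masterpiece, 8k").images[0]
image.save("supermix_sample.png")
```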
<br><br>**Note:** model names containing "-bv" or "-bakedVAE" include VAE files baked-in making the use of these files no longer needed.</div></details>
---
<details><summary style="margin-top: -5%; margin-bottom: -5%; cursor: pointer;">Upscaling</summary>
<div style="margin-top: 7%;">
A secondary img2img upscaling after generation can really bring out clarity in images and iron out details with this model. Keep in mind this can also soften some texturing detail depending on your settings. This is not needed of course, but can really sharpen up some generations. Use the settings or extension(s) that work best for you.
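For those working outside the webUI, the same low-denoise polish pass can be sketched with `diffusers`. This is not from the original card - the file names are placeholders, and the card itself describes the built-in SD upscale script rather than this code:
```python
# Illustrative only - a rough low-denoise img2img "polish" pass in diffusers.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_single_file(
    "SuperMix1.safetensors", torch_dtype=torch.float16
).to("cuda")

init = Image.open("generation_upscaled.png").convert("RGB")
polished = pipe(
    prompt="extreme quality, cg, detailed face+eyes, masterpiece, 8k",
    negative_prompt="bad quality, worst quality, blurry",
    image=init,
    strength=0.2,           # low denoising keeps the composition while ironing out details
    guidance_scale=7,
    num_inference_steps=20,
).images[0]
polished.save("generation_polished.png")
```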
<br><br>I generally use the built in SD upscale script with:
* the same **base model**
* the **same or similar** prompt
* **DPM++ SDE Karras** sampler
* **20** sampling steps
* **7** cfg scale
* a low denoising strength **~0.08-0.3**
* a random seed, **-1**
* tile overlap **~176-208**
* scale factor **x2**
* upscaler **R-ESRGAN 4x+ Anime6B** or **4x-UltraSharp**
* loRa usually turned off</div></details>
---
<details><summary style="margin-top: -5%; margin-bottom: -5%; cursor: pointer;">ENSD & Eta</summary>
<div style="margin-top: 7%;">
I've only used the webUI defaults:
* **0** Eta noise seed delta
* **0** Eta for DDIM (noise multiplier)
* **1** Eta for ancestral samplers (noise multiplier)</div></details>
---
<details><summary style="margin-top: -5%; margin-bottom: -5%; cursor: pointer;">Other Settings</summary>
<div style="margin-top: 7%;">
For the example images I used the -> <i>settings/compatibility/</i><b>use old karras scheduler sigmas (0.1 to 10)</b> compatibility setting, which affects Karras samplers.
This is completely optional and shouldn't be needed. This setting better replicates some of the older webUI versions. I have not personally tested enough with this setting turned off on the newer webUI versions.</div></details>
---
<div align="center"><p style="font-size:90%; background-color:#fff0f0; color:#8a0000;">Disclaimer</p></div>
<p style="font-size:90%; margin-top: -5px;">This model(s) may output NSFW content unintentionally depending on parameters used. Make sure you tailor your prompts accordingly. For example "nsfw" in the negative prompt.
<br><br>The purpose of sharing this model is not to showcase obscene material in a public forum. The use of this learning model is entirely at the discretion of the user, and they have the freedom to choose whether or not to create SFW or NSFW content. The decision of whether to engage with SFW or NSFW content lies with the user and their own personal preferences. The ai model(s) do not contain explicit visual content that can be accessed easily.</p>
---
# Embeddings
I initially hadn't used any negative embeddings, but I have tried out a few recently, as shown in some of the preview images. Try any you find reasonable, or none at all 👍.
<br><br>Here are a few **negative embeddings**:
* <a href="https://civitai.com/models/55700/badprompt-negative-embedding">bad_prompt_version2</a> (aka "badv2" in the example images)
* <a href="https://huggingface.co/datasets/gsdf/EasyNegative">EasyNegative</a>
* <a href="https://civitai.com/models/16993?modelVersionId=20068">badhandv4</a>
---
# Recipes
<img src="https://huggingface.co/Ocean3/SuperMix/resolve/main/img/img4.png" title=recipes>
## SuperMix1
| Model | Hash | Weighted Sum |
| ----------- | ----------- | - |
| [AOM2_hard](https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix2/AbyssOrangeMix2_hard.safetensors) | 0fc198c490 | start |
| [DreamLike_Diffusion_v1.0](https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0/tree/main) | 0aecbcfa2c | 20% (.2) |
| [Protogen_x3.4](https://civitai.com/models/3666/protogen-x34-photorealism-official-release) | 61a37adf76 | 15% (.15)|
| [Anything_v3](https://huggingface.co/Linaqruf/anything-v3.0) | 543bcbc212 | 50% (.5) |
| [Dawgsmix_v1](https://civitai.com/models/1585/dawgsmix)| 05135646f0 | 20% (.2) |
| [Trinart_v2](https://huggingface.co/naclbit/trinart_stable_diffusion_v2/tree/main) | 776af18775 | 20% (.2) |
| [EimisAnimeDiffusion_v1](https://huggingface.co/eimiss/EimisAnimeDiffusion_1.0v/tree/main) | 39ee30561f | 20% (.2) |
| [Healy's Anime Blend_v1.7](https://civitai.com/models/1400/healys-anime-blend) | 8416edf88c | 20% (.2) |
| [8528d-final](https://huggingface.co/ckpt/8528-diffusion/tree/main) | 4a1c4626a9 | 20% (.2) |
| [Anything_v3](https://huggingface.co/Linaqruf/anything-v3.0) | 543bcbc212 | 30% (.3) |
| [AOM2_hard](https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix2/AbyssOrangeMix2_hard.safetensors) | 0fc198c490 | 70% (.7) |
| [HassanBlend_v1.4](https://civitai.com/models/1173/hassanblend-1512-and-previous-versions) | eb172d270d | 2.5% (.025) |
| Zeipher-f222 | 9e2c6ceff3 | 2.5% (.025) |
| [StableDiffusion_v1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5/tree/main) | e1441589a6 | 5% (.05) |
| Cleaned and pruned via [Model Toolkit](https://github.com/arenatemp/stable-diffusion-webui-model-toolkit) | 1504f30200 | **SuperMix1** |
| |
<p style="margin-top:-7%;"><div align="center"><figcaption><i>individual model license(s) listed below</i></figcaption></div></p>
---
# Alternate Versions
<img src="https://huggingface.co/Ocean3/SuperMix/resolve/main/img/img5.png" title=alternate-versions>
---
<details><summary style="margin-top: -5%; margin-bottom: -5%; cursor: pointer;"><b>SuperMix1-Pre</b> <small><i>2edb971aa8</i></small></summary>
<div style="margin-top: 7%;">
The pre-start, or first part, of the SuperMix1 mix.
<br>This model wasn't intended to be a standalone mix, but rather acted as a breakpoint while testing further iterations.
This model can currently produce some unique 2d illustration/flatter color lineart styles merged with a paint-like photographic scenery feel. Simple, a bit messy, and a bit aesthetic!
[Download](https://huggingface.co/Ocean3/SuperMix/resolve/main/3\)%20Alternate%20Versions/SuperMix1-Pre.safetensors) | [CivitAI](https://civitai.com/models/107775/supermix-pre-lineart-style)
</div></details>
---
<details><summary style="margin-top: -5%; margin-bottom: -5%; cursor: pointer;"><b>SuperMix1-Alt1</b> <small><i>bf574ab6e8</i></small></summary>
<div style="margin-top: 7%;">
A minor change in comparison to SuperMix1.
<br>Alt1 uses *Trinart-Derrida* in place of *Trinart2*. Depending on generation, this change can bring out some different results that some may find more pleasing.
I've included this alternate version as another option and personally find both to function quite well.
[Download](https://huggingface.co/Ocean3/SuperMix/resolve/main/3\)%20Alternate%20Versions/SuperMix1-Alt1.safetensors) | [CivitAI](https://civitai.com/models/89213?modelVersionId=115856)
</div></details>
---
<details><summary style="margin-top: -5%; margin-bottom: -5%; cursor: pointer;"><b>SuperMix1-Alt2</b> <small><i>aa6c524a48</i></small></summary>
<div style="margin-top: 7%;">
Another alternate version to SuperMix1.
<br>Alt2 uses AOM3 in place of AOM2_hard and Anything_v4.5 in place of Anything_v3. This mix also adds in (.2) of both Counterfeit_v2.5 and a random version of Chillout Mix_Ni at the end of the (.2) model addition sequence.
[Download](https://huggingface.co/Ocean3/SuperMix/resolve/main/3\)%20Alternate%20Versions/SuperMix1-Alt2.safetensors) | [CivitAI](https://civitai.com/models/89213?modelVersionId=115892)
</div></details>
---
<details><summary style="margin-top: -5%; margin-bottom: -5%; cursor: pointer;"><b>SuperMix1-RT</b> <small><i>4badc436cb</i></small></summary>
<div style="margin-top: 7%;">
Replacement Test
<br>This version removes [DreamLike_Diffusion_v1.0](https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0/tree/main) and [Protogen_x3.4](https://civitai.com/models/3666/protogen-x34-photorealism-official-release) models from the initial mix. They were replaced with [RevAnimated_v1.2.2](https://civitai.com/models/7371/rev-animated) and [DreamShaper_v6.3](https://civitai.com/models/4384?modelVersionId=94081) using the same weights respectively.
Doing so removed any licensing restrictions for this version to my knowledge.
<br><br>The overall styling can be very similar to the original model, though slightly different in some aspects depending on your parameters. This version may also be a good option for mixing into other models while retaining the non-modified creativeml-openrail-m licensing.
[Download](https://huggingface.co/Ocean3/SuperMix/resolve/main/1\)%20Versions/SuperMix1-RT/SuperMix1-RT.safetensors) | [CivitAI](https://civitai.com/models/89213?modelVersionId=119269)
</div></details>
---
# Model Comparisons
As a whole, these comparisons are not fully indicative of each model and their differences. Please keep this in mind while viewing these small sample pools. Click to expand.
<div align="center"><p style="background-color:#fffdf5; color:#363636;"><small>🚧</small></p></div>
---
# License & Use
This model is open access and available to all, with a [**modified CreativeML OpenRAIL-M**](https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0/blob/main/LICENSE.md) license further specifying rights and usage.
<small>1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content.
<br>2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license.
<br>3. You may re-distribute the weights. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the modified CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully).
<br><br>Please read the full license(s) [Stable Diffusion](https://huggingface.co/spaces/CompVis/stable-diffusion-license) and [Dreamlike Diffusion 1.0](https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0/blob/main/LICENSE.md).</small>
---
<details><summary style="margin-top: -5%; margin-bottom: -5%; cursor: pointer;">Use Restrictions <small><i>(click to expand)</i></small></summary>
<div style="margin-top: 7%;"></div>
<small>**You agree not to use the Model or Derivatives of the Model:**
<br>- In any way that violates any applicable national, federal, state, local or international law or regulation
<br>- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way
<br>- To generate or disseminate verifiably false information and/or content with the purpose of harming others
<br>- To generate or disseminate personal identifiable information that can be used to harm an individual
<br>- To defame, disparage or otherwise harass others
<br>- For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation
<br>- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics
<br>- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm
<br>- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories
<br>- To provide medical advice and medical results interpretation
<br>- To generate or disseminate information for the purpose to be used for administration of justice, law enforcement, immigration or asylum processes, such as predicting an individual will commit fraud/crime commitment (e.g. by text profiling, drawing causal relationships between assertions made in documents, indiscriminate and arbitrarily-targeted use).
<br>- To generate NFTs
</small></details>
---
**Terms of use**
<small><br>- You are solely responsible for any legal liability resulting from unethical use of this model(s)
<br>- If you use any of these models for merging, please state what steps you took to do so and clearly indicate where modifications have been made.</small>
<div align="center"><figcaption><i>Note: if you see any conflicts or corrections to be made, please let me know</i></figcaption></div>

|
bwilkie/dqn-SpaceInvadersNoFrameskip-v4
|
bwilkie
| 2023-07-18T21:24:10Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-18T21:23:39Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 257.00 +/- 38.81
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga bwilkie -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga bwilkie -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga bwilkie
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
felipec23/opt-iml-1.3b-finetuned-800
|
felipec23
| 2023-07-18T21:23:38Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-18T21:23:33Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
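For reference, the settings above map onto a `transformers` `BitsAndBytesConfig` roughly as follows (not part of the original card, shown only to make the listed values concrete):
```python
# Equivalent BitsAndBytesConfig for the settings listed above (illustrative only).
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```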
### Framework versions
- PEFT 0.5.0.dev0
|
underactuated/opt-350m_ft_v3
|
underactuated
| 2023-07-18T21:17:34Z | 144 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-18T21:16:14Z |
---
tags:
- generated_from_trainer
model-index:
- name: opt-350m_ft_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-350m_ft_v3
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
LarryAIDraw/chara_FateLordElMelloi_Olga-MarieAnimusphere_v1
|
LarryAIDraw
| 2023-07-18T21:03:52Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-18T20:49:41Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/110528/olga-marie-animusphere-or-fate-series-lord-el-melloi-ii-sei-no-jikenbo
|
bobobert4/dqn-SpaceInvadersNoFrameskip-v4
|
bobobert4
| 2023-07-18T21:03:48Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-18T21:03:11Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 663.00 +/- 118.96
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga bobobert4 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga bobobert4 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga bobobert4
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 150000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 2000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
LarryAIDraw/ShizukaMikazukiV1
|
LarryAIDraw
| 2023-07-18T21:02:17Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-18T20:51:16Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/111089/shizuka-mikazuki-zom-100-bucket-list-of-the-dead
|
LarryAIDraw/AmauAkoPencilDressV1
|
LarryAIDraw
| 2023-07-18T21:02:03Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-18T20:50:54Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/111222/amau-ako-blue-archive-ark-uniform-ver
|
LarryAIDraw/Mini_Yaemori_V1
|
LarryAIDraw
| 2023-07-18T21:01:22Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-18T20:50:12Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/110626/mini-yaemori-or-rent-a-girlfriend-or-kanokari-or
|
LarryAIDraw/NagasakiSoyo
|
LarryAIDraw
| 2023-07-18T21:01:06Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-18T20:49:19Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/110267/nagasaki-soyo-bang-dream-its-mygo
|
LarryAIDraw/StarRail_Qingque_AP_v1
|
LarryAIDraw
| 2023-07-18T20:59:43Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-18T20:48:08Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/73792/qingquehonkai-star-rail
|
LarryAIDraw/IchinoseLora-15
|
LarryAIDraw
| 2023-07-18T20:59:04Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-18T20:46:45Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/111573/ichinose-honami-classroom-of-the-elite-lora
|
LarryAIDraw/Touma_Kazusa_20230716214229
|
LarryAIDraw
| 2023-07-18T20:58:54Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-18T20:46:22Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/111596/touma-kazusawhite-album-2-2-lora
|
robinhad/open_llama_3b_uk
|
robinhad
| 2023-07-18T20:57:15Z | 9 | 0 |
peft
|
[
"peft",
"text-generation",
"uk",
"dataset:robinhad/databricks-dolly-15k-uk",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-07-18T20:29:33Z |
---
license: apache-2.0
datasets:
- robinhad/databricks-dolly-15k-uk
language:
- uk
library_name: peft
pipeline_tag: text-generation
---
This is a release of Open LLaMA, tuned for the Ukrainian language.
Currently it contains adapter weights, possibly subject to change in the future.
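A minimal loading sketch (not part of the original card): the base checkpoint and the prompt format below are assumptions inferred from the repo name and the Dolly-style dataset, so check the adapter's config for the exact base model.
```python
# Assumption: the adapter sits on top of OpenLLaMA 3B; verify via adapter_config.json.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "openlm-research/open_llama_3b"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "robinhad/open_llama_3b_uk")

prompt = "Інструкція: Поясни, що таке мовна модель.\nВідповідь:"  # placeholder prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```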
|
canertol/FastSpec
|
canertol
| 2023-07-18T20:51:25Z | 0 | 0 | null |
[
"arxiv:2006.14147",
"license:apache-2.0",
"region:us"
] | null | 2023-03-16T15:55:39Z |
---
license: apache-2.0
---
GitHub repo: https://github.com/vernamlab/FastSpec
Paper: https://arxiv.org/abs/2006.14147
|
nolanaatama/lzymx
|
nolanaatama
| 2023-07-18T20:44:41Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-21T22:12:14Z |
---
license: creativeml-openrail-m
---
|
gpt4life/alpagasus-13b
|
gpt4life
| 2023-07-18T20:43:12Z | 6 | 6 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2307.08701",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-18T20:27:52Z |
---
inference: false
license: other
---
# AlpaGasus-13B Model Card
## Model Details
This is an **unofficial** implementation of AlpaGasus-13B, which is a chat assistant trained by fine-tuning LLaMA on a Claude-filtered Alpaca dataset with around 5K triplets.
- **Developed by:** [gpt4life](https://github.com/gpt4life)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA-13B](https://huggingface.co/elinas/llama-13b-hf-transformers-4.29).
Please see the original LLaMA [license](https://github.com/facebookresearch/llama/blob/main/LICENSE) before using this model.
### Model Sources
- **Repository:** https://github.com/gpt4life/alpagasus
- **Paper:** https://arxiv.org/pdf/2307.08701.pdf
## Training Details
AlpaGasus-13B is fine-tuned from LLaMA-13B with supervised instruction fine-tuning on the filtered [Alpaca dataset](https://github.com/gpt4life/alpagasus/blob/main/rating/alpaca_filtered_data.json).
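## Usage
Not part of the original card - a generic `transformers` loading sketch; the Alpaca-style prompt template below is an assumption, not the documented format for this checkpoint.
```python
# Illustrative only - generic causal-LM loading; the prompt template is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt4life/alpagasus-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what instruction tuning is.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```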
|
zpattdev/taxi-v3
|
zpattdev
| 2023-07-18T20:38:27Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-18T20:38:24Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.67
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your setup

# load_from_hub is the helper function provided in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="zpattdev/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ailabturkiye/melihkalkan
|
ailabturkiye
| 2023-07-18T20:36:55Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-18T16:54:35Z |
---
license: openrail
language:
- tr
tags:
- music
---
A model of Melih Kalkan, the talented 14-year-old actor who passed away in 2022. It was made by training 250 epochs on a 2-minute dataset. It has been made private to prevent malicious use.
|
sawradip/openai-whisper-large-v2-LORA-colab
|
sawradip
| 2023-07-18T20:36:15Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-18T20:36:11Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
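Not part of the original card - a hedged sketch of attaching this adapter to Whisper large-v2 with PEFT. The base checkpoint is inferred from the repo name, so verify it against the adapter config before relying on it:
```python
# Assumption: the LoRA was trained on top of openai/whisper-large-v2 (inferred from the repo name).
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

base_id = "openai/whisper-large-v2"
processor = WhisperProcessor.from_pretrained(base_id)
base = WhisperForConditionalGeneration.from_pretrained(base_id, load_in_8bit=True, device_map="auto")
model = PeftModel.from_pretrained(base, "sawradip/openai-whisper-large-v2-LORA-colab")
model.eval()  # transcription then follows the usual Whisper processor -> generate workflow
```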
### Framework versions
- PEFT 0.5.0.dev0
|
Auracle7/XLMRoberta-finetuned-TyDIQA-Ben-Tel
|
Auracle7
| 2023-07-18T20:34:12Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-18T18:11:32Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: XLMRoberta-finetuned-TyDIQA-Ben-Tel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLMRoberta-finetuned-TyDIQA-Ben-Tel
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
greenw0lf/wav2vec2-large-xls-r-300m-frisian-cv-8
|
greenw0lf
| 2023-07-18T20:31:38Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-13T08:43:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_8_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-frisian-cv-8
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_8_0
type: common_voice_8_0
config: fy-NL
split: validation
args: fy-NL
metrics:
- name: Wer
type: wer
value: 0.07238251678331667
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_8_0
type: common_voice_8_0
config: fy-NL
split: test
args: fy-NL
metrics:
- name: Wer
type: wer
value: 0.07103627691862986
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-frisian-cv-8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_8_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0707
- Wer: 0.0724
And on the test set:
- Wer: 0.0710
## Model description
This model has been developed for my Master's thesis in "Voice Technology" at Rijksuniversiteit Groningen - Campus Fryslân. It corresponds to experiment 6, where
I use all of the validated data (~50 hours) as the training set, except for the test and evaluation sets (~4.5 hours each).
The training data adds up to 41 hours of Frisian speech. This differs from experiment 2 in that I fine-tune the 300M/0.3B-parameter version of XLS-R.
## Intended uses & limitations
The intended use is for recognizing Frisian speech.
Limitations include no LM rescoring and using version 8.0 of Common Voice instead of 13.0.
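For transcription, a minimal `transformers` pipeline sketch (not part of the original card; the audio path is a placeholder):
```python
# Minimal inference sketch - the audio file name is a placeholder.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="greenw0lf/wav2vec2-large-xls-r-300m-frisian-cv-8",
)
print(asr("frisian_sample.wav")["text"])
```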
## Training and evaluation data
The evaluation split used is the one available in the Common Voice 8.0 Frisian subset. The train split corresponds to all of the validated data except for the recordings found in the evaluation and test splits.
## Training procedure
The script used for training this model can be found in this GitHub repository: [link](https://github.com/greenw0lf/MSc-VT-Thesis/).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 14.7268 | 0.43 | 400 | 8.7389 | 1.0 |
| 5.3377 | 0.86 | 800 | 3.7016 | 1.0 |
| 3.343 | 1.29 | 1200 | 3.0984 | 1.0 |
| 3.0306 | 1.71 | 1600 | 2.9643 | 1.0 |
| 2.9511 | 2.14 | 2000 | 2.9273 | 1.0 |
| 2.9078 | 2.57 | 2400 | 2.8202 | 1.0 |
| 2.4965 | 3.0 | 2800 | 1.3805 | 0.8888 |
| 1.5378 | 3.43 | 3200 | 0.6556 | 0.5720 |
| 1.119 | 3.86 | 3600 | 0.4260 | 0.4077 |
| 0.9159 | 4.29 | 4000 | 0.3457 | 0.3322 |
| 0.8037 | 4.72 | 4400 | 0.2765 | 0.2850 |
| 0.7411 | 5.14 | 4800 | 0.2447 | 0.2473 |
| 0.6767 | 5.57 | 5200 | 0.2176 | 0.2234 |
| 0.6296 | 6.0 | 5600 | 0.1996 | 0.2078 |
| 0.6165 | 6.43 | 6000 | 0.1891 | 0.1977 |
| 0.5856 | 6.86 | 6400 | 0.1763 | 0.1855 |
| 0.5674 | 7.29 | 6800 | 0.1708 | 0.1797 |
| 0.5399 | 7.72 | 7200 | 0.1593 | 0.1694 |
| 0.5195 | 8.15 | 7600 | 0.1551 | 0.1660 |
| 0.4973 | 8.57 | 8000 | 0.1509 | 0.1583 |
| 0.4907 | 9.0 | 8400 | 0.1480 | 0.1525 |
| 0.4681 | 9.43 | 8800 | 0.1389 | 0.1494 |
| 0.4513 | 9.86 | 9200 | 0.1368 | 0.1414 |
| 0.4486 | 10.29 | 9600 | 0.1294 | 0.1390 |
| 0.4381 | 10.72 | 10000 | 0.1262 | 0.1354 |
| 0.443 | 11.15 | 10400 | 0.1234 | 0.1313 |
| 0.4182 | 11.58 | 10800 | 0.1196 | 0.1294 |
| 0.4036 | 12.0 | 11200 | 0.1194 | 0.1259 |
| 0.4027 | 12.43 | 11600 | 0.1170 | 0.1226 |
| 0.4066 | 12.86 | 12000 | 0.1156 | 0.1224 |
| 0.3885 | 13.29 | 12400 | 0.1136 | 0.1174 |
| 0.3859 | 13.72 | 12800 | 0.1121 | 0.1146 |
| 0.3812 | 14.15 | 13200 | 0.1097 | 0.1141 |
| 0.3774 | 14.58 | 13600 | 0.1059 | 0.1130 |
| 0.3678 | 15.01 | 14000 | 0.1058 | 0.1096 |
| 0.3586 | 15.43 | 14400 | 0.1026 | 0.1099 |
| 0.3612 | 15.86 | 14800 | 0.1010 | 0.1076 |
| 0.3626 | 16.29 | 15200 | 0.0993 | 0.1068 |
| 0.353 | 16.72 | 15600 | 0.0974 | 0.1046 |
| 0.3564 | 17.15 | 16000 | 0.0986 | 0.1037 |
| 0.3447 | 17.58 | 16400 | 0.0977 | 0.1041 |
| 0.3454 | 18.01 | 16800 | 0.0945 | 0.1023 |
| 0.3338 | 18.44 | 17200 | 0.0904 | 0.0996 |
| 0.3359 | 18.86 | 17600 | 0.0950 | 0.1002 |
| 0.3179 | 19.29 | 18000 | 0.0911 | 0.0977 |
| 0.3202 | 19.72 | 18400 | 0.0906 | 0.0979 |
| 0.3317 | 20.15 | 18800 | 0.0894 | 0.0963 |
| 0.3187 | 20.58 | 19200 | 0.0878 | 0.0938 |
| 0.3075 | 21.01 | 19600 | 0.0893 | 0.0937 |
| 0.3032 | 21.44 | 20000 | 0.0872 | 0.0923 |
| 0.3048 | 21.86 | 20400 | 0.0848 | 0.0921 |
| 0.3045 | 22.29 | 20800 | 0.0860 | 0.0887 |
| 0.316 | 22.72 | 21200 | 0.0841 | 0.0896 |
| 0.2986 | 23.15 | 21600 | 0.0840 | 0.0876 |
| 0.294 | 23.58 | 22000 | 0.0824 | 0.0862 |
| 0.313 | 24.01 | 22400 | 0.0814 | 0.0855 |
| 0.2864 | 24.44 | 22800 | 0.0816 | 0.0861 |
| 0.2927 | 24.87 | 23200 | 0.0807 | 0.0875 |
| 0.294 | 25.29 | 23600 | 0.0829 | 0.0826 |
| 0.2834 | 25.72 | 24000 | 0.0794 | 0.0823 |
| 0.2852 | 26.15 | 24400 | 0.0781 | 0.0815 |
| 0.2823 | 26.58 | 24800 | 0.0781 | 0.0821 |
| 0.2835 | 27.01 | 25200 | 0.0788 | 0.0826 |
| 0.2763 | 27.44 | 25600 | 0.0789 | 0.0823 |
| 0.2845 | 27.87 | 26000 | 0.0767 | 0.0803 |
| 0.2777 | 28.3 | 26400 | 0.0775 | 0.0809 |
| 0.275 | 28.72 | 26800 | 0.0758 | 0.0794 |
| 0.2707 | 29.15 | 27200 | 0.0745 | 0.0790 |
| 0.2734 | 29.58 | 27600 | 0.0765 | 0.0797 |
| 0.2716 | 30.01 | 28000 | 0.0746 | 0.0780 |
| 0.2626 | 30.44 | 28400 | 0.0756 | 0.0776 |
| 0.2671 | 30.87 | 28800 | 0.0742 | 0.0763 |
| 0.2592 | 31.3 | 29200 | 0.0730 | 0.0771 |
| 0.2685 | 31.73 | 29600 | 0.0733 | 0.0760 |
| 0.2727 | 32.15 | 30000 | 0.0738 | 0.0758 |
| 0.2564 | 32.58 | 30400 | 0.0731 | 0.0763 |
| 0.2528 | 33.01 | 30800 | 0.0730 | 0.0758 |
| 0.2573 | 33.44 | 31200 | 0.0717 | 0.0746 |
| 0.2597 | 33.87 | 31600 | 0.0718 | 0.0760 |
| 0.2511 | 34.3 | 32000 | 0.0737 | 0.0750 |
| 0.2551 | 34.73 | 32400 | 0.0732 | 0.0758 |
| 0.26 | 35.16 | 32800 | 0.0724 | 0.0746 |
| 0.2563 | 35.58 | 33200 | 0.0717 | 0.0730 |
| 0.2559 | 36.01 | 33600 | 0.0707 | 0.0734 |
| 0.2499 | 36.44 | 34000 | 0.0721 | 0.0729 |
| 0.252 | 36.87 | 34400 | 0.0716 | 0.0723 |
| 0.2448 | 37.3 | 34800 | 0.0711 | 0.0725 |
| 0.248 | 37.73 | 35200 | 0.0710 | 0.0727 |
| 0.2568 | 38.16 | 35600 | 0.0710 | 0.0720 |
| 0.2471 | 38.59 | 36000 | 0.0707 | 0.0725 |
| 0.2464 | 39.01 | 36400 | 0.0705 | 0.0719 |
| 0.2477 | 39.44 | 36800 | 0.0706 | 0.0727 |
| 0.2482 | 39.87 | 37200 | 0.0707 | 0.0724 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
jakelcoop/ppo-CartPole-v1
|
jakelcoop
| 2023-07-18T20:20:29Z | 0 | 0 | null |
[
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-18T20:18:43Z |
---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'run_name': 'ppo_250k',
'gym_id': 'CartPole-v1',
'num_envs': 5,
'num_steps': 128,
'total_timesteps': 250000,
'seed': 1,
'learning_rate': 0.001,
'anneal_lr': True,
'torch_deterministic': True,
'cuda': True,
'capture_video': True,
'gae': True,
'gamma': 0.99,
'gae_lambda': 0.95,
'num_minibatches': 4,
'update_epochs': 4,
'norm_adv': True,
'clip_coef': 0.2,
'clip_vloss': True,
'ent_coef': 0.01,
'vf_coef': 0.5,
'max_grad_norm': 0.5,
'repo_id': 'jakelcoop/ppo-CartPole-v1',
'env_id': 'CartPole-v1',
'batch_size': 640,
'minibatch_size': 160}
```
|
ikaro79/distilbert-base-uncased-finetuned-test
|
ikaro79
| 2023-07-18T20:13:02Z | 70 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-18T20:00:44Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ikaro79/distilbert-base-uncased-finetuned-test
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ikaro79/distilbert-base-uncased-finetuned-test
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2768
- Validation Loss: 0.2163
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -999, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.2768 | 0.2163 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.13.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jamesdborin/ct2-int8-flan-xl
|
jamesdborin
| 2023-07-18T20:06:18Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:svakulenk0/qrecc",
"dataset:taskmaster2",
"dataset:djaym7/wiki_dialog",
"dataset:deepmind/code_contests",
"dataset:lambada",
"dataset:gsm8k",
"dataset:aqua_rat",
"dataset:esnli",
"dataset:quasc",
"dataset:qed",
"arxiv:2210.11416",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-18T20:02:42Z |
---
language:
- en
- fr
- ro
- de
- multilingual
widget:
- text: "Translate to German: My name is Arthur"
example_title: "Translation"
- text: "Please answer to the following question. Who is going to be the next Ballon d'or?"
example_title: "Question Answering"
- text: "Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering."
example_title: "Logical reasoning"
- text: "Please answer the following question. What is the boiling point of Nitrogen?"
example_title: "Scientific knowledge"
- text: "Answer the following yes/no question. Can you write a whole Haiku in a single tweet?"
example_title: "Yes/no question"
- text: "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?"
example_title: "Reasoning task"
- text: "Q: ( False or not False or False ) is? A: Let's think step by step"
example_title: "Boolean Expressions"
- text: "The square root of x is the cube root of y. What is y to the power of 2, if x = 4?"
example_title: "Math reasoning"
- text: "Premise: At my age you will probably have learnt one lesson. Hypothesis: It's not certain how many lessons you'll learn by your thirties. Does the premise entail the hypothesis?"
example_title: "Premise and hypothesis"
tags:
- text2text-generation
datasets:
- svakulenk0/qrecc
- taskmaster2
- djaym7/wiki_dialog
- deepmind/code_contests
- lambada
- gsm8k
- aqua_rat
- esnli
- quasc
- qed
license: apache-2.0
---
# Model Card for FLAN-T5 XL
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/flan2_architecture.jpg"
alt="drawing" width="600"/>
# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
5. [Training Details](#training-details)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Citation](#citation)
# TL;DR
If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1000 additional tasks covering also more languages.
As mentioned in the first few lines of the abstract :
> Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints,1 which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy pasted from the [T5 model card](https://huggingface.co/t5-large).
# Model Details
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English, Spanish, Japanese, Persian, Hindi, French, Chinese, Bengali, Gujarati, German, Telugu, Italian, Arabic, Polish, Tamil, Marathi, Malayalam, Oriya, Panjabi, Portuguese, Urdu, Galician, Hebrew, Korean, Catalan, Thai, Dutch, Indonesian, Vietnamese, Bulgarian, Filipino, Central Khmer, Lao, Turkish, Russian, Croatian, Swedish, Yoruba, Kurdish, Burmese, Malay, Czech, Finnish, Somali, Tagalog, Swahili, Sinhala, Kannada, Zhuang, Igbo, Xhosa, Romanian, Haitian, Estonian, Slovak, Lithuanian, Greek, Nepali, Assamese, Norwegian
- **License:** Apache 2.0
- **Related Models:** [All FLAN-T5 Checkpoints](https://huggingface.co/models?search=flan-t5)
- **Original Checkpoints:** [All Original FLAN-T5 Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints)
- **Resources for more information:**
- [Research paper](https://arxiv.org/pdf/2210.11416.pdf)
- [GitHub Repo](https://github.com/google-research/t5x)
- [Hugging Face FLAN-T5 Docs (Similar to T5) ](https://huggingface.co/docs/transformers/model_doc/t5)
# Usage
Find below some example scripts on how to use the model in `transformers`:
## Using the Pytorch model
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl")
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl", device_map="auto")
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU using different precisions
#### FP16
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl", device_map="auto", torch_dtype=torch.float16)
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
#### INT8
<details>
<summary> Click to expand </summary>
```python
# pip install bitsandbytes accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl", device_map="auto", load_in_8bit=True)
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
# Uses
## Direct Use and Downstream Use
The authors write in [the original paper's model card](https://arxiv.org/pdf/2210.11416.pdf) that:
> The primary use is research on language models, including: research on zero-shot NLP tasks and in-context few-shot learning NLP tasks, such as reasoning, and question answering; advancing fairness and safety research, and understanding limitations of current large language models
See the [research paper](https://arxiv.org/pdf/2210.11416.pdf) for further details.
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
The information in this section is copied from the model's [official model card](https://arxiv.org/pdf/2210.11416.pdf):
> Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.
## Ethical considerations and risks
> Flan-T5 is fine-tuned on a large corpus of text data that was not filtered for explicit content or assessed for existing biases. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
## Known Limitations
> Flan-T5 has not been tested in real world applications.
## Sensitive Use:
> Flan-T5 should not be applied for any unacceptable use cases, e.g., generation of abusive speech.
# Training Details
## Training Data
The model was trained on a mixture of tasks, that includes the tasks described in the table below (from the original paper, figure 2):

## Training Procedure
According to the model card from the [original paper](https://arxiv.org/pdf/2210.11416.pdf):
> These models are based on pretrained T5 (Raffel et al., 2020) and fine-tuned with instructions for better zero-shot and few-shot performance. There is one fine-tuned Flan model per T5 model size.
The model has been trained on TPU v3 or TPU v4 pods, using [`t5x`](https://github.com/google-research/t5x) codebase together with [`jax`](https://github.com/google/jax).
# Evaluation
## Testing Data, Factors & Metrics
The authors evaluated the model on various tasks covering several languages (1836 in total). See the table below for some quantitative evaluation:

For full details, please check the [research paper](https://arxiv.org/pdf/2210.11416.pdf).
## Results
For full results for FLAN-T5-XL, see the [research paper](https://arxiv.org/pdf/2210.11416.pdf), Table 3.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods - TPU v3 or TPU v4 | Number of chips ≥ 4.
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@misc{https://doi.org/10.48550/arxiv.2210.11416,
doi = {10.48550/ARXIV.2210.11416},
url = {https://arxiv.org/abs/2210.11416},
author = {Chung, Hyung Won and Hou, Le and Longpre, Shayne and Zoph, Barret and Tay, Yi and Fedus, William and Li, Eric and Wang, Xuezhi and Dehghani, Mostafa and Brahma, Siddhartha and Webson, Albert and Gu, Shixiang Shane and Dai, Zhuyun and Suzgun, Mirac and Chen, Xinyun and Chowdhery, Aakanksha and Narang, Sharan and Mishra, Gaurav and Yu, Adams and Zhao, Vincent and Huang, Yanping and Dai, Andrew and Yu, Hongkun and Petrov, Slav and Chi, Ed H. and Dean, Jeff and Devlin, Jacob and Roberts, Adam and Zhou, Denny and Le, Quoc V. and Wei, Jason},
keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Scaling Instruction-Finetuned Language Models},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
jliu596/dqn-Atari-SpaceInvadersNoFrameskip-v4
|
jliu596
| 2023-07-18T20:05:02Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-18T20:01:32Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 256.00 +/- 169.38
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jliu596 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jliu596 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jliu596
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('buffer_size', 1500),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.05),
('exploration_fraction', 1),
('frame_stack', 4),
('gradient_steps', 2),
('learning_rate', 0.0001),
('learning_starts', 1000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 100),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
T-Systems-onsite/bert-german-dbmdz-uncased-sentence-stsb
|
T-Systems-onsite
| 2023-07-18T20:03:16Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"feature-extraction",
"de",
"license:mit",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: de
license: mit
---
# bert-german-dbmdz-uncased-sentence-stsb
**This model is outdated!**
The new [T-Systems-onsite/cross-en-de-roberta-sentence-transformer](https://huggingface.co/T-Systems-onsite/cross-en-de-roberta-sentence-transformer) model is better for the German language. It is also the current best model for the English language and works cross-lingually. Please consider using that model.
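For reference, a minimal sketch of how the recommended replacement model might be used with the `sentence-transformers` library (the package and usage below are assumptions, not part of this card):
```python
# Minimal sketch; assumes `pip install sentence-transformers`.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("T-Systems-onsite/cross-en-de-roberta-sentence-transformer")

# Cross-lingual sentence embeddings for German and English input
sentences = ["Das ist ein Beispielsatz.", "This is an example sentence."]
embeddings = model.encode(sentences)
print(embeddings.shape)  # (2, embedding_dim)
```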
|
bk6000/Reinforce-CartPole-v1
|
bk6000
| 2023-07-18T20:02:21Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-18T20:02:12Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
addiekline/luolabdemo
|
addiekline
| 2023-07-18T20:01:00Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-18T19:37:40Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Wyzard1004/Reinforce-CartPoleV1
|
Wyzard1004
| 2023-07-18T19:59:47Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-18T03:16:29Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPoleV1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
eerichmond33/sourceformer-epoch10
|
eerichmond33
| 2023-07-18T19:51:57Z | 93 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-18T16:27:34Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: v9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# v9
This model is a fine-tuned version of [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3262
- Accuracy: 0.3995
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 8
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 70
- num_epochs: 10.0
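For illustration only, a sketch of how these reported hyperparameters might map onto Hugging Face `TrainingArguments` (the original training script is not included in this card, so this is a reconstruction, not the actual configuration used):
```python
# Hypothetical reconstruction of the hyperparameters listed above; illustrative only.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="v9",
    learning_rate=1e-5,               # learning_rate
    per_device_train_batch_size=4,    # train_batch_size per device (2 GPUs -> total 8)
    per_device_eval_batch_size=1,     # eval_batch_size per device (2 GPUs -> total 2)
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=70,
    num_train_epochs=10.0,
)
```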
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7485 | 1.0 | 72 | 2.7852 | 0.4448 |
| 2.6279 | 2.0 | 144 | 2.7832 | 0.4450 |
| 2.5097 | 3.0 | 216 | 2.7988 | 0.4425 |
| 2.3899 | 4.0 | 288 | 2.8203 | 0.4403 |
| 2.2636 | 5.0 | 360 | 2.8594 | 0.4366 |
| 2.1351 | 6.0 | 432 | 2.9141 | 0.4307 |
| 1.99 | 7.0 | 504 | 2.9844 | 0.4244 |
| 1.8299 | 8.0 | 576 | 3.0723 | 0.4173 |
| 1.6524 | 9.0 | 648 | 3.1855 | 0.4087 |
| 1.4676 | 10.0 | 720 | 3.3262 | 0.3995 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Minggu/sarahviloid2
|
Minggu
| 2023-07-18T19:51:28Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-18T19:44:23Z |
---
license: creativeml-openrail-m
---
|
jamesdborin/ct2-int8-redpajama-7b-chat
|
jamesdborin
| 2023-07-18T19:50:27Z | 4 | 0 |
transformers
|
[
"transformers",
"gpt_neox",
"text-generation",
"en",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:OpenAssistant/oasst1",
"dataset:databricks/databricks-dolly-15k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-18T19:42:45Z |
---
license: apache-2.0
language:
- en
datasets:
- togethercomputer/RedPajama-Data-1T
- OpenAssistant/oasst1
- databricks/databricks-dolly-15k
widget:
- text: "<human>: Write an email to my friends inviting them to come to my home on Friday for a dinner party, bring their own food to share.\n<bot>:"
example_title: "Email Writing"
- text: "<human>: Create a list of things to do in San Francisco\n<bot>:"
example_title: "Brainstorming"
inference:
parameters:
temperature: 0.7
top_p: 0.7
top_k: 50
max_new_tokens: 128
---
# RedPajama-INCITE-7B-Chat
RedPajama-INCITE-7B-Chat was developed by Together and leaders from the open-source AI community including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), Stanford Hazy Research research group and LAION.
It is fine-tuned on OASST1 and Dolly2 to enhance chatting ability.
- Base Model: [RedPajama-INCITE-7B-Base](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Base)
- Instruction-tuned Version: [RedPajama-INCITE-7B-Instruct](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Instruct)
- Chat Version: [RedPajama-INCITE-7B-Chat](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Chat)
## Model Details
- **Developed by**: Together Computer.
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 6.9B parameter pretrained language model.
# Quick Start
Please note that the model requires `transformers` version >= 4.25.1.
To prompt the chat model, use the following format:
```
<human>: [Instruction]
<bot>:
```
## GPU Inference
This requires a GPU with 16GB memory.
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Chat")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Chat", torch_dtype=torch.float16)
model = model.to('cuda:0')
# infer
prompt = "<human>: Who is Alan Turing?\n<bot>:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Alan Mathison Turing (23 June 1912 7 June 1954) was an English computer scientist, mathematician, logician, cryptanalyst, philosopher, mathematician, and theoretical biologist.
"""
```
## GPU Inference in Int8
This requires a GPU with 12GB memory.
To run inference with int8, please ensure you have installed accelerate and bitsandbytes. You can install them with the following command:
```bash
pip install accelerate
pip install bitsandbytes
```
Then you can run inference with int8 as follows:
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Chat")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Chat", device_map='auto', torch_dtype=torch.float16, load_in_8bit=True)
# infer
prompt = "<human>: Who is Alan Turing?\n<bot>:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Alan Mathison Turing (23 June 1912 – 7 June 1954) was an English computer scientist, mathematician, logician, cryptanalyst, philosopher, and theoretical biologist.
"""
```
## CPU Inference
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Chat")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Chat", torch_dtype=torch.bfloat16)
# infer
prompt = "<human>: Who is Alan Turing?\n<bot>:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Alan Mathison Turing, OBE, FRS, (23 June 1912 – 7 June 1954) was an English computer scientist, mathematician, logician, cryptanalyst, philosopher, and theoretical biologist.
"""
```
Please note that since `LayerNormKernelImpl` is not implemented in fp16 for CPU, we use `bfloat16` for CPU inference.
# Uses
## Direct Use
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
It is the responsibility of the end user to ensure that the model is used in a responsible and ethical manner.
#### Out-of-Scope Use
`RedPajama-INCITE-7B-Chat` is a language model and may not perform well for other use cases outside of its intended scope.
For example, it may not be suitable for use in safety-critical applications or for making decisions that have a significant impact on individuals or society.
It is important to consider the limitations of the model and to only use it for its intended purpose.
#### Misuse and Malicious Use
`RedPajama-INCITE-7B-Chat` is designed for language modeling.
Misuse of the model, such as using it to engage in illegal or unethical activities, is strictly prohibited and goes against the principles of the project.
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating fake news, misinformation, or propaganda
- Promoting hate speech, discrimination, or violence against individuals or groups
- Impersonating individuals or organizations without their consent
- Engaging in cyberbullying or harassment
- Defamatory content
- Spamming or scamming
- Sharing confidential or sensitive information without proper authorization
- Violating the terms of use of the model or the data used to train it
- Creating automated bots for malicious purposes such as spreading malware, phishing scams, or spamming
## Limitations
`RedPajama-INCITE-7B-Chat`, like other language models, has limitations that should be taken into consideration.
For example, the model may not always provide accurate or relevant answers, particularly for questions that are complex, ambiguous, or outside of its training data.
We therefore welcome contributions from individuals and organizations, and encourage collaboration towards creating a more robust and inclusive chatbot.
## Training
**Training Data**
Please refer to [togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
**Training Procedure**
- **Hardware:** 8 A100
- **Optimizer:** Adam
- **Gradient Accumulations**: 1
- **Num of Tokens:** 79M tokens
- **Learning rate:** 1e-5
## Community
Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4)
|
crumbly/gpt2-linear-xl
|
crumbly
| 2023-07-18T19:48:55Z | 153 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2l",
"text-generation",
"gpt2",
"exbert",
"custom_code",
"en",
"license:mit",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-07-17T14:19:27Z |
---
license: mit
language:
- en
tags:
- gpt2
- exbert
inference: false
---
# GPT2-Linear-XL
A conversion of [gpt2-xl](https://hf.co/gpt2-xl) that uses linear layers instead of convolutional layers. This is not an official OpenAI project.
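As a rough illustration of what such a conversion involves (a sketch only, not the script used to build this repository): the Hugging Face GPT-2 `Conv1D` layer stores its weight as `(in_features, out_features)`, so an equivalent `nn.Linear` can be obtained by transposing that matrix.
```python
# Illustrative sketch; assumes the standard Hugging Face Conv1D layout.
import torch.nn as nn
from transformers.pytorch_utils import Conv1D

def conv1d_to_linear(conv: Conv1D) -> nn.Linear:
    in_features, out_features = conv.weight.shape
    linear = nn.Linear(in_features, out_features)
    # Conv1D computes x @ weight + bias; nn.Linear computes x @ weight.T + bias,
    # so the weight matrix is transposed during conversion.
    linear.weight.data = conv.weight.data.t().contiguous()
    linear.bias.data = conv.bias.data.clone()
    return linear
```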
> Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
> GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
> More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
> This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
- Main model: [crumbly/gpt2-linear-xl](https://hf.co/crumbly/gpt2-linear-xl)
- Sharded model: [crumbly/gpt2-linear-xl-sharded](https://hf.co/crumbly/gpt2-linear-xl-sharded)
- Sharded + Brain-float 16bit model: [crumbly/gpt2-linear-xl-sharded-bf16](https://hf.co/crumbly/gpt2-linear-xl-sharded-bf16)
Config:
```
{
"n_embd": 1600,
"n_head": 25,
"n_layer": 48,
"n_positions": 1024,
}
```
### Usage
Inference on GPU with 4-bit quantization:
```
%pip install -qq transformers accelerate bitsandbytes
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers import BitsAndBytesConfig
import torch
model_id = "crumbly/gpt2-linear-xl-sharded-bf16"
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
trust_remote_code=True,
device_map={"":0},
quantization_config=bnb_config
)
```
```python
inputs = tokenizer("Once upon a time,", return_tensors='pt')
inputs = {
k:v.cuda() for k,v in inputs.items()
}
outputs = model.generate(
**inputs,
max_new_tokens=32,
temperature=0.7,
do_sample=True
)
tokenizer.decode(outputs[0])
```
TODO
- ~~test to see if model works with .from_pretrained~~ <br>
- ~~test fp32, fp16, 8 and 4 bit~~
- ~~shard model to max 1gb for use in even lower vram settings~~ <br>
- safetensors <br>
- ~~upload bf16 version of model~~ <br>
- upload 8bit model and 4bit model <br>
- ~~convert other base gpt2 models~~
- open orca QLoRA on XL
- ReLoRA continued pretraining on RefinedWeb or RedPajama to reach 1T tokens
|
Devops-hestabit/otherhalf-2.7b-onnx
|
Devops-hestabit
| 2023-07-18T19:48:46Z | 4 | 0 |
transformers
|
[
"transformers",
"onnx",
"gpt_neo",
"text-generation",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-18T18:40:26Z |
---
license: creativeml-openrail-m
---
|
Jinmane/ambientmix
|
Jinmane
| 2023-07-18T19:41:13Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-18T19:36:35Z |
---
license: creativeml-openrail-m
---
|
jamesdborin/ct2-int8-redpajama-7b-instruct
|
jamesdborin
| 2023-07-18T19:40:16Z | 4 | 0 |
transformers
|
[
"transformers",
"gpt_neox",
"text-generation",
"en",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:togethercomputer/RedPajama-Data-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-18T19:32:36Z |
---
license: apache-2.0
language:
- en
datasets:
- togethercomputer/RedPajama-Data-1T
- togethercomputer/RedPajama-Data-Instruct
widget:
- text: |-
Label the tweets as either 'positive', 'negative', 'mixed', or 'neutral':
Tweet: I can say that there isn't anything I would change.
Label: positive
Tweet: I'm not sure about this.
Label: neutral
Tweet: I liked some parts but I didn't like other parts.
Label: mixed
Tweet: I think the background image could have been better.
Label: negative
Tweet: I really like it.
Label:
example_title: Sentiment Analysis
- text: |-
Please answer the following question:
Question: What is the capital of Canada?
Answer: Ottawa
Question: What is the currency of Switzerland?
Answer: Swiss franc
Question: In which country is Wisconsin located?
Answer:
example_title: Question Answering
- text: >-
Given a news article, classify its topic.
Possible labels: 1. World 2. Sports 3. Business 4. Sci/Tech
Article: A nearby star thought to harbor comets and asteroids now appears to
be home to planets, too.
Label: Sci/Tech
Article: Soaring crude prices plus worries about the economy and the outlook
for earnings are expected to hang over the stock market next week during the
depth of the summer doldrums.
Label: Business
Article: Murtagh a stickler for success Northeastern field hockey coach
Cheryl Murtagh doesn't want the glare of the spotlight that shines on her to
detract from a team that has been the America East champion for the past
three years and has been to the NCAA tournament 13 times.
    Label:
example_title: Topic Classification
- text: |-
Paraphrase the given sentence into a different sentence.
Input: Can you recommend some upscale restaurants in New York?
Output: What upscale restaurants do you recommend in New York?
Input: What are the famous places we should not miss in Paris?
Output: Recommend some of the best places to visit in Paris?
Input: Could you recommend some hotels that have cheap price in Zurich?
Output:
example_title: Paraphrasing
- text: >-
Given a review from Amazon's food products, the task is to generate a short
summary of the given review in the input.
Input: I have bought several of the Vitality canned dog food products and
have found them all to be of good quality. The product looks more like a
stew than a processed meat and it smells better. My Labrador is finicky and
she appreciates this product better than most.
Output: Good Quality Dog Food
Input: Product arrived labeled as Jumbo Salted Peanuts...the peanuts were
actually small sized unsalted. Not sure if this was an error or if the
vendor intended to represent the product as 'Jumbo'.
Output: Not as Advertised
Input: My toddler loves this game to a point where he asks for it. That's a
big thing for me. Secondly, no glitching unlike one of their competitors
(PlayShifu). Any tech I don’t have to reach out to support for help is a
good tech for me. I even enjoy some of the games and activities in this.
Overall, this is a product that shows that the developers took their time
and made sure people would not be asking for refund. I’ve become bias
regarding this product and honestly I look forward to buying more of this
company’s stuff. Please keep up the great work.
Output:
example_title: Text Summarization
- text: |-
Identify which sense of a word is meant in a given context.
Context: The river overflowed the bank.
Word: bank
Sense: river bank
Context: A mouse takes much more room than a trackball.
Word: mouse
Sense: computer mouse
Context: The bank will not be accepting cash on Saturdays.
Word: bank
Sense: commercial (finance) banks
Context: Bill killed the project
Word: kill
Sense:
example_title: Word Sense Disambiguation
- text: >-
Given a pair of sentences, choose whether the two sentences agree
(entailment)/disagree (contradiction) with each other.
Possible labels: 1. entailment 2. contradiction
Sentence 1: The skier was on the edge of the ramp. Sentence 2: The skier was
dressed in winter clothes.
Label: entailment
Sentence 1: The boy skated down the staircase railing. Sentence 2: The boy
is a newbie skater.
Label: contradiction
Sentence 1: Two middle-aged people stand by a golf hole. Sentence 2: A
couple riding in a golf cart.
Label:
example_title: Natural Language Inference
inference:
parameters:
temperature: 0.7
top_p: 0.7
top_k: 50
max_new_tokens: 128
---
# RedPajama-INCITE-7B-Instruct
RedPajama-INCITE-7B-Instruct was developed by Together and leaders from the open-source AI community including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), Stanford Hazy Research research group and LAION.
The model was fine-tuned for few-shot applications on the data of [GPT-JT](https://huggingface.co/togethercomputer/GPT-JT-6B-v1), with exclusion of tasks that overlap with the HELM core scenarios.
- Base Model: [RedPajama-INCITE-7B-Base](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Base)
- Instruction-tuned Version: [RedPajama-INCITE-7B-Instruct](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Instruct)
- Chat Version: [RedPajama-INCITE-7B-Chat](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Chat)
## Model Details
- **Developed by**: Together Computer.
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 6.9B parameter pretrained language model.
# Quick Start
Please note that the model requires `transformers` version >= 4.25.1.
## GPU Inference
This requires a GPU with 16GB memory.
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Instruct")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Instruct", torch_dtype=torch.float16)
model = model.to('cuda:0')
# infer
prompt = "Q: The capital of France is?\nA:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Paris
"""
```
## GPU Inference in Int8
This requires a GPU with 12GB memory.
To run inference with int8, please ensure you have installed accelerate and bitsandbytes. You can install them with the following command:
```bash
pip install accelerate
pip install bitsandbytes
```
Then you can run inference with int8 as follows:
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Instruct")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Instruct", device_map='auto', torch_dtype=torch.float16, load_in_8bit=True)
# infer
prompt = "Q: The capital of France is?\nA:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Paris
"""
```
## CPU Inference
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Instruct")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Instruct", torch_dtype=torch.bfloat16)
# infer
prompt = "Q: The capital of France is?\nA:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Paris
"""
```
Please note that since `LayerNormKernelImpl` is not implemented in fp16 for CPU, we use `bfloat16` for CPU inference.
# Uses
## Direct Use
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
It is the responsibility of the end user to ensure that the model is used in a responsible and ethical manner.
#### Out-of-Scope Use
RedPajama-INCITE-7B-Instruct is a language model and may not perform well for other use cases outside of its intended scope.
For example, it may not be suitable for use in safety-critical applications or for making decisions that have a significant impact on individuals or society.
It is important to consider the limitations of the model and to only use it for its intended purpose.
#### Misuse and Malicious Use
RedPajama-INCITE-7B-Instruct is designed for language modeling.
Misuse of the model, such as using it to engage in illegal or unethical activities, is strictly prohibited and goes against the principles of the project.
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating fake news, misinformation, or propaganda
- Promoting hate speech, discrimination, or violence against individuals or groups
- Impersonating individuals or organizations without their consent
- Engaging in cyberbullying or harassment
- Defamatory content
- Spamming or scamming
- Sharing confidential or sensitive information without proper authorization
- Violating the terms of use of the model or the data used to train it
- Creating automated bots for malicious purposes such as spreading malware, phishing scams, or spamming
## Limitations
RedPajama-INCITE-7B-Instruct, like other language models, has limitations that should be taken into consideration.
For example, the model may not always provide accurate or relevant answers, particularly for questions that are complex, ambiguous, or outside of its training data.
We therefore welcome contributions from individuals and organizations, and encourage collaboration towards creating a more robust and inclusive chatbot.
## Training
**Training Data**
Please refer to [togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
**Training Procedure**
- **Hardware:** 8 A100
- **Optimizer:** Adam
- **Gradient Accumulations**: 1
- **Num of Tokens:** 1B tokens
- **Learning rate:** 1e-5
## Community
Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4)
|
eluzhnica/mpt-7b-8k-instruct-peft-compatible
|
eluzhnica
| 2023-07-18T19:38:34Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"custom_code",
"arxiv:2205.14135",
"arxiv:2108.12409",
"arxiv:2010.04245",
"license:cc-by-sa-3.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-18T15:14:58Z |
---
license: cc-by-sa-3.0
datasets:
- competition_math
- conceptofmind/cot_submix_original/cot_gsm8k
- knkarthick/dialogsum
- mosaicml/dolly_hhrlhf
- duorc
- tau/scrolls/qasper
- emozilla/quality
- scrolls/summ_screen_fd
- spider
tags:
- Composer
- MosaicML
- llm-foundry
inference: false
---
# MPT-7B-Instruct-8k
MPT-7B-Instruct-8k, but with gradient checkpointing enabled, making it easy to train with LoRA/QLoRA. Not tested yet.
Original card below:
MPT-7B-Instruct-8k is a model for long-form instruction following, especially question-answering on and summarization of longer documents.
It is built by finetuning [MPT-7B-8k](https://huggingface.co/mosaicml/mpt-7b-8k) on [Dolly HHRLHF](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets. It is also trained on [Competition Math](https://huggingface.co/datasets/competition_math), [Duorc](https://huggingface.co/datasets/duorc), [CoT GSM8k](https://huggingface.co/datasets/conceptofmind/cot_submix_original), [Qasper](https://huggingface.co/datasets/allenai/qasper), [Quality](https://huggingface.co/datasets/emozilla/quality), [Summ Screen FD](https://huggingface.co/datasets/tau/scrolls) and [Spider](https://huggingface.co/datasets/spider).
This is the same dataset that [MPT-30B-Instruct](https://huggingface.co/mosaicml/mpt-30b-instruct) was trained on.
* License: _CC-By-SA-3.0_
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
July 18, 2023
## Model License
_CC-By-SA-3.0_
## Documentation
* [Blog post: MPT-7B-8k](https://www.mosaicml.com/blog/long-context-mpt-7b-8k)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
## How to Use
This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-instruct-8k',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-7b-instruct-8k'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton' # change this to use triton-based FlashAttention
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
The model was initially trained with a sequence length of 2048, with an additional pretraining stage for sequence length adaptation up to 8192. However, ALiBi enables users to increase the maximum sequence length even further during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-7b-instruct-8k'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 16384 # (input + output) tokens can now be up to 16384
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the MPT-7B-chat tokenizer which is based on the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer and includes additional ChatML tokens.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-7b-8k')
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
with torch.autocast('cuda', dtype=torch.bfloat16):
inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda')
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# or using the HF pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 6.7B |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
## Data Mix
The model was trained on the following data mix:
| Data Source | Number of Tokens in Source | Proportion |
|-------------|----------------------------|------------|
| competition_math | 1.6 M | 3.66% |
| cot_gsm8k | 3.36 M | 7.67% |
| dialogsum | 0.1 M | 0.23% |
| dolly_hhrlhf | 5.89 M | 13.43% |
| duorc | 7.8 M | 17.80% |
| qasper | 8.72 M | 19.90% |
| quality | 11.29 M | 25.78% |
| scrolls/summ_screen_fd | 4.97 M | 11.33% |
| spider | 0.089 M | 0.20% |
### Training Configuration
This model was trained on 8 80GB A100s for about 6.3 hours using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-Instruct-8k can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-Instruct-8k was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by the MosaicML NLP team.
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://www.mosaicml.com/get-started?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b-8k).
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-30B: Raising the bar
for open-source foundation models},
year = {2023},
url = {www.mosaicml.com/blog/mpt-30b},
note = {Accessed: 2023-06-22},
urldate = {2023-06-22}
}
```
|
Arikkod/PPO-LunarLander-v2
|
Arikkod
| 2023-07-18T19:38:26Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T20:10:51Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 281.51 +/- 18.52
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
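Until the code above is filled in, here is a minimal, untested sketch of how such a checkpoint is typically loaded from the Hub (the checkpoint filename below is a guess and may not match this repository):
```python
# Hypothetical usage sketch; the filename is assumed, not verified against this repo.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="Arikkod/PPO-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
action, _states = model.predict(obs, deterministic=True)
```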
|
jlevin/llama_7b_hhrf_qlora
|
jlevin
| 2023-07-18T19:36:30Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-18T19:31:40Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
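For reference, a sketch of the equivalent `BitsAndBytesConfig` in code (reconstructed from the values listed above; not copied from the original training script):
```python
# Reconstructed from the quantization settings above; illustrative only.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```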
### Framework versions
- PEFT 0.5.0.dev0
|
jamesdborin/ct2-int8-redpajama-7b-base
|
jamesdborin
| 2023-07-18T19:30:07Z | 5 | 0 |
transformers
|
[
"transformers",
"gpt_neox",
"text-generation",
"en",
"dataset:togethercomputer/RedPajama-Data-1T",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-18T19:22:09Z |
---
license: apache-2.0
language:
- en
datasets:
- togethercomputer/RedPajama-Data-1T
---
# RedPajama-INCITE-7B-Base
RedPajama-INCITE-7B-Base was developed by Together and leaders from the open-source AI community including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), Stanford Hazy Research research group and LAION.
The training was done on 3,072 V100 GPUs provided as part of the INCITE 2023 project on Scalable Foundation Models for Transferrable Generalist AI, awarded to MILA, LAION, and EleutherAI in fall 2022, with support from the Oak Ridge Leadership Computing Facility (OLCF) and INCITE program.
- Base Model: [RedPajama-INCITE-7B-Base](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Base)
- Instruction-tuned Version: [RedPajama-INCITE-7B-Instruct](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Instruct)
- Chat Version: [RedPajama-INCITE-7B-Chat](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Chat)
## Model Details
- **Developed by**: Together Computer.
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 6.9B parameter pretrained language model.
# Quick Start
Please note that the model requires `transformers` version >= 4.25.1.
## GPU Inference
This requires a GPU with 16GB memory.
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Base")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Base", torch_dtype=torch.float16)
model = model.to('cuda:0')
# infer
prompt = "Alan Turing is"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
widely considered to be the father of modern computer science and artificial intelligence. He was a brilliant mathematician and cryptographer, who worked for the British government during World War II. He was instrumental in breaking the German Enigma code, and is credited with helping to shorten the war by two years...
"""
```
## GPU Inference in Int8
This requires a GPU with 12GB memory.
To run inference with int8, please ensure you have installed accelerate and bitsandbytes. You can install them with the following command:
```bash
pip install accelerate
pip install bitsandbytes
```
Then you can run inference with int8 as follows:
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Base")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Base", device_map='auto', torch_dtype=torch.float16, load_in_8bit=True)
# infer
prompt = "Alan Turing is"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
a very well-known name in the world of computer science. It is named after the mathematician Alan Turing. He is famous for his work on the Enigma machine, which was used by the Germans during World War II....
"""
```
## CPU Inference
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Base")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Base", torch_dtype=torch.bfloat16)
# infer
prompt = "Alan Turing is"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
one of the most important figures in the history of computing. He is best known for his work on the development of the modern computer and for his code-breaking work during World War II. He was also a brilliant mathematician and philosopher.
"""
```
Please note that since `LayerNormKernelImpl` is not implemented in fp16 for CPU, we use `bfloat16` for CPU inference.
# Uses
## Direct Use
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
It is the responsibility of the end user to ensure that the model is used in a responsible and ethical manner.
#### Out-of-Scope Use
`RedPajama-INCITE-7B-Base` is a language model and may not perform well for other use cases outside of its intended scope.
For example, it may not be suitable for use in safety-critical applications or for making decisions that have a significant impact on individuals or society.
It is important to consider the limitations of the model and to only use it for its intended purpose.
#### Misuse and Malicious Use
`RedPajama-INCITE-7B-Base` is designed for language modeling.
Misuse of the model, such as using it to engage in illegal or unethical activities, is strictly prohibited and goes against the principles of the project.
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating fake news, misinformation, or propaganda
- Promoting hate speech, discrimination, or violence against individuals or groups
- Impersonating individuals or organizations without their consent
- Engaging in cyberbullying or harassment
- Defamatory content
- Spamming or scamming
- Sharing confidential or sensitive information without proper authorization
- Violating the terms of use of the model or the data used to train it
- Creating automated bots for malicious purposes such as spreading malware, phishing scams, or spamming
## Limitations
`RedPajama-INCITE-7B-Base`, like other language models, has limitations that should be taken into consideration.
For example, the model may not always provide accurate or relevant answers, particularly for questions that are complex, ambiguous, or outside of its training data.
We therefore welcome contributions from individuals and organizations, and encourage collaboration towards creating a more robust and inclusive chatbot.
## Training
**Training Data**
Please refer to [togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
**Training Procedure**
- **Hardware:** 512 nodes of 6xV100 (IBM Power9), on the OLCF Summit cluster
- **Optimizer:** Apex FusedAdam
- **Parallelism:** Pipeline parallel 12, tensor parallel 2
- **Gradient Accumulations**: 8 (global batch size 4M tokens)
- **Num of Tokens:** 1.001T Tokens
- **Learning rate:** 0.00012
## Benchmark
Please refer to our [blog post](https://together.xyz) for benchmark results.
## Intermediate Checkpoints
We provide 11 intermediate checkpoints that have been released for study.
The checkpoints are organized based on the number of tokens they contain, ranging from 240 billion tokens to 1 trillion tokens.
- [240b_tokens](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Base/tree/240b_tokens)
- [280b_tokens](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Base/tree/280b_tokens)
- [400b_tokens](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Base/tree/400b_tokens)
- [440b_tokens](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Base/tree/440b_tokens)
- [500b_tokens](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Base/tree/500b_tokens)
- [600b_tokens](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Base/tree/600b_tokens)
- [700b_tokens](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Base/tree/700b_tokens)
- [720b_tokens](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Base/tree/720b_tokens)
- [960b_tokens](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Base/tree/960b_tokens)
- [1t_tokens](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Base/tree/1t_tokens)
- [latest](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Base/tree/main)
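Each checkpoint lives in its own branch, so a specific snapshot can be loaded by passing the branch name as the `revision` argument in `transformers` (a minimal sketch; any of the branch names listed above can be substituted):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the 240B-token intermediate checkpoint from its branch
checkpoint = "togethercomputer/RedPajama-INCITE-7B-Base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint, revision="240b_tokens")
model = AutoModelForCausalLM.from_pretrained(checkpoint, revision="240b_tokens", torch_dtype=torch.float16)
```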
## Community
Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4)
|
4bit/Llama-2-7b-Chat-GPTQ
|
4bit
| 2023-07-18T19:28:17Z | 20 | 10 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"en",
"license:other",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-07-18T19:26:31Z |
---
extra_gated_button_content: Submit
extra_gated_description: This is a form to enable access to Llama 2 on Hugging Face
after you have been granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads)
and accept our license terms and acceptable use policy before submitting this form.
Requests will be processed in 1-2 days.
extra_gated_fields:
? I agree to share my name, email address and username with Meta and confirm that
I have already been granted download access on the Meta website
: checkbox
extra_gated_heading: Access Llama 2 on Hugging Face
inference: false
language:
- en
license: other
model_type: llama
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Meta's Llama 2 7b Chat GPTQ
These files are GPTQ model files for [Meta's Llama 2 7b Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
## Prompt template: Llama-2-Chat
```
System: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
User: {prompt}
Assistant:
```
## Provided files
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
| Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
| main | 4 | 128 | False | 3.90 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
| gptq-4bit-32g-actorder_True | 4 | 32 | True | 4.28 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-64g-actorder_True | 4 | 64 | True | 4.02 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-128g-actorder_True | 4 | 128 | True | 3.90 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Llama-2-7b-Chat-GPTQ:gptq-4bit-32g-actorder_True`
- With Git, you can clone a branch with:
```
git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Llama-2-7b-Chat-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Llama-2-7b-Chat-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Llama-2-7b-Chat-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done"
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Llama-2-7b-Chat-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
## How to use this GPTQ model from Python code
First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
`GITHUB_ACTIONS=true pip install auto-gptq`
Then try the following example code:
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
model_name_or_path = "TheBloke/Llama-2-7b-Chat-GPTQ"
model_basename = "gptq_model-4bit-128g"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
use_safetensors=True,
trust_remote_code=True,
device="cuda:0",
use_triton=use_triton,
quantize_config=None)
"""
To download from a specific branch, use the revision parameter, as in this example:
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
revision="gptq-4bit-32g-actorder_True",
model_basename=model_basename,
use_safetensors=True,
trust_remote_code=True,
device="cuda:0",
quantize_config=None)
"""
prompt = "Tell me about AI"
prompt_template=f'''System: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
User: {prompt}
Assistant:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Compatibility
The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.
ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Meta's Llama 2 7b Chat
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
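As a rough illustration of that formatting, a single-turn chat prompt can be assembled as below (a minimal sketch; the linked `chat_completion` reference code is authoritative, and the tokenizer normally inserts the `BOS`/`EOS` tokens itself):
```python
def build_llama2_chat_prompt(system_prompt: str, user_message: str) -> str:
    # Single turn: the system prompt sits inside <<SYS>> tags and the whole
    # turn is wrapped in [INST] ... [/INST].
    return (
        f"[INST] <<SYS>>\n{system_prompt.strip()}\n<</SYS>>\n\n"
        f"{user_message.strip()} [/INST]"
    )

prompt = build_llama2_chat_prompt(
    "You are a helpful, respectful and honest assistant.",
    "Tell me about AI",
)
```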
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
|
Samveg17/whisper-base-hi
|
Samveg17
| 2023-07-18T19:23:13Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:google/fleurs",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-17T17:43:44Z |
---
language:
- hi
license: apache-2.0
base_model: openai/whisper-base
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- google/fleurs
metrics:
- wer
model-index:
- name: Whisper_Samveg17@
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper_Samveg17@
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Google Fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5117
- Wer: 37.9539
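A minimal transcription sketch with the `transformers` ASR pipeline (the audio path below is a placeholder for a Hindi speech recording):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Samveg17/whisper-base-hi")
print(asr("sample_hi.wav")["text"])  # "sample_hi.wav" is a placeholder path
```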
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1401 | 4.72 | 1000 | 0.3607 | 39.9494 |
| 0.0174 | 9.43 | 2000 | 0.4239 | 38.9954 |
| 0.0022 | 14.15 | 3000 | 0.4867 | 38.4698 |
| 0.001 | 18.87 | 4000 | 0.5117 | 37.9539 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
mulinski/mt5-small-finetuned-amazon-en-es
|
mulinski
| 2023-07-18T19:21:05Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-07-18T17:39:25Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0340
- Rouge1: 17.354
- Rouge2: 8.4787
- Rougel: 17.1305
- Rougelsum: 17.0075
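A minimal usage sketch with the `summarization` pipeline (the review text is a placeholder input):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="mulinski/mt5-small-finetuned-amazon-en-es")
review = "I loved this book. The characters were engaging and the plot kept me up all night."  # placeholder input
print(summarizer(review, max_length=30)[0]["summary_text"])
```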
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 7.0197 | 1.0 | 1209 | 3.3037 | 13.683 | 5.3875 | 13.0828 | 13.1122 |
| 3.9145 | 2.0 | 2418 | 3.1418 | 15.5264 | 7.4742 | 14.8131 | 14.7471 |
| 3.5987 | 3.0 | 3627 | 3.0970 | 17.4004 | 8.5468 | 16.8991 | 16.8763 |
| 3.4274 | 4.0 | 4836 | 3.0672 | 16.7503 | 7.9732 | 16.2399 | 16.1352 |
| 3.3241 | 5.0 | 6045 | 3.0648 | 16.6407 | 8.1366 | 16.4552 | 16.3217 |
| 3.2468 | 6.0 | 7254 | 3.0444 | 17.2806 | 8.6183 | 17.0437 | 16.8567 |
| 3.2116 | 7.0 | 8463 | 3.0370 | 17.6282 | 8.6565 | 17.2977 | 17.2007 |
| 3.1821 | 8.0 | 9672 | 3.0340 | 17.354 | 8.4787 | 17.1305 | 17.0075 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
usakha/Prophetnet_multiNews_model
|
usakha
| 2023-07-18T19:14:41Z | 27 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"prophetnet",
"text2text-generation",
"summarization",
"en",
"dataset:multi_news",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-06-27T10:50:18Z |
---
datasets:
- multi_news
language:
- en
metrics:
- bleu
- rouge
library_name: transformers
pipeline_tag: summarization
---
# Hyperparameters
- learning_rate=2e-5
- per_device_train_batch_size=14
- per_device_eval_batch_size=14
- weight_decay=0.01
- save_total_limit=3
- num_train_epochs=3
- predict_with_generate=True
- fp16=True
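A minimal sketch of how these values map onto `Seq2SeqTrainingArguments` in `transformers` (the `output_dir` is a placeholder, not taken from this card):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="prophetnet-multinews",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=14,
    per_device_eval_batch_size=14,
    weight_decay=0.01,
    save_total_limit=3,
    num_train_epochs=3,
    predict_with_generate=True,
    fp16=True,
)
```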
# Training Output
global_step=7710,
training_loss=2.8554159399445727,
metrics={'train_runtime': 21924.7566,
'train_samples_per_second': 4.923,
'train_steps_per_second': 0.352,
'total_flos': 2.3807388210639667e+17,
'train_loss': 2.8554159399445727,
'epoch': 3.0}
# Training Results
| Epoch | Training Loss | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Gen Len |
|:----- |:------------ |:--------------- |:-------- | :------- |:-------- |:--------- |:-------- |:--------- |
1| 2.981200| 2.831641| 0.414500| 0.147000| 0.230700| 0.230600| 0.512800| 140.734900|
2 |2.800900| 2.789402| 0.417300| 0.148400| 0.231800| 0.231700| 0.516000| 141.158200|
3 |2.680300| 2.780862| 0.418300| 0.148400| 0.232200| 0.232100| 0.516800| 140.872300|
|
usakha/Pegasus_GovReport_model
|
usakha
| 2023-07-18T19:13:12Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"pegasus",
"text2text-generation",
"summarization",
"dataset:ccdv/govreport-summarization",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-06-22T11:16:50Z |
---
datasets:
- ccdv/govreport-summarization
metrics:
- rouge
- bleu
pipeline_tag: summarization
---
# Hyperparameters
- learning_rate=2e-5
- per_device_train_batch_size=14
- per_device_eval_batch_size=14
- weight_decay=0.01
- save_total_limit=3
- num_train_epochs=3
- predict_with_generate=True
- fp16=True
# Training Output
global_step=3003,
training_loss=2.0113779983241042,
metrics={'train_runtime': 12268.4376,
'train_samples_per_second': 3.427,
'train_steps_per_second': 0.245,
'total_flos': 1.2147019450889011e+17,
'train_loss': 2.0113779983241042,
'epoch': 3.0}
# Training Results
| Epoch | Training Loss | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Gen Len |
|:----- |:------------ |:--------------- |:-------- | :------- |:-------- |:--------- |:-------- |:--------- |
1| 2.035800| 1.906599| 0.365400| 0.150500| 0.243200 |0.243500 |0.366300| 227.230300|
2| 1.976100| 1.878923| 0.393700| 0.167800| 0.263500 |0.263800 |0.423600| 193.114200|
3| 1.956800| 1.871454| 0.409300| 0.175100| 0.273400 |0.273600 |0.457000| 172.294500|
|
usakha/Pegasus_MedPaper_model
|
usakha
| 2023-07-18T19:12:02Z | 23 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"pegasus",
"text2text-generation",
"summarization",
"en",
"dataset:pszemraj/scientific_lay_summarisation-plos-norm",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-06-21T23:09:42Z |
---
datasets:
- pszemraj/scientific_lay_summarisation-plos-norm
language:
- en
metrics:
- bleu
- rouge
pipeline_tag: summarization
---
# Hyperparameters
- learning_rate=2e-5
- per_device_train_batch_size=14
- per_device_eval_batch_size=14
- weight_decay=0.01
- save_total_limit=3
- num_train_epochs=3
- predict_with_generate=True
- fp16=True
# Training Output
global_step=4248,
training_loss=2.4160910424988598,
metrics={'train_runtime': 14565.4519,
'train_samples_per_second': 4.082,
'train_steps_per_second': 0.292,
'total_flos': 1.7179021728232243e+17,
'train_loss': 2.4160910424988598,
'epoch': 3.0}
# Training Results
| Epoch | Training Loss | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Gen Len |
|:----- |:------------ |:--------------- |:-------- | :------- |:-------- |:--------- |:-------- |:--------- |
|1| 2.467100| 2.303269| 0.410900| 0.136200| 0.235900| 0.235900| 0.465700| 182.332800
|2| 2.386700| 2.281062| 0.426300| 0.142300| 0.246800| 0.246700| 0.525200| 143.990900
|3| 2.362000| 2.274931| 0.428400| 0.143800| 0.248300| 0.248200| 0.532000| 139.585900
|
usakha/Pegasus_multiNews_model
|
usakha
| 2023-07-18T19:10:55Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"pegasus",
"text2text-generation",
"summarization",
"dataset:multi_news",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-06-20T20:06:32Z |
---
datasets:
- multi_news
metrics:
- bleu
- rouge
pipeline_tag: summarization
---
# Hyperparameters
- learning_rate=2e-5
- per_device_train_batch_size=14
- per_device_eval_batch_size=14
- weight_decay=0.01
- save_total_limit=3
- num_train_epochs=3
- predict_with_generate=True
- fp16=True
# Training Output
global_step=7710,
training_loss=2.436398018566087,
metrics={'train_runtime': 30287.1254,
'train_samples_per_second': 3.564,
'train_steps_per_second': 0.255,
'total_flos': 3.1186278368988365e+17,
'train_loss': 2.436398018566087,
'epoch': 3.0}
# Training Results
| Epoch | Training Loss | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Gen Len |
|:----- |:------------ |:--------------- |:-------- | :------- |:-------- |:--------- |:-------- |:--------- |
1| 2.451200| 2.291708| 0.322800| 0.110100| 0.194600| 0.194700| 0.368400| 150.224300
2| 2.527300| nan| 0.296400| 0.100100| 0.181800| 0.181900 |0.317300| 137.569200
3| 2.523800| nan |0.296600| 0.100000| 0.181800 |0.181900 |0.317200| 137.254000
|
xacl/wav2vec2-base-timit-demo-colab
|
xacl
| 2023-07-18T19:09:44Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-17T23:34:57Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5532
- Wer: 0.3373
- Cer: 0.1112
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 0.1293 | 1.0 | 500 | 0.3918 | 0.3677 | 0.1170 |
| 0.133 | 2.01 | 1000 | 0.4392 | 0.3797 | 0.1234 |
| 0.1473 | 3.01 | 1500 | 0.4959 | 0.3914 | 0.1267 |
| 0.1373 | 4.02 | 2000 | 0.4781 | 0.3851 | 0.1260 |
| 0.1259 | 5.02 | 2500 | 0.4473 | 0.3810 | 0.1237 |
| 0.1123 | 6.02 | 3000 | 0.5314 | 0.3774 | 0.1243 |
| 0.1086 | 7.03 | 3500 | 0.4231 | 0.3801 | 0.1228 |
| 0.0956 | 8.03 | 4000 | 0.5203 | 0.3734 | 0.1236 |
| 0.0839 | 9.04 | 4500 | 0.5310 | 0.3750 | 0.1227 |
| 0.0778 | 10.04 | 5000 | 0.5279 | 0.3793 | 0.1257 |
| 0.0772 | 11.04 | 5500 | 0.4969 | 0.3792 | 0.1265 |
| 0.072 | 12.05 | 6000 | 0.5489 | 0.3701 | 0.1239 |
| 0.0678 | 13.05 | 6500 | 0.5123 | 0.3669 | 0.1207 |
| 0.067 | 14.06 | 7000 | 0.4969 | 0.3663 | 0.1192 |
| 0.061 | 15.06 | 7500 | 0.4742 | 0.3664 | 0.1212 |
| 0.0575 | 16.06 | 8000 | 0.5304 | 0.3643 | 0.1194 |
| 0.0574 | 17.07 | 8500 | 0.4936 | 0.3729 | 0.1218 |
| 0.0474 | 18.07 | 9000 | 0.5363 | 0.3601 | 0.1185 |
| 0.0447 | 19.08 | 9500 | 0.5347 | 0.3552 | 0.1177 |
| 0.0372 | 20.08 | 10000 | 0.5372 | 0.3519 | 0.1157 |
| 0.0325 | 21.08 | 10500 | 0.5455 | 0.3525 | 0.1159 |
| 0.0309 | 22.09 | 11000 | 0.5193 | 0.3514 | 0.1146 |
| 0.0314 | 23.09 | 11500 | 0.5402 | 0.3494 | 0.1160 |
| 0.0272 | 24.1 | 12000 | 0.5309 | 0.3457 | 0.1129 |
| 0.0238 | 25.1 | 12500 | 0.5490 | 0.3447 | 0.1132 |
| 0.0217 | 26.1 | 13000 | 0.5702 | 0.3406 | 0.1117 |
| 0.0225 | 27.11 | 13500 | 0.5575 | 0.3414 | 0.1116 |
| 0.0189 | 28.11 | 14000 | 0.5572 | 0.3391 | 0.1115 |
| 0.0179 | 29.12 | 14500 | 0.5532 | 0.3373 | 0.1112 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 1.18.3
- Tokenizers 0.13.3
|
jordyvl/39-tiny_tobacco3482_kd_NKD_t1.0_g1.5
|
jordyvl
| 2023-07-18T19:08:19Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-18T18:33:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 39-tiny_tobacco3482_kd_NKD_t1.0_g1.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 39-tiny_tobacco3482_kd_NKD_t1.0_g1.5
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0812
- Accuracy: 0.835
- Brier Loss: 0.2748
- Nll: 1.2215
- F1 Micro: 0.835
- F1 Macro: 0.8213
- Ece: 0.1443
- Aurc: 0.0548
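A minimal usage sketch with the `image-classification` pipeline (the image path is a placeholder for a Tobacco3482-style document image):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jordyvl/39-tiny_tobacco3482_kd_NKD_t1.0_g1.5")
print(classifier("document_scan.png"))  # placeholder path
```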
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 5.1938 | 0.095 | 1.0201 | 8.6917 | 0.095 | 0.0778 | 0.3242 | 0.9009 |
| No log | 2.0 | 14 | 4.3129 | 0.13 | 0.9109 | 8.2379 | 0.13 | 0.0910 | 0.2544 | 0.8466 |
| No log | 3.0 | 21 | 3.9690 | 0.225 | 0.8600 | 6.8547 | 0.225 | 0.1401 | 0.2626 | 0.6398 |
| No log | 4.0 | 28 | 3.8651 | 0.375 | 0.7978 | 5.6610 | 0.375 | 0.2964 | 0.3198 | 0.4692 |
| No log | 5.0 | 35 | 3.8115 | 0.465 | 0.7222 | 3.4731 | 0.465 | 0.3435 | 0.3007 | 0.3464 |
| No log | 6.0 | 42 | 3.7351 | 0.575 | 0.6691 | 2.6672 | 0.575 | 0.4736 | 0.3509 | 0.2284 |
| No log | 7.0 | 49 | 3.6913 | 0.62 | 0.6152 | 2.6026 | 0.62 | 0.4700 | 0.3255 | 0.1827 |
| No log | 8.0 | 56 | 3.6687 | 0.68 | 0.5820 | 1.9726 | 0.68 | 0.5400 | 0.3735 | 0.1472 |
| No log | 9.0 | 63 | 3.6771 | 0.645 | 0.5464 | 1.9938 | 0.645 | 0.5211 | 0.3013 | 0.1595 |
| No log | 10.0 | 70 | 3.6759 | 0.685 | 0.4884 | 1.9735 | 0.685 | 0.5678 | 0.2672 | 0.1278 |
| No log | 11.0 | 77 | 3.6587 | 0.71 | 0.4696 | 2.0625 | 0.7100 | 0.6080 | 0.2956 | 0.1115 |
| No log | 12.0 | 84 | 3.6317 | 0.72 | 0.4121 | 2.2088 | 0.72 | 0.6137 | 0.2372 | 0.0925 |
| No log | 13.0 | 91 | 3.6799 | 0.745 | 0.4167 | 2.0639 | 0.745 | 0.6372 | 0.2480 | 0.0978 |
| No log | 14.0 | 98 | 3.6191 | 0.745 | 0.3850 | 1.9955 | 0.745 | 0.6384 | 0.2363 | 0.0728 |
| No log | 15.0 | 105 | 3.6813 | 0.715 | 0.3814 | 2.0731 | 0.715 | 0.6026 | 0.1995 | 0.0918 |
| No log | 16.0 | 112 | 3.6394 | 0.75 | 0.3644 | 1.9093 | 0.75 | 0.6492 | 0.1904 | 0.0777 |
| No log | 17.0 | 119 | 3.7661 | 0.735 | 0.3786 | 1.5402 | 0.735 | 0.6352 | 0.2032 | 0.0982 |
| No log | 18.0 | 126 | 3.6849 | 0.79 | 0.3369 | 1.8761 | 0.79 | 0.6965 | 0.1954 | 0.0708 |
| No log | 19.0 | 133 | 3.6776 | 0.775 | 0.3358 | 1.4981 | 0.775 | 0.7021 | 0.1919 | 0.0744 |
| No log | 20.0 | 140 | 3.6814 | 0.755 | 0.3546 | 1.5225 | 0.755 | 0.6873 | 0.1840 | 0.0794 |
| No log | 21.0 | 147 | 3.6948 | 0.775 | 0.3267 | 1.4776 | 0.775 | 0.7052 | 0.1630 | 0.0710 |
| No log | 22.0 | 154 | 3.7210 | 0.795 | 0.3191 | 1.3634 | 0.795 | 0.7383 | 0.1737 | 0.0705 |
| No log | 23.0 | 161 | 3.7231 | 0.805 | 0.3062 | 1.3141 | 0.805 | 0.7679 | 0.1629 | 0.0665 |
| No log | 24.0 | 168 | 3.7322 | 0.815 | 0.2903 | 1.2030 | 0.815 | 0.7771 | 0.1789 | 0.0609 |
| No log | 25.0 | 175 | 3.7237 | 0.815 | 0.3020 | 1.1721 | 0.815 | 0.7947 | 0.1759 | 0.0603 |
| No log | 26.0 | 182 | 3.8243 | 0.8 | 0.3138 | 1.3356 | 0.8000 | 0.7699 | 0.1735 | 0.0720 |
| No log | 27.0 | 189 | 3.7675 | 0.81 | 0.3038 | 1.2662 | 0.81 | 0.7853 | 0.1891 | 0.0699 |
| No log | 28.0 | 196 | 3.8006 | 0.81 | 0.2992 | 1.3422 | 0.81 | 0.7805 | 0.1709 | 0.0698 |
| No log | 29.0 | 203 | 3.7783 | 0.815 | 0.3009 | 1.3322 | 0.815 | 0.7959 | 0.1729 | 0.0669 |
| No log | 30.0 | 210 | 3.7547 | 0.835 | 0.2775 | 0.9761 | 0.835 | 0.8228 | 0.1751 | 0.0566 |
| No log | 31.0 | 217 | 3.7810 | 0.82 | 0.2905 | 1.1472 | 0.82 | 0.7953 | 0.1670 | 0.0631 |
| No log | 32.0 | 224 | 3.7935 | 0.82 | 0.2732 | 1.2016 | 0.82 | 0.7967 | 0.1429 | 0.0590 |
| No log | 33.0 | 231 | 3.7871 | 0.83 | 0.2774 | 1.2459 | 0.83 | 0.8134 | 0.1495 | 0.0562 |
| No log | 34.0 | 238 | 3.7689 | 0.815 | 0.2756 | 1.1135 | 0.815 | 0.7825 | 0.1609 | 0.0596 |
| No log | 35.0 | 245 | 3.8169 | 0.81 | 0.2801 | 1.2621 | 0.81 | 0.7880 | 0.1570 | 0.0624 |
| No log | 36.0 | 252 | 3.7973 | 0.82 | 0.2729 | 1.1310 | 0.82 | 0.7894 | 0.1466 | 0.0585 |
| No log | 37.0 | 259 | 3.8560 | 0.835 | 0.2825 | 1.3222 | 0.835 | 0.8114 | 0.1466 | 0.0606 |
| No log | 38.0 | 266 | 3.8351 | 0.83 | 0.2892 | 1.2548 | 0.83 | 0.8178 | 0.1489 | 0.0593 |
| No log | 39.0 | 273 | 3.8258 | 0.82 | 0.2711 | 1.1900 | 0.82 | 0.8037 | 0.1455 | 0.0589 |
| No log | 40.0 | 280 | 3.8288 | 0.815 | 0.2840 | 1.2167 | 0.815 | 0.7913 | 0.1574 | 0.0619 |
| No log | 41.0 | 287 | 3.8264 | 0.82 | 0.2790 | 1.1737 | 0.82 | 0.8020 | 0.1394 | 0.0609 |
| No log | 42.0 | 294 | 3.8276 | 0.81 | 0.2797 | 1.1603 | 0.81 | 0.7888 | 0.1585 | 0.0580 |
| No log | 43.0 | 301 | 3.8554 | 0.815 | 0.2771 | 1.1695 | 0.815 | 0.7943 | 0.1310 | 0.0594 |
| No log | 44.0 | 308 | 3.8405 | 0.825 | 0.2768 | 1.1593 | 0.825 | 0.8149 | 0.1413 | 0.0569 |
| No log | 45.0 | 315 | 3.8640 | 0.815 | 0.2891 | 1.1752 | 0.815 | 0.7980 | 0.1516 | 0.0590 |
| No log | 46.0 | 322 | 3.8624 | 0.825 | 0.2653 | 1.1548 | 0.825 | 0.8024 | 0.1384 | 0.0581 |
| No log | 47.0 | 329 | 3.8546 | 0.83 | 0.2766 | 1.1634 | 0.83 | 0.8106 | 0.1411 | 0.0594 |
| No log | 48.0 | 336 | 3.8652 | 0.82 | 0.2805 | 1.1651 | 0.82 | 0.8069 | 0.1278 | 0.0581 |
| No log | 49.0 | 343 | 3.8716 | 0.83 | 0.2758 | 1.1895 | 0.83 | 0.8065 | 0.1486 | 0.0590 |
| No log | 50.0 | 350 | 3.8720 | 0.815 | 0.2737 | 1.1709 | 0.815 | 0.7937 | 0.1375 | 0.0578 |
| No log | 51.0 | 357 | 3.8812 | 0.82 | 0.2762 | 1.2348 | 0.82 | 0.7993 | 0.1292 | 0.0600 |
| No log | 52.0 | 364 | 3.8844 | 0.805 | 0.2815 | 1.0870 | 0.805 | 0.7843 | 0.1525 | 0.0581 |
| No log | 53.0 | 371 | 3.8968 | 0.825 | 0.2704 | 1.2235 | 0.825 | 0.8011 | 0.1452 | 0.0582 |
| No log | 54.0 | 378 | 3.8996 | 0.81 | 0.2788 | 1.3264 | 0.81 | 0.7909 | 0.1453 | 0.0573 |
| No log | 55.0 | 385 | 3.9037 | 0.81 | 0.2757 | 1.2231 | 0.81 | 0.7928 | 0.1307 | 0.0574 |
| No log | 56.0 | 392 | 3.9024 | 0.81 | 0.2775 | 1.2369 | 0.81 | 0.7869 | 0.1493 | 0.0581 |
| No log | 57.0 | 399 | 3.8951 | 0.83 | 0.2722 | 1.2151 | 0.83 | 0.8171 | 0.1491 | 0.0556 |
| No log | 58.0 | 406 | 3.9224 | 0.82 | 0.2741 | 1.2957 | 0.82 | 0.8001 | 0.1351 | 0.0575 |
| No log | 59.0 | 413 | 3.9397 | 0.805 | 0.2782 | 1.3017 | 0.805 | 0.7870 | 0.1342 | 0.0584 |
| No log | 60.0 | 420 | 3.9250 | 0.835 | 0.2721 | 1.2251 | 0.835 | 0.8151 | 0.1466 | 0.0570 |
| No log | 61.0 | 427 | 3.9381 | 0.825 | 0.2753 | 1.2330 | 0.825 | 0.8044 | 0.1384 | 0.0577 |
| No log | 62.0 | 434 | 3.9475 | 0.82 | 0.2759 | 1.2171 | 0.82 | 0.8054 | 0.1485 | 0.0576 |
| No log | 63.0 | 441 | 3.9591 | 0.83 | 0.2761 | 1.2299 | 0.83 | 0.8122 | 0.1551 | 0.0568 |
| No log | 64.0 | 448 | 3.9496 | 0.835 | 0.2709 | 1.2282 | 0.835 | 0.8223 | 0.1397 | 0.0559 |
| No log | 65.0 | 455 | 3.9360 | 0.83 | 0.2688 | 1.2238 | 0.83 | 0.8171 | 0.1384 | 0.0535 |
| No log | 66.0 | 462 | 3.9594 | 0.835 | 0.2733 | 1.2395 | 0.835 | 0.8094 | 0.1540 | 0.0563 |
| No log | 67.0 | 469 | 3.9648 | 0.84 | 0.2700 | 1.2154 | 0.8400 | 0.8252 | 0.1673 | 0.0557 |
| No log | 68.0 | 476 | 3.9725 | 0.83 | 0.2712 | 1.2297 | 0.83 | 0.8171 | 0.1248 | 0.0552 |
| No log | 69.0 | 483 | 3.9844 | 0.835 | 0.2719 | 1.2243 | 0.835 | 0.8151 | 0.1605 | 0.0557 |
| No log | 70.0 | 490 | 3.9845 | 0.83 | 0.2699 | 1.2288 | 0.83 | 0.8100 | 0.1223 | 0.0553 |
| No log | 71.0 | 497 | 3.9986 | 0.835 | 0.2729 | 1.2206 | 0.835 | 0.8223 | 0.1381 | 0.0556 |
| 3.4116 | 72.0 | 504 | 3.9973 | 0.835 | 0.2727 | 1.2242 | 0.835 | 0.8223 | 0.1446 | 0.0553 |
| 3.4116 | 73.0 | 511 | 4.0092 | 0.835 | 0.2733 | 1.2226 | 0.835 | 0.8223 | 0.1482 | 0.0554 |
| 3.4116 | 74.0 | 518 | 4.0072 | 0.83 | 0.2714 | 1.2248 | 0.83 | 0.8152 | 0.1219 | 0.0549 |
| 3.4116 | 75.0 | 525 | 4.0168 | 0.835 | 0.2742 | 1.2200 | 0.835 | 0.8223 | 0.1329 | 0.0551 |
| 3.4116 | 76.0 | 532 | 4.0223 | 0.835 | 0.2737 | 1.2248 | 0.835 | 0.8213 | 0.1380 | 0.0552 |
| 3.4116 | 77.0 | 539 | 4.0250 | 0.84 | 0.2719 | 1.2208 | 0.8400 | 0.8252 | 0.1405 | 0.0551 |
| 3.4116 | 78.0 | 546 | 4.0338 | 0.835 | 0.2745 | 1.2242 | 0.835 | 0.8213 | 0.1536 | 0.0551 |
| 3.4116 | 79.0 | 553 | 4.0380 | 0.835 | 0.2740 | 1.2234 | 0.835 | 0.8213 | 0.1494 | 0.0552 |
| 3.4116 | 80.0 | 560 | 4.0445 | 0.835 | 0.2744 | 1.2223 | 0.835 | 0.8213 | 0.1500 | 0.0555 |
| 3.4116 | 81.0 | 567 | 4.0449 | 0.835 | 0.2735 | 1.2209 | 0.835 | 0.8213 | 0.1504 | 0.0552 |
| 3.4116 | 82.0 | 574 | 4.0515 | 0.835 | 0.2747 | 1.2228 | 0.835 | 0.8213 | 0.1526 | 0.0549 |
| 3.4116 | 83.0 | 581 | 4.0534 | 0.835 | 0.2743 | 1.2226 | 0.835 | 0.8213 | 0.1501 | 0.0548 |
| 3.4116 | 84.0 | 588 | 4.0572 | 0.835 | 0.2740 | 1.2225 | 0.835 | 0.8213 | 0.1447 | 0.0550 |
| 3.4116 | 85.0 | 595 | 4.0605 | 0.835 | 0.2743 | 1.2222 | 0.835 | 0.8213 | 0.1466 | 0.0548 |
| 3.4116 | 86.0 | 602 | 4.0621 | 0.835 | 0.2744 | 1.2215 | 0.835 | 0.8213 | 0.1427 | 0.0548 |
| 3.4116 | 87.0 | 609 | 4.0653 | 0.835 | 0.2745 | 1.2214 | 0.835 | 0.8213 | 0.1439 | 0.0549 |
| 3.4116 | 88.0 | 616 | 4.0673 | 0.835 | 0.2746 | 1.2217 | 0.835 | 0.8213 | 0.1410 | 0.0548 |
| 3.4116 | 89.0 | 623 | 4.0705 | 0.835 | 0.2748 | 1.2214 | 0.835 | 0.8213 | 0.1440 | 0.0549 |
| 3.4116 | 90.0 | 630 | 4.0717 | 0.835 | 0.2744 | 1.2217 | 0.835 | 0.8213 | 0.1426 | 0.0547 |
| 3.4116 | 91.0 | 637 | 4.0740 | 0.835 | 0.2747 | 1.2217 | 0.835 | 0.8213 | 0.1432 | 0.0548 |
| 3.4116 | 92.0 | 644 | 4.0753 | 0.835 | 0.2748 | 1.2217 | 0.835 | 0.8213 | 0.1442 | 0.0547 |
| 3.4116 | 93.0 | 651 | 4.0763 | 0.835 | 0.2746 | 1.2214 | 0.835 | 0.8213 | 0.1434 | 0.0546 |
| 3.4116 | 94.0 | 658 | 4.0777 | 0.835 | 0.2746 | 1.2213 | 0.835 | 0.8213 | 0.1433 | 0.0547 |
| 3.4116 | 95.0 | 665 | 4.0788 | 0.835 | 0.2747 | 1.2217 | 0.835 | 0.8213 | 0.1442 | 0.0547 |
| 3.4116 | 96.0 | 672 | 4.0800 | 0.835 | 0.2748 | 1.2217 | 0.835 | 0.8213 | 0.1466 | 0.0547 |
| 3.4116 | 97.0 | 679 | 4.0802 | 0.835 | 0.2747 | 1.2215 | 0.835 | 0.8213 | 0.1435 | 0.0547 |
| 3.4116 | 98.0 | 686 | 4.0808 | 0.835 | 0.2747 | 1.2214 | 0.835 | 0.8213 | 0.1435 | 0.0547 |
| 3.4116 | 99.0 | 693 | 4.0811 | 0.835 | 0.2748 | 1.2214 | 0.835 | 0.8213 | 0.1443 | 0.0547 |
| 3.4116 | 100.0 | 700 | 4.0812 | 0.835 | 0.2748 | 1.2215 | 0.835 | 0.8213 | 0.1443 | 0.0548 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
usakha/Bart_multiNews_model
|
usakha
| 2023-07-18T19:06:08Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"summarization",
"dataset:multi_news",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-06-26T10:51:55Z |
---
datasets:
- multi_news
metrics:
- bleu
- rouge
pipeline_tag: summarization
---
# Hyperparameters
- learning_rate=2e-5
- per_device_train_batch_size=14
- per_device_eval_batch_size=14
- weight_decay=0.01
- save_total_limit=3
- num_train_epochs=3
- predict_with_generate=True
- fp16=True
# Training Output
global_step=7710
training_loss=2.1297076629757417
metrics={'train_runtime': 6059.0418,
'train_samples_per_second': 17.813,
'train_steps_per_second': 1.272,
'total_flos': 2.3389776681055027e+17,
'train_loss': 2.1297076629757417,
'epoch': 3.0}
# Training Results
| Epoch | Training Loss | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Gen Len |
|:----- |:------------ |:--------------- |:-------- | :------- |:-------- |:--------- |:-------- |:--------- |
| 1 | 2.223100 | 2.038599 | 0.147400 | 0.054800 | 0.113500 | 0.113500 | 0.001400 | 20.000000 |
| 2 | 2.078100 | 2.009619 | 0.152900 | 0.057800 | 0.117000 | 0.117000 | 0.001600 | 20.000000 |
| 3 | 1.989000 | 2.006006 | 0.152900 | 0.057300 | 0.116700 | 0.116700 | 0.001700 | 20.000000 |
|
shreeramchandra/ser_wav2vec_indianEnglish_greek_pretrained
|
shreeramchandra
| 2023-07-18T19:05:56Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"en",
"license:afl-3.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-13T03:19:34Z |
---
license: afl-3.0
language:
- en
pipeline_tag: audio-classification
---
|
guinmoon/open_llama_3b_v2_ggml
|
guinmoon
| 2023-07-18T19:05:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-18T13:43:57Z |
Original model [here](https://huggingface.co/openlm-research/open_llama_3b_v2).
|
uraskargi/dqn-SpaceInvadersNoFrameskip-v4
|
uraskargi
| 2023-07-18T18:54:50Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-18T18:54:08Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 733.50 +/- 298.87
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga uraskargi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga uraskargi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga uraskargi
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
jamesdborin/ct2-int8-mtb-7b-chat
|
jamesdborin
| 2023-07-18T18:51:00Z | 6 | 0 |
transformers
|
[
"transformers",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"custom_code",
"dataset:jeffwan/sharegpt_vicuna",
"dataset:Hello-SimpleAI/HC3",
"dataset:tatsu-lab/alpaca",
"dataset:Anthropic/hh-rlhf",
"dataset:victor123/evol_instruct_70k",
"arxiv:2205.14135",
"arxiv:2108.12409",
"arxiv:2010.04245",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-07-18T18:42:56Z |
---
license: cc-by-nc-sa-4.0
datasets:
- jeffwan/sharegpt_vicuna
- Hello-SimpleAI/HC3
- tatsu-lab/alpaca
- Anthropic/hh-rlhf
- victor123/evol_instruct_70k
tags:
- Composer
- MosaicML
- llm-foundry
inference: false
---
# MPT-7B-Chat
MPT-7B-Chat is a chatbot-like model for dialogue generation.
It was built by finetuning [MPT-7B](https://huggingface.co/mosaicml/mpt-7b) on the [ShareGPT-Vicuna](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3),
[Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), and [Evol-Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) datasets.
* License: _CC-By-NC-SA-4.0_ (non-commercial use only)
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-chat)
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
May 5, 2023
## Model License
_CC-By-NC-SA-4.0_ (non-commercial use only)
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
### Example Dialogue
**SamIAm85**:
>I want you to come up with a tweet based on this summary of the article:
>"Introducing MPT-7B, the latest entry in our MosaicML Foundation Series.
>MPT-7B is a transformer trained from scratch on 1T tokens of text and code.
>It is open source and it matches the quality of LLaMA-7B.
>MPT-7B was trained on the MosaicML platform in 9.5 days with zero human intervention at a cost of ~$200k.
>Starting today, you can train, finetune, and deploy your own private MPT models, either starting from one of our checkpoints or training from scratch.
>For inspiration, we are also releasing three finetuned models in addition to the base MPT-7B: MPT-7B-Instruct, MPT-7B-Chat, and MPT-7B-Story Writer-65k+, the last of which uses a context length of 65k tokens!
**MPT-7B-Instruct**:
>Here's my suggestion for a tweet:
>"Exciting news! @MosaicML has launched #MPT_7B - a new standard for open-source LLMs. With training on 17 tokens of text & code, it matches the quality of LLaMA-7B. Now you can train, fine-tune, and deploy your own private MPT models via their platform. Check out the full blog post here: https://bit.ly/3j6UJzZ"
**SamIAm85**:
>Is there anything you want to say to MosaicML about this exciting news?
**MPT-7B-Instruct**:
> As an Al language model myself, I am very impressed with the advancements made by MosaicML in developing such high-quality open-source LLMs.
> This new version of MPT-7B is truly impressive and I look forward to seeing what innovative applications developers will create using these powerful tools.
> Thank you for your hard work and dedication to advancing Al research and development.
## How to Use
This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-chat',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-7b-chat'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-7b-chat'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 4096 # (input + output) tokens can now be up to 4096
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
import torch
from transformers import pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|-----------------|-------|
| n_parameters | 6.7B |
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
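For intuition, ALiBi replaces positional embeddings by adding a distance-dependent bias to the attention logits. The following is a minimal, illustrative sketch (not MosaicML's implementation); the function name is hypothetical and the slope formula follows the ALiBi paper for head counts that are powers of two:
```python
import torch

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    # Head-specific slopes: a geometric sequence, as described in the ALiBi paper.
    slopes = torch.tensor([2 ** (-8 * (i + 1) / n_heads) for i in range(n_heads)])
    # Relative distances between query position i and key position j.
    positions = torch.arange(seq_len)
    distances = (positions[None, :] - positions[:, None]).abs()  # (seq_len, seq_len)
    # Bias added to the attention logits; the causal mask hides the upper triangle.
    return slopes[:, None, None] * -distances                    # (n_heads, seq_len, seq_len)

# Because the bias depends only on relative distance, it extrapolates to
# sequences longer than the 2048-token training length (see max_seq_len above).
print(alibi_bias(n_heads=4, seq_len=8).shape)  # torch.Size([4, 8, 8])
```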
### Training Configuration
This model was trained on 8 A100-80GBs for about 8.2 hours, followed by training for 6.7 hours on 32 A100-40GBs using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.
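As a rough illustration of that setup, the sketch below wires FSDP and AdamW together. It is not MosaicML's actual training code; `build_model()` and `dataloader` are hypothetical placeholders.
```python
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

dist.init_process_group(backend="nccl")    # assumes a torchrun-style multi-GPU launch
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = FSDP(build_model().cuda())         # hypothetical builder; FSDP shards params and grads
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

for batch in dataloader:                   # hypothetical dataloader yielding token batches
    loss = model(**batch).loss             # forward pass computes the LM loss
    loss.backward()                        # gradients are reduced and re-sharded by FSDP
    optimizer.step()
    optimizer.zero_grad()
```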
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-Chat can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-Chat was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Sam Havens and the MosaicML NLP team.
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
    title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
|
kuelumbus/polyBERT
|
kuelumbus
| 2023-07-18T18:47:54Z | 13,042 | 4 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"deberta-v2",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-09-15T13:54:32Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
widget:
- source_sentence: "[*]CC[*]"
sentences:
- "[*]COC[*]"
- "[*]CC(C)C[*]"
---
# kuelumbus/polyBERT
This is polyBERT: a chemical language model to enable fully machine-driven ultrafast polymer informatics. polyBERT maps PSMILES strings to 600-dimensional dense fingerprints. The fingerprints numerically represent polymer chemical structures. Please see the license agreement in the LICENSE file.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
psmiles_strings = ["[*]CC[*]", "[*]COC[*]"]
polyBERT = SentenceTransformer('kuelumbus/polyBERT')
embeddings = polyBERT.encode(psmiles_strings)
print(embeddings)
```
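The resulting fingerprints can then be compared directly, for example with cosine similarity. This snippet mirrors the widget above and is only an illustration, not part of the original usage instructions:
```python
from sentence_transformers import SentenceTransformer, util

polyBERT = SentenceTransformer('kuelumbus/polyBERT')
source = polyBERT.encode("[*]CC[*]", convert_to_tensor=True)
candidates = polyBERT.encode(["[*]COC[*]", "[*]CC(C)C[*]"], convert_to_tensor=True)

# Cosine similarity between the source fingerprint and each candidate fingerprint
print(util.cos_sim(source, candidates))
```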
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# PSMILES strings we want fingerprints for
psmiles_strings = ["[*]CC[*]", "[*]COC[*]"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('kuelumbus/polyBERT')
polyBERT = AutoModel.from_pretrained('kuelumbus/polyBERT')
# Tokenize sentences
encoded_input = tokenizer(psmiles_strings, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = polyBERT(**encoded_input)
# Perform pooling. In this case, mean pooling.
fingerprints = mean_pooling(model_output, encoded_input['attention_mask'])
print("Fingerprints:")
print(fingerprints)
```
## Evaluation Results
See https://github.com/Ramprasad-Group/polyBERT and paper on arXiv.
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DebertaV2Model
(1): Pooling({'word_embedding_dimension': 600, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
Kuenneth, C., Ramprasad, R. polyBERT: a chemical language model to enable fully machine-driven ultrafast polymer informatics. Nat Commun 14, 4099 (2023). https://doi.org/10.1038/s41467-023-39868-6
|