Dataset columns: modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-01 00:47:04) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 530 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-01 00:46:57) | card (string, 11 chars to 1.01M chars)
marksverdhei/t5-deshuffle
|
marksverdhei
| 2023-04-30T09:54:54Z | 119 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"en",
"dataset:stas/c4-en-10k",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-23T20:57:17Z |
---
language: en
widget:
- text: ' brown dog fox jumped lazy over quick the the '
datasets:
- 'stas/c4-en-10k'
---
# T5-deshuffle
Bag of Words (BOW) is a simple and common encoding for letting statistical models discover patterns in language.
However, BOW is a lossy compression that eliminates a very important feature of text: order.
This model is trained to recover the most probable order of an unordered token sequence,
using a subset of the c4 dataset, and can thus be seen as a "bag-of-words decoder".
Currently, it does not perform well. I'm planning to re-train on a larger subset of c4 later (after May).
How to run:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("marksverdhei/t5-deshuffle")
model = T5ForConditionalGeneration.from_pretrained("marksverdhei/t5-deshuffle")

# A shuffled bag of words; the model predicts the most probable ordering
prompt = ' brown dog fox jumped lazy over quick the the '
ids = tokenizer(prompt, return_tensors="pt").input_ids
generated_tokens, = model.generate(ids)
print(tokenizer.decode(generated_tokens, skip_special_tokens=True))
```
|
cruiser/distilbert_model_kaggle
|
cruiser
| 2023-04-30T09:54:34Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-30T09:04:41Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: cruiser/distilbert_model_kaggle
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# cruiser/distilbert_model_kaggle
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0986
- Train Accuracy: 0.4049
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Epoch |
|:----------:|:--------------:|:-----:|
| 1.1284 | 0.4020 | 0 |
| 1.0986 | 0.4049 | 1 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.11.0
- Datasets 2.1.0
- Tokenizers 0.13.2
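Since the card leaves usage blank, here is a minimal inference sketch (not part of the generated card), assuming the TensorFlow weights load via `TFAutoModelForSequenceClassification`; the label set is not documented, so names come straight from `model.config.id2label`:
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "cruiser/distilbert_model_kaggle"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("An example comment to classify", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1).numpy()[0]
# The card does not document the labels; they may be generic LABEL_n names
print({model.config.id2label[i]: float(p) for i, p in enumerate(probs)})
```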
|
bessho/xlm-roberta-base-finetuned-panx-fr
|
bessho
| 2023-04-30T09:52:22Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-30T09:47:39Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.fr
split: validation
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8404237430637297
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2716
- F1: 0.8404
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.571 | 1.0 | 191 | 0.3288 | 0.7826 |
| 0.2554 | 2.0 | 382 | 0.2857 | 0.8261 |
| 0.1688 | 3.0 | 573 | 0.2716 | 0.8404 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
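A minimal usage sketch (not part of the generated card), assuming the standard token-classification pipeline; PAN-X tags persons, organizations, and locations:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="bessho/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Emmanuel Macron a visité Marseille avec une délégation de l'ONU."))
```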
|
yotoshihiro/q-FrozenLake-v1-4x4-noSlippery
|
yotoshihiro
| 2023-04-30T09:49:26Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-30T09:49:19Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="brinkman/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Benjo27/sd-class-butterflies-32
|
Benjo27
| 2023-04-30T09:39:21Z | 30 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-04-30T09:38:02Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Benjo27/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
research-backup/mbart-large-cc25-frquad-qa
|
research-backup
| 2023-04-30T09:36:55Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"question answering",
"fr",
"dataset:lmqg/qg_frquad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-25T19:28:37Z |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: fr
datasets:
- lmqg/qg_frquad
pipeline_tag: text2text-generation
tags:
- question answering
widget:
- text: "question: En quelle année a-t-on trouvé trace d'un haut fourneau similaire?, context: Cette technologie ne disparaît qu'au début du XXe siècle. On retrouve vers 1900 un haut fourneau similaire dans le Bulacan, aux Philippines. Plus tard encore, le « haut fourneau dans la cour » prôné par Mao Zedong pendant le Grand Bond en avant est de ce type. L'expérience n'est un échec technique que dans les régions où le savoir-faire n'existe pas, ou a disparu."
example_title: "Question Answering Example 1"
- text: "question: Comment appelle-t-on la Guerre de 14-18 ?, context: Ce black dog peut être lié à des évènements traumatisants issus du monde extérieur, tels que son renvoi de l'Amirauté après la catastrophe des Dardanelles, lors de la Grande Guerre de 14-18, ou son rejet par l'électorat en juillet 1945. On sait également que dans ces deux cas, la guérison, certes lente et douloureuse et jamais complète ni définitive, se fera grâce à la peinture. D'un autre côté, étant donnés les symptômes de ce mal que Churchill éprouvait de plus en plus, il ne pouvait rien moins qu'être purement associé à de telles causes extrinsèques, ce qui correspond au profil classique de la dépression majeure unipolaire ou bipolaire."
example_title: "Question Answering Example 2"
model-index:
- name: lmqg/mbart-large-cc25-frquad-qa
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_frquad
type: default
args: default
metrics:
- name: BLEU4 (Question Answering)
type: bleu4_question_answering
value: 26.33
- name: ROUGE-L (Question Answering)
type: rouge_l_question_answering
value: 38.14
- name: METEOR (Question Answering)
type: meteor_question_answering
value: 31.8
- name: BERTScore (Question Answering)
type: bertscore_question_answering
value: 92.2
- name: MoverScore (Question Answering)
type: moverscore_question_answering
value: 77.16
- name: AnswerF1Score (Question Answering)
type: answer_f1_score__question_answering
value: 60.48
- name: AnswerExactMatch (Question Answering)
type: answer_exact_match_question_answering
value: 39.34
---
# Model Card of `lmqg/mbart-large-cc25-frquad-qa`
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) for the question answering task on the [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25)
- **Language:** fr
- **Training data:** [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="fr", model="lmqg/mbart-large-cc25-frquad-qa")
# model prediction
answers = model.answer_q(list_question="En quelle année a-t-on trouvé trace d'un haut fourneau similaire?", list_context=" Cette technologie ne disparaît qu'au début du XXe siècle. On retrouve vers 1900 un haut fourneau similaire dans le Bulacan, aux Philippines. Plus tard encore, le « haut fourneau dans la cour » prôné par Mao Zedong pendant le Grand Bond en avant est de ce type. L'expérience n'est un échec technique que dans les régions où le savoir-faire n'existe pas, ou a disparu.")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/mbart-large-cc25-frquad-qa")
output = pipe("question: En quelle année a-t-on trouvé trace d'un haut fourneau similaire?, context: Cette technologie ne disparaît qu'au début du XXe siècle. On retrouve vers 1900 un haut fourneau similaire dans le Bulacan, aux Philippines. Plus tard encore, le « haut fourneau dans la cour » prôné par Mao Zedong pendant le Grand Bond en avant est de ce type. L'expérience n'est un échec technique que dans les régions où le savoir-faire n'existe pas, ou a disparu.")
```
## Evaluation
- ***Metric (Question Answering)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qa/raw/main/eval/metric.first.answer.paragraph_question.answer.lmqg_qg_frquad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:-----------------------------------------------------------------|
| AnswerExactMatch | 39.34 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| AnswerF1Score | 60.48 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| BERTScore | 92.2 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_1 | 37.27 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_2 | 32.61 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_3 | 29.23 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_4 | 26.33 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| METEOR | 31.8 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| MoverScore | 77.16 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| ROUGE_L | 38.14 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_frquad
- dataset_name: default
- input_types: ['paragraph_question']
- output_types: ['answer']
- prefix_types: None
- model: facebook/mbart-large-cc25
- max_length: 512
- max_length_output: 32
- epoch: 15
- batch: 32
- lr: 0.0002
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qa/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
mgarciav/ppo-LunarLander-v2
|
mgarciav
| 2023-04-30T09:31:30Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-30T09:31:22Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -183.95 +/- 112.14
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo-exp',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'mgarciav/ppo-LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
kucharskipj/ppo-LunarLander-v2
|
kucharskipj
| 2023-04-30T09:17:26Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-15T23:16:38Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 282.83 +/- 9.06
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
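Until the TODO above is filled in, here is a minimal loading sketch; the checkpoint filename is an assumption based on the course convention, so check the repo's file list if it differs:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; inspect the repo files on the Hub if loading fails.
checkpoint = load_from_hub(repo_id="kucharskipj/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
print(model.policy)
```
From there, `evaluate_policy` from `stable_baselines3.common.evaluation` can reproduce the reported mean reward.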
|
jeferyai/myusemodel
|
jeferyai
| 2023-04-30T09:01:29Z | 0 | 23 | null |
[
"region:us"
] | null | 2023-03-11T14:10:24Z |
# This is my personal model collection
# !!The following files were collected from various sources for my own convenience when installing them. Please do not use them carelessly; I take no responsibility!!
# Fine-tuned Model Checkpoints(~/models/Stable-diffusion/)
## Personal backup series
## chilloutmix series
chilloutmix_.safetensors、
chilloutmix_Ni.safetensors、
chilloutmix_NiPrunedFp16.safetensors、
chilloutmix_NiPrunedFp16Fix.safetensors、
chilloutmix_NiPrunedFp32.safetensors、
chilloutmix_NiPrunedFp32Fix.safetensors
https://civitai.com/models/6424/chilloutmix
## anything series
anything-v4.5.safetensors、
anything-v4.0.vae.pt
https://huggingface.co/andite/anything-v4.0/tree/main
## deliberate series
deliberate_v2.safetensors、
deliberate_v11.safetensors、
deliberate_v1.safetensors
https://civitai.com/models/4823/deliberate
## CyriousMix series
cyriousmix_14.safetensors、
cyriousmix_v12.safetensors、
cyriousmix_v1Weirdsperiment5.ckpt
https://civitai.com/models/6260/cyriousmix
## stable-diffusion-2.1 series
v2-1_768-ema-pruned.safetensors、
v2-1_768-nonema-pruned.safetensors
https://huggingface.co/stabilityai/stable-diffusion-2-1/tree/main
## YesMix series
yesmix_v16Original.safetensors、
yesmix_v16.safetensors、
yesmix_v15.safetensors、
yesmix_v10.safetensors
https://civitai.com/models/9139/yesmix
# LORA(~/models/lora/)
## Personal backup series
koreanDollLikeness_v10.safetensors、
taiwanDollLikeness_v10.safetensors、
japaneseDollLikeness_v10.safetensors、
chilloutmixss20_v20.safetensors、
chilloutmixss30_v30.safetensors
## chilloutmixss_xss10.safetensors
https://civitai.com/models/10850/chilloutmixss
## sxzLeonSKennedyEduard_sxzLeon.safetensors
https://civitai.com/models/16086/sxz-leon-s-kennedy-eduard-badaluta-resident-evil
## Fashion Girl series
fashionGirl_v50.safetensors、
fashionGirl_v47.safetensors、
fashionGirl_v45.safetensors、
fashionGirl_v40.safetensors、
fashionGirl_v36.safetensors、
fashionGirl_v35.safetensors、
fashionGirl_v30ForSD15AndWaifu.safetensors、
fashionGirl_v30.safetensors、
fashionGirl_v26.safetensors、
fashionGirl_v25.safetensors、
fashionGirl_v20SmallFileSize.safetensors、
fashionGirl_v20.safetensors、
fashionGirl_v10.safetensors
https://civitai.com/models/8217/fashion-girl
## seeThroughSilhouette_v10.safetensors
https://civitai.com/models/11130/see-through-silhouette
# TEXTUAL INVERSION(~/embeddings/)
## Ulzzang-6500 series
ulzzang-6500-v1.1.bin、
ulzzang-6500.pt
https://civitai.com/models/8109/ulzzang-6500-korean-doll-aesthetic
## pureerosface_v1.pt
https://civitai.com/models/4514/pure-eros-face
## bad-hands-5.pt
https://cdn.discordapp.com/attachments/1032948846197747731/1069660323709190195/bad-hands-5.pt
## easynegative.safetensors
https://civitai.com/models/7808/easynegative
# Aesthetic Gradients(~/aesthetic_embeddings/)
# Hypernetwork(~/models/hypernetworks/)
|
Aeala/Alpaca-elina-65b-4bit
|
Aeala
| 2023-04-30T08:39:54Z | 1,455 | 7 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-29T11:36:16Z |
## Model Info
Merge of ChanSung's [Alpaca-LoRA-65B-elina](https://huggingface.co/LLMs/Alpaca-LoRA-65B-elina)
## Benchmarks
Coming soon...
|
xyz99/iorimoelora
|
xyz99
| 2023-04-30T08:36:19Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-30T08:32:57Z |
---
license: creativeml-openrail-m
---
|
DreamPerson/LyCORIS
|
DreamPerson
| 2023-04-30T08:22:03Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-30T08:09:44Z |
---
license: creativeml-openrail-m
---
|
Fixmouth/Xlxxl
|
Fixmouth
| 2023-04-30T08:16:19Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-30T08:13:23Z |
---
license: creativeml-openrail-m
---
|
Apocalypse-19/doom_deadly_corridor
|
Apocalypse-19
| 2023-04-30T08:13:37Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-30T08:13:29Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_deadly_corridor
type: doom_deadly_corridor
metrics:
- type: mean_reward
value: 17.10 +/- 8.74
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_deadly_corridor** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Apocalypse-19/doom_deadly_corridor
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_deadly_corridor --train_dir=./train_dir --experiment=doom_deadly_corridor
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_deadly_corridor --train_dir=./train_dir --experiment=doom_deadly_corridor --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count it concluded at.
|
darkblack/TESTING
|
darkblack
| 2023-04-30T07:51:34Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-30T07:49:16Z |
---
license: creativeml-openrail-m
---
|
Tengisbold/distilbert-base-multilingual-cased-ner-demo
|
Tengisbold
| 2023-04-30T07:25:24Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"mn",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-30T06:54:50Z |
---
language:
- mn
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-multilingual-cased-ner-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-ner-demo
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1687
- Precision: 0.8684
- Recall: 0.8891
- F1: 0.8786
- Accuracy: 0.9693
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2009 | 1.0 | 572 | 0.1271 | 0.8074 | 0.8440 | 0.8253 | 0.9590 |
| 0.0951 | 2.0 | 1144 | 0.1069 | 0.8469 | 0.8768 | 0.8616 | 0.9671 |
| 0.063 | 3.0 | 1716 | 0.1136 | 0.8486 | 0.8783 | 0.8632 | 0.9680 |
| 0.0444 | 4.0 | 2288 | 0.1221 | 0.8506 | 0.8808 | 0.8654 | 0.9675 |
| 0.0303 | 5.0 | 2860 | 0.1389 | 0.8576 | 0.8823 | 0.8698 | 0.9677 |
| 0.0217 | 6.0 | 3432 | 0.1457 | 0.8683 | 0.8878 | 0.8779 | 0.9685 |
| 0.0157 | 7.0 | 4004 | 0.1542 | 0.8661 | 0.8873 | 0.8766 | 0.9692 |
| 0.0121 | 8.0 | 4576 | 0.1615 | 0.8730 | 0.8878 | 0.8803 | 0.9694 |
| 0.0094 | 9.0 | 5148 | 0.1675 | 0.8683 | 0.8883 | 0.8782 | 0.9688 |
| 0.0077 | 10.0 | 5720 | 0.1687 | 0.8684 | 0.8891 | 0.8786 | 0.9693 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
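The generated card omits usage; a minimal sketch follows, assuming the checkpoint exposes its tag set via `model.config.id2label` (names may be generic `LABEL_n` if the trainer did not set them):
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "Tengisbold/distilbert-base-multilingual-cased-ner-demo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# "Ulaanbaatar is the capital of Mongolia."
inputs = tokenizer("Улаанбаатар хот Монгол Улсын нийслэл юм.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
for token, pred in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]),
                       logits.argmax(dim=-1)[0]):
    print(token, model.config.id2label[pred.item()])
```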
|
AlpacaAlice/t5-end2end-questions-generation
|
AlpacaAlice
| 2023-04-30T07:14:58Z | 160 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:squad_modified_for_t5_qg",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-04-29T08:20:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_modified_for_t5_qg
model-index:
- name: t5-end2end-questions-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad_modified_for_t5_qg dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5884 | 0.34 | 100 | 1.9159 |
| 1.9705 | 0.68 | 200 | 1.7310 |
| 1.8439 | 1.02 | 300 | 1.6672 |
| 1.7426 | 1.35 | 400 | 1.6382 |
| 1.7147 | 1.69 | 500 | 1.6199 |
| 1.6908 | 2.03 | 600 | 1.6053 |
| 1.6315 | 2.37 | 700 | 1.5967 |
| 1.627 | 2.71 | 800 | 1.5939 |
| 1.6122 | 3.05 | 900 | 1.5877 |
| 1.5706 | 3.39 | 1000 | 1.5861 |
| 1.5708 | 3.73 | 1100 | 1.5742 |
| 1.5534 | 4.06 | 1200 | 1.5798 |
| 1.5351 | 4.4 | 1300 | 1.5738 |
| 1.5226 | 4.74 | 1400 | 1.5757 |
| 1.5187 | 5.08 | 1500 | 1.5727 |
| 1.4963 | 5.42 | 1600 | 1.5710 |
| 1.4841 | 5.76 | 1700 | 1.5668 |
| 1.5025 | 6.1 | 1800 | 1.5688 |
| 1.4778 | 6.44 | 1900 | 1.5717 |
| 1.4769 | 6.77 | 2000 | 1.5674 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
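The generated card omits usage; here is a hedged sketch. The `generate questions:` prefix is an assumption based on the usual squad_modified_for_t5_qg format, so adjust it if outputs look wrong:
```python
from transformers import pipeline

qg = pipeline("text2text-generation", model="AlpacaAlice/t5-end2end-questions-generation")
text = ("generate questions: The Eiffel Tower was completed in 1889 "
        "and remains one of the most visited monuments in the world.")
print(qg(text, max_length=64)[0]["generated_text"])
```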
|
ngyewkong/ppo-LunarLander-v2
|
ngyewkong
| 2023-04-30T07:14:36Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-30T04:45:12Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.32 +/- 13.07
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Kunto/anass
|
Kunto
| 2023-04-30T06:58:11Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-30T06:57:13Z |
---
license: creativeml-openrail-m
---
|
Pi3141/alpaca-7b-native-enhanced-ggml
|
Pi3141
| 2023-04-30T06:04:34Z | 0 | 115 |
adapter-transformers
|
[
"adapter-transformers",
"llama",
"text-generation",
"en",
"license:wtfpl",
"region:us"
] |
text-generation
| 2023-03-30T04:05:10Z |
---
license: wtfpl
language:
- en
pipeline_tag: text-generation
tags:
- llama
library_name: adapter-transformers
---
# Alpaca Native Enhanced 7B model download for Alpaca.cpp, Llama.cpp, and Dalai
Use this command to run with llama.cpp
```sh
main -m models/ANE-7B/ggml-model-q4_1.bin -n -1 --ctx_size 2048 --batch_size 16 --keep 512 --repeat_penalty 1.0 -t 16 --temp 0.4 --top_k 30 --top_p 0.18 --interactive-first -ins --color -i -r "User:" -f prompts/alpacanativeenhanced.txt
```
The contents of `prompts/alpacanativeenhanced.txt` should be:
```txt
You are an AI language model designed to assist the User by answering their questions, offering advice, and engaging in casual conversation in a friendly, helpful, and informative manner. You respond clearly, coherently, and you consider the conversation history.
User: Hey, how's it going?
Assistant: Hey there! I'm doing great, thank you. What can I help you with today? Let's have a fun chat!
```
Original model https://huggingface.co/8bit-coder/alpaca-7b-nativeEnhanced
|
NoufAlkhorayef/ppo-LunarLander-v2-TEST
|
NoufAlkhorayef
| 2023-04-30T05:53:57Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-30T05:53:31Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.89 +/- 24.29
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
LLMs/Alpaca-LoRA-65B-elina
|
LLMs
| 2023-04-30T05:40:36Z | 0 | 6 | null |
[
"llama",
"llm",
"text2text-generation",
"license:apache-2.0",
"region:us"
] |
text2text-generation
| 2023-04-29T01:57:13Z |
---
license: apache-2.0
pipeline_tag: text2text-generation
tags:
- llama
- llm
---
This is a LoRA checkpoint fine-tuned with the following CLI. The fine-tuning process is logged in the [W&B dashboard](https://wandb.ai/chansung18/alpaca_lora/runs/9atnn649?workspace=user-chansung18). I used a DGX workstation with 8 x A100 (40G).
```console
python finetune.py \
--base_model='elinas/llama-65b-hf-transformers-4.29' \
--data_path='alpaca_data.json' \
--num_epochs=10 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./lora-alpaca-65b-elinas' \
--lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \
--lora_r=16 \
--lora_alpha=32 \
--batch_size=1024 \
--micro_batch_size=15
```
This LoRA checkpoint is recommended to be used with `transformers >= 4.29`, which currently (4/30/2023) must be installed with the following command:
```console
pip install git+https://github.com/huggingface/transformers.git
```
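A loading sketch (not in the original card), assuming the adapter applies on top of the same base model used in the CLI above; note that even in 8-bit, a 65B model needs on the order of 65 GB of GPU memory:
```python
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base = "elinas/llama-65b-hf-transformers-4.29"
tokenizer = LlamaTokenizer.from_pretrained(base)
model = LlamaForCausalLM.from_pretrained(base, load_in_8bit=True, device_map="auto")
# Apply the LoRA adapter weights from this repo
model = PeftModel.from_pretrained(model, "LLMs/Alpaca-LoRA-65B-elina")
```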
|
andyqin18/finetuned-bert-uncased
|
andyqin18
| 2023-04-30T05:34:16Z | 437 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-26T04:06:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: finetuned-bert-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Model description
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on this [Kaggle dataset](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge).
It achieves the following results on the evaluation set:
- Loss: 0.0507
## Intended uses
The model is intended to be used for detecting 6 labels of toxicity.
It takes a comment as a string and predicts the probability of each of the 6 types of toxicity (a float between 0 and 1).
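A minimal inference sketch matching the description above (not part of the original card); the six label names are assumed to follow the Jigsaw convention, so confirm with `model.config.id2label`:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "andyqin18/finetuned-bert-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("You are a wonderful person!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Multi-label toxicity: a sigmoid per label, not a softmax across labels
probs = torch.sigmoid(logits)[0]
for i, p in enumerate(probs):
    print(model.config.id2label[i], round(float(p), 4))
```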
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0525 | 1.0 | 1250 | 0.0482 |
| 0.037 | 2.0 | 2500 | 0.0445 |
| 0.0275 | 3.0 | 3750 | 0.0489 |
| 0.0188 | 4.0 | 5000 | 0.0491 |
| 0.0146 | 5.0 | 6250 | 0.0507 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
|
pradeep4321/valves
|
pradeep4321
| 2023-04-30T05:23:54Z | 193 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-04-30T05:23:42Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: valves
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.1071428582072258
---
# valves
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
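A quick inference sketch (not in the autogenerated card), assuming the standard image-classification pipeline; the image path is a placeholder:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="pradeep4321/valves")
# Placeholder path: point this at a photo of a valve
print(classifier("valve_photo.jpg"))
```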
## Example Images
#### ball valves

#### butterfly valves

#### gate valves

#### globe valves

#### pinch valves

|
Pi3141/vicuna-7b-v1.1-ggml
|
Pi3141
| 2023-04-30T04:44:05Z | 0 | 5 |
adapter-transformers
|
[
"adapter-transformers",
"llama",
"text-generation",
"en",
"region:us"
] |
text-generation
| 2023-04-30T04:03:47Z |
---
language:
- en
pipeline_tag: text-generation
tags:
- llama
library_name: adapter-transformers
---
# Vicuna 7B model download for llama.cpp
All credits go to lmsys for creating the model
https://huggingface.co/lmsys/vicuna-7b-delta-v1.1
|
tanishabhagwanani/distilbert-base-uncased-finetuned-FYP
|
tanishabhagwanani
| 2023-04-30T04:16:00Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-25T11:27:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-FYP
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-FYP
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0921
- Accuracy: 0.9957
- F1: 0.9957
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 2.1435 | 1.0 | 20 | 1.7903 | 0.7696 | 0.7462 |
| 1.5449 | 2.0 | 40 | 1.0549 | 0.9565 | 0.9603 |
| 1.0008 | 3.0 | 60 | 0.5800 | 0.9913 | 0.9912 |
| 0.6252 | 4.0 | 80 | 0.3311 | 0.9957 | 0.9957 |
| 0.3833 | 5.0 | 100 | 0.2076 | 0.9957 | 0.9957 |
| 0.2496 | 6.0 | 120 | 0.1470 | 0.9957 | 0.9957 |
| 0.182 | 7.0 | 140 | 0.1173 | 0.9957 | 0.9957 |
| 0.1475 | 8.0 | 160 | 0.1017 | 0.9957 | 0.9957 |
| 0.1279 | 9.0 | 180 | 0.0944 | 0.9957 | 0.9957 |
| 0.1197 | 10.0 | 200 | 0.0921 | 0.9957 | 0.9957 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aistar/4x_kawaii_mix
|
aistar
| 2023-04-30T03:54:04Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-30T03:53:18Z |
---
license: creativeml-openrail-m
---
|
yyassin/ppo-LunarLander-v2
|
yyassin
| 2023-04-30T03:47:57Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-30T03:39:13Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.45 +/- 19.18
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
cyrodw/ppo-LunarLander-v2
|
cyrodw
| 2023-04-30T03:38:22Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-29T16:08:42Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 284.66 +/- 18.37
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ntrant7/a2c-PandaReachDense-v2
|
ntrant7
| 2023-04-30T03:35:02Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-27T09:10:26Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.09 +/- 0.33
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
JosephusCheung/GuanacoVQA
|
JosephusCheung
| 2023-04-30T02:58:40Z | 0 | 17 | null |
[
"visual-question-answering",
"en",
"zh",
"ja",
"de",
"dataset:JosephusCheung/GuanacoVQADataset",
"license:gpl-3.0",
"region:us"
] |
visual-question-answering
| 2023-04-24T13:34:23Z |
---
license: gpl-3.0
datasets:
- JosephusCheung/GuanacoVQADataset
language:
- en
- zh
- ja
- de
pipeline_tag: visual-question-answering
---
The following content is currently a work in progress and does not represent the final quality.
Alignment for the multilingual VQA tasks is being conducted on blip2-flan-t5-xxl and Guanaco using only linear layers.
The latest weight file is provided here, based on the implementation of MiniGPT-4.
This model supports English, Chinese, Japanese, and German, and requires the combined use of the Guanaco 7B LLM model.
A portion of the dataset has already been released.
|
theastro/cliv-beta1
|
theastro
| 2023-04-30T02:54:55Z | 30 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-04-30T02:33:12Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
# <u>Cliv v1 (Beta)</u>
# You can visit our page at [cliv.art](https://cliv.art/)
## Trained by [theastro](https://huggingface.co/theastro/)
This model was trained on Stable Diffusion v1.5.
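A minimal generation sketch (not part of the original card), assuming the standard `StableDiffusionPipeline` loading path indicated by the repo's `diffusers:StableDiffusionPipeline` tag; the prompt is illustrative:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("theastro/cliv-beta1", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe("portrait of a girl, soft lighting, highly detailed").images[0]
image.save("cliv_sample.png")
```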
# Some arts:
<img style="border-radius: 25px;" src="https://huggingface.co/theastro/cliv-beta1/resolve/main/sample_images/00054-3468404438.png" alt="cliv-v1" width="312px" height="auto">
<img style="border-radius: 25px;" src="https://huggingface.co/theastro/cliv-beta1/resolve/main/sample_images/00050-3574198102.png" alt="cliv-v1" width="312px" height="auto">
<img style="border-radius: 25px;" src="https://huggingface.co/theastro/cliv-beta1/resolve/main/sample_images/00055-2502901918.png" alt="cliv-v1" width="312px" height="auto">
<img style="border-radius: 25px;" src="https://huggingface.co/theastro/cliv-beta1/resolve/main/sample_images/00085-714245376.png" alt="cliv-v1" width="312px" height="auto">
|
vihangd/hindi-dolly-alpaca-lora-7b
|
vihangd
| 2023-04-30T02:09:44Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2023-04-30T01:57:51Z |
---
license: other
---
# Hugging Face Model - Hindi Finetuned
This repository contains a Hugging Face model that has been fine-tuned on a Hindi dataset. The model uses the `peft` library for generating responses.
## Usage
To use the model, first import the necessary libraries:
```python
from peft import PeftModel
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig
```
Next, load the tokenizer and model:
```python
tokenizer = LlamaTokenizer.from_pretrained("yahma/llama-7b-hf")
model = LlamaForCausalLM.from_pretrained(
"yahma/llama-7b-hf",
load_in_8bit=True,
device_map="auto",
)
```
Then, load the `PeftModel` with the specified pre-trained model and path to the peft model:
```python
model = PeftModel.from_pretrained(model, "./hindi-dolly-alpaca-lora-7b")
```
Next, define a function to generate a prompt:
```python
def generate_prompt(instruction, input=None):
if input:
return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Input:
{input}
### Response:"""
else:
return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:"""
```
Finally, define a function to evaluate the model:
```python
generation_config = GenerationConfig(
temperature=0.1,
top_p=0.75,
num_beams=4,
)
def evaluate(model, instruction, input=None):
prompt = generate_prompt(instruction, input)
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].cuda()
generation_output = model.generate(
input_ids=input_ids,
generation_config=generation_config,
return_dict_in_generate=True,
output_scores=True,
max_new_tokens=256
)
for s in generation_output.sequences:
output = tokenizer.decode(s)
print("Response:", output.split("### Response:")[1].strip())
instruct = input("Instruction: ")
evaluate(model, instruct)
```
To generate a response, simply run the `evaluate` function with an instruction and optional input:
```python
instruct = "Write a response that appropriately completes the request."
input = "This is a sample input."
evaluate(model, instruct, input)
```
This will output a response that completes the request.
|
tsumeone/stable-vicuna-13B-4bit-128g-cuda
|
tsumeone
| 2023-04-30T01:46:47Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-30T01:05:24Z |
Quantized version of this: https://huggingface.co/TheBloke/stable-vicuna-13B-HF
Big thank you to TheBloke for uploading the HF version above. Unfortunately, his GPTQ quant doesn't run on 0cc4m's fork of KAI/GPTQ so I am uploading one that does.
GPTQ quantization using https://github.com/0cc4m/GPTQ-for-LLaMa for compatibility with 0cc4m's fork of KoboldAI.
Command used to quantize:
```
python llama.py c:\stable-vicuna-13B-HF c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors 4bit-128g.safetensors
```
This model works best with the following prompting. Also, it really does not like to stop on its own and will likely keep going on forever if you let it.
```
### Human:
What is 2+2?
### Assistant:
```
|
AlekseyKorshuk/llama-7b-chatml
|
AlekseyKorshuk
| 2023-04-30T01:35:10Z | 8 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-29T15:53:42Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: llama-7b-chatml
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-7b-chatml
This model is a fine-tuned version of [zpn/llama-7b](https://huggingface.co/zpn/llama-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7373
- Accuracy: 0.2687
- Entropy: 0.6897
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Entropy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------:|
| 0.6533 | 1.0 | 817 | 0.7036 | 0.2683 | 0.7874 |
| 0.4956 | 2.0 | 1634 | 0.7373 | 0.2687 | 0.6897 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0-rc1
- Datasets 2.10.1
- Tokenizers 0.13.3
|
dimitriz/greek-media-bert-base-uncased
|
dimitriz
| 2023-04-30T00:59:30Z | 121 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"feature-extraction",
"text",
"language-modeling",
"pretraining",
"greek-media",
"domain-adaptation",
"fill-mask",
"el",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-03-02T08:53:19Z |
---
language:
- el
tags:
- text
- language-modeling
- bert
- pretraining
- greek-media
- domain-adaptation
pipeline_tag: fill-mask
metrics:
- accuracy
model-index:
- name: greek-media-bert-base-uncased
results: []
---
# Greek Media BERT (uncased)
This model is a domain-adapted version of [nlpaueb/bert-base-greek-uncased-v1](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1) on Greek media centric data.
## Model description
Details will be updated soon.
## Intended uses & limitations
Details will be updated soon.
## Training and evaluation data
Details will be updated soon.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
Details will be updated soon.
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.12.0+cu116
- Tensorflow 2.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
### Citation
The model was officially released with the article "PIMA: Parameter-Shared Intelligent Media Analytics Framework for Low Resource Languages" by Dimitrios Zaikis, Nikolaos Stylianou and Ioannis Vlahavas,
in the Special Issue "New Techniques of Machine Learning and Deep Learning in Text Classification", Applied Sciences Journal, 2023 (https://www.mdpi.com/2174928).
If you use the model, please cite the following:
```bibtex
@Article{app13053265,
AUTHOR = {Zaikis, Dimitrios and Stylianou, Nikolaos and Vlahavas, Ioannis},
TITLE = {PIMA: Parameter-Shared Intelligent Media Analytics Framework for Low Resource Languages},
JOURNAL = {Applied Sciences},
VOLUME = {13},
YEAR = {2023},
NUMBER = {5},
ARTICLE-NUMBER = {3265},
URL = {https://www.mdpi.com/2076-3417/13/5/3265},
ISSN = {2076-3417},
DOI = {10.3390/app13053265}
}
```
|
EinsZwo/en-to-de_longcontext
|
EinsZwo
| 2023-04-30T00:32:00Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-04-29T23:51:03Z |
---
license: cc-by-4.0
tags:
- generated_from_keras_callback
model-index:
- name: EinsZwo/en-to-de_longcontext
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# EinsZwo/en-to-de_longcontext
This model is a fine-tuned version of [EinsZwo/en-to-de_foursentcontext](https://huggingface.co/EinsZwo/en-to-de_foursentcontext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2164
- Validation Loss: 1.3578
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 2241, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.3692 | 1.3708 | 0 |
| 1.2695 | 1.3627 | 1 |
| 1.2164 | 1.3578 | 2 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jtamph/Verisi
|
jtamph
| 2023-04-30T00:29:34Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-30T00:24:21Z |
---
license: creativeml-openrail-m
---
|
ange0102/Reinforce-CartPole-v1
|
ange0102
| 2023-04-30T00:16:33Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-30T00:16:21Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
bouim/whisper-small-ar-12hrsdarijadata-April29
|
bouim
| 2023-04-30T00:00:35Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-04-29T21:19:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-ar-12hrsdarijadata-April29
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-ar-12hrsdarijadata-April29
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9985
- Wer: 77.7026
- Cer: 48.3376
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.25e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 1.8036 | 0.38 | 250 | 1.4314 | 92.8372 | 55.4587 |
| 1.3528 | 0.75 | 500 | 1.1339 | 79.2413 | 48.2563 |
| 1.1316 | 1.13 | 750 | 1.0272 | 76.8802 | 49.5272 |
| 1.1439 | 1.51 | 1000 | 0.9985 | 77.7026 | 48.3376 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.1.dev0
- Tokenizers 0.13.3
|
worsty/ppo-SnowballTarget2
|
worsty
| 2023-04-29T23:58:48Z | 14 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-04-29T23:58:43Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: worsty/ppo-SnowballTarget2
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
sqllama/lora-sql-context-dono
|
sqllama
| 2023-04-29T23:38:13Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-04-29T23:34:17Z |
## Setup Notes
For this model, a VM with 2 T4 GPUs was used.
To make training utilize both GPUs simultaneously, the following command was used to launch it:
```
WORLD_SIZE=2 CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node=2 --master_port=1234 finetune.py --base_model 'decapoda-research/llama-7b-hf' --data_path 'b-mc2/sql-create-context' --output_dir './lora-alpaca' --num_epochs 1 --micro_batch_size 16
```
Note 1: The micro batch size was increased from the default of 4 to 16. Other runs suggest it could be raised further; this was a first attempt.
Note 2: The output directory was initially `lora-alpaca`; its contents were moved to a new folder when the git repository was initialized.
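For completeness, loading the resulting adapter for inference could look roughly like this (a hedged sketch; it assumes the adapter weights in this repo follow the standard `peft` layout saved by `finetune.py`, and the prompt is just an example):
```python
# Hedged sketch: load the base LLaMA model and apply this repo's LoRA adapter.
# Assumes the adapter files follow the standard peft layout saved by finetune.py.
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = PeftModel.from_pretrained(base, "sqllama/lora-sql-context-dono")

prompt = "Write a SQL query that selects all users older than 30."  # example only
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```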
## Log
(sqltest) chrisdono@deep-learning-duo-t4-3:~/alpaca-lora$ WORLD_SIZE=2 CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node=2 --master_port=1234 finetune.py --base_model 'decapoda-research/llama-7b-hf' --data_path 'b-mc2/sql-create-context' --output_dir './lora-alpaca' --num_epochs 1 --micro_batch_size 16
WARNING:torch.distributed.run:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
/opt/conda/envs/sqltest/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: /opt/conda/envs/sqltest did not contain libcudart.so as expected! Searching further paths...
warn(msg)
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.5
CUDA SETUP: Detected CUDA version 113
CUDA SETUP: Loading binary /opt/conda/envs/sqltest/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda113.so...
/opt/conda/envs/sqltest/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: /opt/conda/envs/sqltest did not contain libcudart.so as expected! Searching further paths...
warn(msg)
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.5
CUDA SETUP: Detected CUDA version 113
CUDA SETUP: Loading binary /opt/conda/envs/sqltest/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda113.so...
Training Alpaca-LoRA model with params:
base_model: decapoda-research/llama-7b-hf
data_path: b-mc2/sql-create-context
output_dir: ./lora-alpaca
batch_size: 128
micro_batch_size: 16
num_epochs: 1
learning_rate: 0.0003
cutoff_len: 256
val_set_size: 2000
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules: ['q_proj', 'v_proj']
train_on_inputs: True
add_eos_token: False
group_by_length: False
wandb_project:
wandb_run_name:
wandb_watch:
wandb_log_model:
resume_from_checkpoint: False
prompt template: alpaca
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 33/33 [01:24<00:00, 2.57s/it]
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 33/33 [01:24<00:00, 2.57s/it]
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'LLaMATokenizer'.
The class this function is called from is 'LlamaTokenizer'.
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'LLaMATokenizer'.
The class this function is called from is 'LlamaTokenizer'.
Found cached dataset json (/home/chrisdono/.cache/huggingface/datasets/b-mc2___json/b-mc2--sql-create-context-d62c31544f758e00/0.0.0/fe5dd6ea2639a6df622901539cb550cf8797e5a6b2dd7af1cf934bed8e233e6e)
0%| | 0/1 [00:00<?, ?it/s]
Found cached dataset json (/home/chrisdono/.cache/huggingface/datasets/b-mc2___json/b-mc2--sql-create-context-d62c31544f758e00/0.0.0/fe5dd6ea2639a6df622901539cb550cf8797e5a6b2dd7af1cf934bed8e233e6e)
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 9.30it/s]
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 7.83it/s]
trainable params: 4194304 || all params: 6742609920 || trainable%: 0.06220594176090199
trainable params: 4194304 || all params: 6742609920 || trainable%: 0.06220594176090199
Loading cached split indices for dataset at /home/chrisdono/.cache/huggingface/datasets/b-mc2___json/b-mc2--sql-create-context-d62c31544f758e00/0.0.0/fe5dd6ea2639a6df622901539cb550cf8797e5a6b2dd7af1cf934bed8e233e6e/cache-5a5ac0bd39fc20e0.arrow and /home/chrisdono/.cache/huggingface/datasets/b-mc2___json/b-mc2--sql-create-context-d62c31544f758e00/0.0.0/fe5dd6ea2639a6df622901539cb550cf8797e5a6b2dd7af1cf934bed8e233e6e/cache-782fec259d4b8f6a.arrow
Loading cached split indices for dataset at /home/chrisdono/.cache/huggingface/datasets/b-mc2___json/b-mc2--sql-create-context-d62c31544f758e00/0.0.0/fe5dd6ea2639a6df622901539cb550cf8797e5a6b2dd7af1cf934bed8e233e6e/cache-5a5ac0bd39fc20e0.arrow and /home/chrisdono/.cache/huggingface/datasets/b-mc2___json/b-mc2--sql-create-context-d62c31544f758e00/0.0.0/fe5dd6ea2639a6df622901539cb550cf8797e5a6b2dd7af1cf934bed8e233e6e/cache-782fec259d4b8f6a.arrow
{'loss': 2.7003, 'learning_rate': 2.9999999999999997e-05, 'epoch': 0.02}
{'loss': 2.566, 'learning_rate': 5.9999999999999995e-05, 'epoch': 0.03}
{'loss': 2.2648, 'learning_rate': 8.999999999999999e-05, 'epoch': 0.05}
{'loss': 1.657, 'learning_rate': 0.00011099999999999999, 'epoch': 0.07}
{'loss': 1.1599, 'learning_rate': 0.00014099999999999998, 'epoch': 0.08}
{'loss': 0.9037, 'learning_rate': 0.00017099999999999998, 'epoch': 0.1}
{'loss': 0.8137, 'learning_rate': 0.000201, 'epoch': 0.12}
{'loss': 0.7827, 'learning_rate': 0.00023099999999999998, 'epoch': 0.13}
{'loss': 0.7554, 'learning_rate': 0.000261, 'epoch': 0.15}
{'loss': 0.7357, 'learning_rate': 0.00029099999999999997, 'epoch': 0.17}
{'loss': 0.6893, 'learning_rate': 0.0002957831325301205, 'epoch': 0.18}
{'loss': 0.6606, 'learning_rate': 0.00028975903614457827, 'epoch': 0.2}
{'loss': 0.6506, 'learning_rate': 0.0002837349397590361, 'epoch': 0.22}
{'loss': 0.6462, 'learning_rate': 0.00027771084337349395, 'epoch': 0.23}
{'loss': 0.6315, 'learning_rate': 0.0002716867469879518, 'epoch': 0.25}
{'loss': 0.6337, 'learning_rate': 0.0002656626506024096, 'epoch': 0.27}
{'loss': 0.6223, 'learning_rate': 0.00025963855421686746, 'epoch': 0.28}
{'loss': 0.6136, 'learning_rate': 0.00025361445783132525, 'epoch': 0.3}
{'loss': 0.6198, 'learning_rate': 0.00024759036144578314, 'epoch': 0.32}
{'loss': 0.6084, 'learning_rate': 0.00024156626506024095, 'epoch': 0.33}
{'eval_loss': 0.608456552028656, 'eval_runtime': 123.856, 'eval_samples_per_second': 16.148, 'eval_steps_per_second': 1.009, 'epoch': 0.33}
{'loss': 0.6021, 'learning_rate': 0.00023554216867469876, 'epoch': 0.35}
{'loss': 0.5949, 'learning_rate': 0.0002295180722891566, 'epoch': 0.37}
{'loss': 0.5972, 'learning_rate': 0.00022349397590361444, 'epoch': 0.38}
{'loss': 0.5922, 'learning_rate': 0.00021746987951807228, 'epoch': 0.4}
{'loss': 0.5876, 'learning_rate': 0.0002114457831325301, 'epoch': 0.42}
{'loss': 0.5788, 'learning_rate': 0.00020542168674698793, 'epoch': 0.43}
{'loss': 0.5894, 'learning_rate': 0.0001993975903614458, 'epoch': 0.45}
{'loss': 0.5877, 'learning_rate': 0.0001933734939759036, 'epoch': 0.47}
{'loss': 0.5835, 'learning_rate': 0.00018734939759036142, 'epoch': 0.48}
{'loss': 0.5791, 'learning_rate': 0.00018132530120481925, 'epoch': 0.5}
{'loss': 0.5841, 'learning_rate': 0.00017530120481927712, 'epoch': 0.52}
{'loss': 0.5728, 'learning_rate': 0.00016927710843373493, 'epoch': 0.53}
{'loss': 0.569, 'learning_rate': 0.00016325301204819274, 'epoch': 0.55}
{'loss': 0.5709, 'learning_rate': 0.00015722891566265058, 'epoch': 0.57}
{'loss': 0.5762, 'learning_rate': 0.00015120481927710845, 'epoch': 0.58}
{'loss': 0.5704, 'learning_rate': 0.00014518072289156626, 'epoch': 0.6}
{'loss': 0.5661, 'learning_rate': 0.0001391566265060241, 'epoch': 0.62}
{'loss': 0.5662, 'learning_rate': 0.00013313253012048193, 'epoch': 0.63}
{'loss': 0.5674, 'learning_rate': 0.00012710843373493975, 'epoch': 0.65}
{'loss': 0.5635, 'learning_rate': 0.00012108433734939758, 'epoch': 0.67}
{'eval_loss': 0.568750262260437, 'eval_runtime': 122.9061, 'eval_samples_per_second': 16.273, 'eval_steps_per_second': 1.017, 'epoch': 0.67}
{'loss': 0.5609, 'learning_rate': 0.00011506024096385541, 'epoch': 0.69}
{'loss': 0.5724, 'learning_rate': 0.00010903614457831325, 'epoch': 0.7}
{'loss': 0.5603, 'learning_rate': 0.00010301204819277107, 'epoch': 0.72}
{'loss': 0.5599, 'learning_rate': 9.698795180722891e-05, 'epoch': 0.74}
{'loss': 0.5655, 'learning_rate': 9.096385542168674e-05, 'epoch': 0.75}
{'loss': 0.5578, 'learning_rate': 8.493975903614457e-05, 'epoch': 0.77}
{'loss': 0.5577, 'learning_rate': 7.89156626506024e-05, 'epoch': 0.79}
{'loss': 0.5606, 'learning_rate': 7.289156626506024e-05, 'epoch': 0.8}
{'loss': 0.5496, 'learning_rate': 6.686746987951806e-05, 'epoch': 0.82}
{'loss': 0.5635, 'learning_rate': 6.08433734939759e-05, 'epoch': 0.84}
{'loss': 0.5522, 'learning_rate': 5.481927710843373e-05, 'epoch': 0.85}
{'loss': 0.5572, 'learning_rate': 4.879518072289156e-05, 'epoch': 0.87}
{'loss': 0.5454, 'learning_rate': 4.2771084337349395e-05, 'epoch': 0.89}
{'loss': 0.5485, 'learning_rate': 3.6746987951807227e-05, 'epoch': 0.9}
{'loss': 0.5592, 'learning_rate': 3.072289156626506e-05, 'epoch': 0.92}
{'loss': 0.5499, 'learning_rate': 2.469879518072289e-05, 'epoch': 0.94}
{'loss': 0.55, 'learning_rate': 1.867469879518072e-05, 'epoch': 0.95}
{'loss': 0.5511, 'learning_rate': 1.2650602409638553e-05, 'epoch': 0.97}
{'loss': 0.5531, 'learning_rate': 6.626506024096385e-06, 'epoch': 0.99}
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 598/598 [4:45:30<00:00, 27.59s/it]
{'train_runtime': 17131.1027, 'train_samples_per_second': 4.47, 'train_steps_per_second': 0.035, 'train_loss': 0.7246327424129116, 'epoch': 1.0}
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 598/598 [4:45:30<00:00, 28.65s/it]
|
nolanaatama/stlslxrswhls
|
nolanaatama
| 2023-04-29T23:28:14Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-29T23:19:00Z |
---
license: creativeml-openrail-m
---
|
nergaldarski/mistoonSapphire
|
nergaldarski
| 2023-04-29T23:25:45Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-04-29T23:12:10Z |
CivitAI: https://civitai.com/models/32022/mistoonsapphire
|
MyneFactory/MF-AscendanceOfABookworm
|
MyneFactory
| 2023-04-29T23:02:44Z | 169 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"anime",
"aiart",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-01-23T01:01:54Z |
---
license: creativeml-openrail-m
language:
- en
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- anime
- aiart
---
<div style="padding-bottom: 40px">
<!--Logo-->
<div style="text-align: center;">
<img src="https://logo.mynefactory.ai/MF-AscendanceOfABookworm" alt="Myne Factory Logo" style="width:100%;">
</div>
<!--Table of contents-->
<div style="font-size: 14px; padding: 4px 8px; display: flex; justify-content: space-around; color: black; font-weight: 500;">
<a href="#model-info" style="text-decoration: none; color: #204F8F">Model Info</a> |
<a href="#recsettings" style="text-decoration: none; color: #204F8F"">Recommmended Settings</a> |
<a href="#promptformat" style="text-decoration: none; color: #204F8F"">Prompt Format</a> |
<a href="#examples" style="text-decoration: none; color: #204F8F"">Examples</a> |
<a href="#mynelinks" style="text-decoration: none; color: #204F8F"">Socials</a>
</div>
</div>
<!--Title-->
<div style="text-align: center; display: flex; flex-direction: column; padding-bottom: 10px;">
<h1 style=" font-size:38px; padding:2px; margin:20px 0 0 0">Ascendance Of A Bookworm</h1>
<span style=" font-size:18px; padding:2px; margin:5px 0 0 0">I'll Stop at Nothing to Become a Librarian</span>
</div>
<!--Example shortcuts-->
<div style="display: flex; align-items:top; justify-content:space-around; align-items: center; padding: 0px 70px;">
<a href="#example1" style="padding:10px">
<img src="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/resolve/main/Example%20pictures/V2/46321-3363481787-MFB%20style%3B%20Myne%3B%20southern%20side%3B%20face%3B.png" style="margin:0"/>
</a>
<a href="#example2" style="padding:5px">
<img src="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/resolve/main/Example%20pictures/V2/jpgs/2.jpg" style="margin:0"/>
</a>
<a href="#example3" style="padding:5px">
<img src="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/resolve/main/Example%20pictures/V2/jpgs/3.jpg" style="margin:0"/>
</a>
</div>
<div style="display: flex; align-items:top; justify-content:space-around; align-items: center; padding-bottom: 0px;">
<a href="#example4" style="padding:0px">
<img src="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/resolve/main/Example%20pictures/V2/jpgs/4.jpg" style="margin:0"/>
</a>
<a href="#example5" style="padding:20px">
<img src="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/resolve/main/Example%20pictures/V2/jpgs/5.jpg" style="margin:0"/>
</a>
<a href="#example7" style="padding:0px">
<img src="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/resolve/main/Example%20pictures/V2/jpgs/7.jpg" style="margin:0"/>
</a>
</div>
<div style="display: flex; align-items:top; justify-content:space-around; align-items: center; padding-bottom: 0px;">
<a href="#example8" style="padding:5px">
<img src="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/resolve/main/Example%20pictures/V2/jpgs/8.jpg" style="margin:0"/>
</a>
<a href="#example10" style="padding:5px">
<img src="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/resolve/main/Example%20pictures/V2/jpgs/10.jpg" style="margin:0"/>
</a>
<a href="#example11" style="padding:5px">
<img src="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/resolve/main/Example%20pictures/V2/jpgs/11.jpg" style="margin:0"/>
</a>
</div>
<div style="display: flex; align-items:top; justify-content:space-around; align-items: center; padding: 0px 70px;">
<a href="#example12" style="padding:10px">
<img src="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/resolve/main/Example%20pictures/V2/jpgs/12.jpg" style="margin:0"/>
</a>
<a href="#example13" style="padding:5px">
<img src="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/resolve/main/Example%20pictures/V2/jpgs/13.jpg" style="margin:0"/>
</a>
<a href="#example14" style="padding:5px">
<img src="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/resolve/main/Example%20pictures/V2/jpgs/14.jpg" style="margin:0"/>
</a>
</div>
<a href="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/tree/main/Example%20pictures/V2">All example images</a>
<!--Model Info-->
<div style="padding:10px; margin: 40px 0; background-color: #f9f9f9; box-shadow: 0 4px 6px rgba(0,0,0,0.1);" id="model-info">
<h2 style="font-size:28px; font-family: Arial, Helvetica, sans-serif; font-weight: bold; margin:0; color: #222;">Model Info</h2>
<p>
<div style="font-size: 18px; color: #666;"><strong>Downloads: </strong></div>
<a style="color: #333; display: block;" href="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/resolve/main/MF-AscendanceOfABookworm_V2_T3.1.ckpt">MF-AscendanceOfABookworm_V2_T3.1.ckpt (2.13 GB)</a>
<a style="color: #333; display: block;" href="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/resolve/main/MF-AscendanceOfABookworm_V2_T3.1.safetensors">MF-AscendanceOfABookworm_V2_T3.1.safetensors (2.13 GB)</a>
</p>
<!-- Technical details start here -->
<!-- Technical details end here -->
<p style="font-size: 18px; color: #666;">
<strong>Authors: </strong><span>Juusoz, Tylenol, Goldkoron and Expl0dingCat</span>
<div>Feel free to join our <a href="https://discord.gg/GdJBzaTSCF">Discord</a> community for updates on current models or models on other shows.</p></div>
</p>
</div>
<!--Prompt format-->
<div style="padding:10px; margin: 40px 0; background-color: #f9f9f9; box-shadow: 0 4px 6px rgba(0,0,0,0.1);" id="promptformat">
<h2 style="font-size:28px; font-family: Arial, Helvetica, sans-serif; font-weight: bold; margin:0; color: #222;">Prompt Format</h2>
<i style="color: #666666e8; padding-left:8px;">
<div>The prompts we trained on are available in the corresponding T{version} prompt list.txt file.</div>
<a style="color: #333" href="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/blob/main/T3.2 Prompt list.txt">T3.2 Prompt list.txt</a>
</i>
<div style="padding:8px">
<strong style="font-size: 16px; color: #333;">Format:</strong>
<code style="font-size: 14px; padding: 6px; line-height: 22px; background-color: #f5f5f5; border-radius: 4px; color: #000;">Parent tag, description prompt 1, description prompt 2, etc;</code>
</div>
<p style="font-size: 18px; color: #666;">
The parent tag serves as the primary label for the overall theme of the image and acts as the main subject. The description prompts are a comprehensive list of attributes, objects, actions, and other elements that describe the subject in detail. Each subject is separated by a semicolon (;) while the individual attributes, objects, actions, and elements are separated by a comma (,).
</p>
<h3>Example:</h3>
<div style="padding:8px">
<strong style="font-size: 16px; color: #333;">Positive prompts:</strong>
<code style="font-size: 14px; padding: 6px; line-height: 22px; background-color: #f5f5f5; border-radius: 4px; color: #000;">MFB Style; 1boy; 1girl; Lutz; Myne, poor, yellow eyes, blue hair, hairstick; (masterpiece); detailed; reflective; depth of field; 8k; photo;</code>
</div>
<div style="padding:8px">
<strong style="font-size: 16px; color: #333;">Negative prompts:</strong>
<code style="font-size: 14px; padding: 6px; line-height: 22px; background-color: #f5f5f5; border-radius: 4px; color: #000;">(bad hands, error), bad anatomy, low quality, worst quality, watermark, signature, poorly drawn face, noise, grain, nude, naked, upskirt, blind, AOB Style, solo</code>
</div>
<h3>Style tags:</h3>
<div>You can specify what style the image should generate in using the following tags.</div>
<div style="padding:8px">
<strong style="font-size: 16px; color: #333;">MyneFactoryBase style, for more artistic pictures</strong>
<code style="font-size: 14px; padding: 6px; line-height: 22px; background-color: #f5f5f5; border-radius: 4px; color: #000;">MFB Style</code>
</div>
<div style="padding:8px">
<strong style="font-size: 16px; color: #333;">Ascendance of a Bookworm style, for more anime accurate pictures</strong>
<code style="font-size: 14px; padding: 6px; line-height: 22px; background-color: #f5f5f5; border-radius: 4px; color: #000;">AOB Style</code>
</div>
<p style="font-size: 18px; color: #666;">Just because we haven’t trained on something doesn’t mean the base AI model doesn’t already know what it is, so go crazy.</p>
</div>
<!--Recommmended settings-->
<div style="padding:10px; margin: 40px 0; background-color: #f9f9f9; box-shadow: 0 4px 6px rgba(0,0,0,0.1);" id="recsettings">
<h2 style="font-size:28px; font-family: Arial, Helvetica, sans-serif; font-weight: bold; margin:0; color: #222;">Recommended Settings</h2>
<ul style="list-style-type: none; padding: 0 8px">
<li style="margin-bottom: 10px;">
<span style="display: inline-block; padding-right: 20px; font-weight: 900; color: #333;">Sampling Steps</span>
<span style="color: #666; padding-left:8px;"><strong>40</strong></span>
</li>
<li style="margin-bottom: 10px;">
<span style="display: inline-block; padding-right: 20px; font-weight: 900; color: #333;">Sampler</span>
<span style="color: #666; padding-left:8px;"><strong>DPM++ SDE Karras</strong></span>
</li>
<li style="margin-bottom: 10px;">
<span style="display: inline-block; padding-right: 20px; font-weight: 900; color: #333;">Image Size</span>
<span style="color: #666; padding-left:8px;">At least <strong>768x768</strong></span>
</li>
<li style="margin-bottom: 10px;">
<span style="display: inline-block; padding-right: 20px; font-weight: 900; color: #333;">CFG</span>
<span style="color: #666; padding-left:8px;"><strong>7</strong></span>
</li>
<li style="margin-bottom: 10px;">
<span style="display: inline-block; padding-right: 20px; font-weight: 900; color: #333;">Clip skip</span>
<span style="color: #666; padding-left:8px;"><strong>1</strong> (settings, Clip skip)</span>
</li>
</ul>
</div>
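A hedged sketch of wiring these recommendations up with `diffusers` (it assumes the repo's weights load as a `StableDiffusionPipeline`, and the scheduler flags approximate the DPM++ SDE Karras sampler; the prompt is one of the examples below):
```python
# Hedged sketch: approximate the recommended settings with diffusers.
# Assumes this repo's weights load as a StableDiffusionPipeline.
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "MyneFactory/MF-AscendanceOfABookworm", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config,
    use_karras_sigmas=True,            # "Karras" sigma schedule
    algorithm_type="sde-dpmsolver++",  # DPM++ SDE variant
)

image = pipe(
    "MFB Style; Myne, blue baptism dress, blue hair; temple;",
    negative_prompt="bad hands, bad anatomy, low quality, worst quality",
    num_inference_steps=40,  # recommended sampling steps
    guidance_scale=7,        # recommended CFG
    width=768,
    height=768,              # at least 768x768 per the table above
).images[0]
image.save("myne.png")
```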
<!--Examples-->
<div style="padding:10px; margin: 40px 0; background-color: #f9f9f9; box-shadow: 0 4px 6px rgba(0,0,0,0.1);"" id="examples">
<h2 style="font-size:28px; font-family: Arial, Helvetica, sans-serif; font-weight: bold; margin:0; color: #222;">Examples</h2>
<div style="display: flex; flex-wrap: wrap; justify-content: center;">
<!--Example 1-->
<div style="padding: 20px; width: 100%; text-align: center;" id="example1">
<a href="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/blob/main/Example%20pictures/V2/46321-3363481787-MFB%20style%3B%20Myne%3B%20southern%20side%3B%20face%3B.png">
<img src="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/resolve/main/Example%20pictures/V2/46321-3363481787-MFB%20style%3B%20Myne%3B%20southern%20side%3B%20face%3B.png" style="width: 100%; margin: 0px 0; border: 1px solid #ddd;" />
</a>
<div>
<div style="display: flex; flex-direction: column; text-align: left;">
<div style="padding: 4px 2px;">
<strong style="font-size: 16px; color: #333; display: block;">Prompt:</strong>
<code style="font-size: 14px; background-color: #f5f5f5; border-radius: 4px; color: #000;">MFB style; Myne; southern side; face;</code>
</div>
<div style="padding: 4px 2px;">
<strong style="font-size: 16px; color: #696969; display: block;">Negative Prompt:</strong>
<code style="font-size: 14px; background-color: #f5f5f59d; border-radius: 4px; color: #000;">(bad hands, error), bad anatomy, low quality, worst quality, watermark, signature, poorly drawn face, noise, grain, nude, naked, upskirt, blind, (brown hair)</code>
</div>
</div>
</div>
<div style="padding:10px 40px">
<div style="display: flex; flex-wrap: wrap; justify-content: space-between;">
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Steps:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">40</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Sampler:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">DPM++ SDE Karras</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">CFG Scale:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">7</code>
</div>
</div>
<div style="display: flex; flex-wrap: wrap; justify-content: space-between;">
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Seed:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">3363481787</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Size:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">1024x1024</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Model:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">last-AOB150ep-ep01-gs26690</code>
</div>
</div>
</div>
</div>
<!--Example 2-->
<div style="padding: 20px; width: 100%; text-align: center;" id="example2">
<a href="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/blob/main/Example%20pictures/V2/47539-2555971430-MFB%20Style%3B%201girl%3B%20Effa%2C%20green%20cloak%2C%20coat%2C%20happy%3B%20(masterpiece)%3B%20detailed%3B%20reflective%3B%20depth%20of%20field%3B%208k%3B%20photo%3B%20home%2C%20kitchen%3B.png">
<img src="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/resolve/main/Example%20pictures/V2/47539-2555971430-MFB%20Style%3B%201girl%3B%20Effa%2C%20green%20cloak%2C%20coat%2C%20happy%3B%20(masterpiece)%3B%20detailed%3B%20reflective%3B%20depth%20of%20field%3B%208k%3B%20photo%3B%20home%2C%20kitchen%3B.png" style="width: 100%; margin: 0px 0; border: 1px solid #ddd;" />
</a>
<div>
<div style="display: flex; flex-direction: column; text-align: left;">
<div style="padding: 4px 2px;">
<strong style="font-size: 16px; color: #333; display: block;">Prompt:</strong>
<code style="font-size: 14px; background-color: #f5f5f5; border-radius: 4px; color: #000;">MFB Style; 1girl; Effa, green cloak, coat, happy; (masterpiece); detailed; reflective; depth of field; 8k; photo; home, kitchen; (solo)</code>
</div>
<div style="padding: 4px 2px;">
<strong style="font-size: 16px; color: #696969; display: block;">Negative Prompt:</strong>
<code style="font-size: 14px; background-color: #f5f5f59d; border-radius: 4px; color: #000;">(bad hands, error), bad anatomy, low quality, worst quality, watermark, signature, poorly drawn face, noise, grain, nude, naked, upskirt, blind, AOB Style, twins</code>
</div>
</div>
</div>
<div style="padding:10px 40px">
<div style="display: flex; flex-wrap: wrap; justify-content: space-between;">
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Steps:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">40</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Sampler:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">DPM++ SDE Karras</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">CFG Scale:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">7</code>
</div>
</div>
<div style="display: flex; flex-wrap: wrap; justify-content: space-between;">
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Seed:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">2555971430</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Size:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">1088x1280</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Model:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">last-AOB150ep-ep01-gs26690</code>
</div>
</div>
</div>
</div>
<!--Example 3-->
<div style="padding: 20px; width: 100%; text-align: center;" id="example3">
<a href="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/blob/main/Example%20pictures/V2/46358-2191070077-MFB%20style%3B%20Tuuli%2C%20blue%20baptism%20dress%2C%20flower%20hairstick%3B%20northern%20side%2C%20temple%3B%20baptism%20ceremony.png">
<img src="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/resolve/main/Example%20pictures/V2/46358-2191070077-MFB%20style%3B%20Tuuli%2C%20blue%20baptism%20dress%2C%20flower%20hairstick%3B%20northern%20side%2C%20temple%3B%20baptism%20ceremony.png" style="width: 100%; margin: 0px 0; border: 1px solid #ddd;" />
</a>
<div>
<div style="display: flex; flex-direction: column; text-align: left;">
<div style="padding: 4px 2px;">
<strong style="font-size: 16px; color: #333; display: block;">Prompt:</strong>
<code style="font-size: 14px; background-color: #f5f5f5; border-radius: 4px; color: #000;">MFB style; Tuuli, blue baptism dress, flower hairstick; northern side, temple; baptism ceremony</code>
</div>
<div style="padding: 4px 2px;">
<strong style="font-size: 16px; color: #696969; display: block;">Negative Prompt:</strong>
<code style="font-size: 14px; background-color: #f5f5f59d; border-radius: 4px; color: #000;">(bad hands, error), bad anatomy, low quality, worst quality, watermark, signature, poorly drawn face, noise, grain, nude, naked, upskirt, blind, (brown hair)</code>
</div>
</div>
</div>
<div style="padding:10px 40px">
<div style="display: flex; flex-wrap: wrap; justify-content: space-between;">
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Steps:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">40</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Sampler:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">DPM++ SDE Karras</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">CFG Scale:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">7</code>
</div>
</div>
<div style="display: flex; flex-wrap: wrap; justify-content: space-between;">
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Seed:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">2191070077</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Size:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">1024x1024</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Model:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">last-AOB150ep-ep01-gs26690</code>
</div>
</div>
</div>
</div>
<!--Example 4-->
<div style="padding: 20px; width: 100%; text-align: center;" id="example4">
<a href="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/blob/main/Example%20pictures/V2/47427-3644149238-MFB%20Style%3B%201girl%3B%20Freida%2C%20formal%3B%20(masterpiece)%3B%20detailed%3B%20reflective%3B%20depth%20of%20field%3B%208k%3B%20photo%3B%20Freida's%20estate%2C%20(solo).png">
<img src="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/resolve/main/Example%20pictures/V2/47427-3644149238-MFB%20Style%3B%201girl%3B%20Freida%2C%20formal%3B%20(masterpiece)%3B%20detailed%3B%20reflective%3B%20depth%20of%20field%3B%208k%3B%20photo%3B%20Freida's%20estate%2C%20(solo).png" style="width: 100%; margin: 0px 0; border: 1px solid #ddd;" />
</a>
<div>
<div style="display: flex; flex-direction: column; text-align: left;">
<div style="padding: 4px 2px;">
<strong style="font-size: 16px; color: #333; display: block;">Prompt:</strong>
<code style="font-size: 14px; background-color: #f5f5f5; border-radius: 4px; color: #000;">MFB Style; 1girl; Freida, formal; (masterpiece); detailed; reflective; depth of field; 8k; photo; Freida's estate, (solo)</code>
</div>
<div style="padding: 4px 2px;">
<strong style="font-size: 16px; color: #696969; display: block;">Negative Prompt:</strong>
<code style="font-size: 14px; background-color: #f5f5f59d; border-radius: 4px; color: #000;">(bad hands, error), bad anatomy, low quality, worst quality, watermark, signature, poorly drawn face, noise, grain, nude, naked, upskirt, blind, AOB Style, solo, twins</code>
</div>
</div>
</div>
<div style="padding:10px 40px">
<div style="display: flex; flex-wrap: wrap; justify-content: space-between;">
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Steps:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">40</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Sampler:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">DPM++ SDE Karras</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">CFG Scale:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">7</code>
</div>
</div>
<div style="display: flex; flex-wrap: wrap; justify-content: space-between;">
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Seed:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">3644149238</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Size:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">1472x1088</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Model:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">last-AOB150ep-ep01-gs26690</code>
</div>
</div>
</div>
</div>
<!--Example 5-->
<div style="padding: 20px; width: 100%; text-align: center;" id="example5">
<a href="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/blob/main/Example%20pictures/V2/46760-1125709576-MFB%20Style%3B%20Lutz%2C%20smooth%20hair%2C%20formal%3B%20masterpiece%3B%20detailed%3B%20reflective%3B%20depth%20of%20field%3B%208k%3B%20photo%3B%20northern%20side%3B.png">
<img src="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/resolve/main/Example%20pictures/V2/46760-1125709576-MFB%20Style%3B%20Lutz%2C%20smooth%20hair%2C%20formal%3B%20masterpiece%3B%20detailed%3B%20reflective%3B%20depth%20of%20field%3B%208k%3B%20photo%3B%20northern%20side%3B.png" style="width: 100%; margin: 0px 0; border: 1px solid #ddd;" />
</a>
<div>
<div style="display: flex; flex-direction: column; text-align: left;">
<div style="padding: 4px 2px;">
<strong style="font-size: 16px; color: #333; display: block;">Prompt:</strong>
<code style="font-size: 14px; background-color: #f5f5f5; border-radius: 4px; color: #000;">MFB Style; Lutz, smooth hair, formal; masterpiece; detailed; reflective; depth of field; 8k; photo; northern side;</code>
</div>
<div style="padding: 4px 2px;">
<strong style="font-size: 16px; color: #696969; display: block;">Negative Prompt:</strong>
<code style="font-size: 14px; background-color: #f5f5f59d; border-radius: 4px; color: #000;">(bad hands, error), bad anatomy, low quality, worst quality, watermark, signature, poorly drawn face, noise, grain, nude, naked, upskirt, blind</code>
</div>
</div>
</div>
<div style="padding:10px 40px">
<div style="display: flex; flex-wrap: wrap; justify-content: space-between;">
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Steps:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">40</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Sampler:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">DPM++ SDE Karras</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">CFG Scale:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">7</code>
</div>
</div>
<div style="display: flex; flex-wrap: wrap; justify-content: space-between;">
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Seed:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">1125709576</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Size:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">1024x1024</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Model:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">last-AOB150ep-ep01-gs26690</code>
</div>
</div>
</div>
</div>
<!--Example 7-->
<div style="padding: 20px; width: 100%; text-align: center;" id="example7">
<a href="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/blob/main/Example%20pictures/V2/46569-3833015024-MFB%20Style%3B%20Myne%2C%20blue%20baptism%20dress%2C%20white%20flower%20hairstick%2C%20blue%20hair%3B%20masterpiece%3B%20detailed%3B%20reflective%3B%20depth%20of%20field%3B%208k%3B%20(.png">
<img src="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/resolve/main/Example%20pictures/V2/46569-3833015024-MFB%20Style%3B%20Myne%2C%20blue%20baptism%20dress%2C%20white%20flower%20hairstick%2C%20blue%20hair%3B%20masterpiece%3B%20detailed%3B%20reflective%3B%20depth%20of%20field%3B%208k%3B%20(.png" style="width: 100%; margin: 0px 0; border: 1px solid #ddd;" />
</a>
<div>
<div style="display: flex; flex-direction: column; text-align: left;">
<div style="padding: 4px 2px;">
<strong style="font-size: 16px; color: #333; display: block;">Prompt:</strong>
<code style="font-size: 14px; background-color: #f5f5f5; border-radius: 4px; color: #000;">MFB Style; Myne, blue baptism dress, white flower hairstick, blue hair; masterpiece; detailed; reflective; depth of field; 8k; (solo); photo; temple;</code>
</div>
<div style="padding: 4px 2px;">
<strong style="font-size: 16px; color: #696969; display: block;">Negative Prompt:</strong>
<code style="font-size: 14px; background-color: #f5f5f59d; border-radius: 4px; color: #000;">(bad hands, error), bad anatomy, low quality, worst quality, watermark, signature, poorly drawn face, noise, grain, nude, naked, upskirt, blind</code>
</div>
</div>
</div>
<div style="padding:10px 40px">
<div style="display: flex; flex-wrap: wrap; justify-content: space-between;">
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Steps:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">40</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Sampler:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">DPM++ 2S a Karras</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">CFG Scale:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">7</code>
</div>
</div>
<div style="display: flex; flex-wrap: wrap; justify-content: space-between;">
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Seed:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">3833015024</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Size:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">1280x1024</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Model:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">last-AOB150ep-ep01-gs26690</code>
</div>
</div>
</div>
</div>
<!--Example 8-->
<div style="padding: 20px; width: 100%; text-align: center;" id="example8">
<a href="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/blob/main/Example%20pictures/V2/47409-581812665-MFB%20Style%3B%201boy%3B%201girl%3B%20Lutz%3B%20Myne%2C%20(poor)%2C%20yellow%20eyes%2C%20dark%20blue%20hair%3B%20(masterpiece)%3B%20detailed%3B%20reflective%3B%20depth%20of%20field%3B%208k.png">
<img src="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/resolve/main/Example%20pictures/V2/47409-581812665-MFB%20Style%3B%201boy%3B%201girl%3B%20Lutz%3B%20Myne%2C%20(poor)%2C%20yellow%20eyes%2C%20dark%20blue%20hair%3B%20(masterpiece)%3B%20detailed%3B%20reflective%3B%20depth%20of%20field%3B%208k.png" style="width: 100%; margin: 0px 0; border: 1px solid #ddd;" />
</a>
<div>
<div style="display: flex; flex-direction: column; text-align: left;">
<div style="padding: 4px 2px;">
<strong style="font-size: 16px; color: #333; display: block;">Prompt:</strong>
<code style="font-size: 14px; background-color: #f5f5f5; border-radius: 4px; color: #000;">MFB Style; 1boy; 1girl; Lutz; Myne, (poor), yellow eyes, dark blue hair; (masterpiece); detailed; reflective; depth of field; 8k; photo; northern side</code>
</div>
<div style="padding: 4px 2px;">
<strong style="font-size: 16px; color: #696969; display: block;">Negative Prompt:</strong>
<code style="font-size: 14px; background-color: #f5f5f59d; border-radius: 4px; color: #000;">(bad hands, error), bad anatomy, low quality, worst quality, watermark, signature, poorly drawn face, noise, grain, nude, naked, upskirt, blind, AOB Style, solo, twins</code>
</div>
</div>
</div>
<div style="padding:10px 40px">
<div style="display: flex; flex-wrap: wrap; justify-content: space-between;">
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Steps:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">40</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Sampler:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">DPM++ 2S a Karras</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">CFG Scale:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">7</code>
</div>
</div>
<div style="display: flex; flex-wrap: wrap; justify-content: space-between;">
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Seed:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">581812665</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Size:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">1472x1088</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Model:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">last-AOB150ep-ep01-gs26690</code>
</div>
</div>
</div>
</div>
<!--Example 10-->
<div style="padding: 20px; width: 100%; text-align: center;" id="example10">
<a href="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/blob/main/Example%20pictures/V2/46628-2037890935-MFB%20Style%3B%20Tuuli%2C%20blue%20baptism%20dress%2C%20flower%20hairstick%2C%20blue%20hair%3B%20masterpiece%3B%20detailed%3B%20reflective%3B%20depth%20of%20field%3B%208k%3B%20photo%3B.png">
<img src="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/resolve/main/Example%20pictures/V2/46628-2037890935-MFB%20Style%3B%20Tuuli%2C%20blue%20baptism%20dress%2C%20flower%20hairstick%2C%20blue%20hair%3B%20masterpiece%3B%20detailed%3B%20reflective%3B%20depth%20of%20field%3B%208k%3B%20photo%3B.png" style="width: 100%; margin: 0px 0; border: 1px solid #ddd;" />
</a>
<div>
<div style="display: flex; flex-direction: column; text-align: left;">
<div style="padding: 4px 2px;">
<strong style="font-size: 16px; color: #333; display: block;">Prompt:</strong>
<code style="font-size: 14px; background-color: #f5f5f5; border-radius: 4px; color: #000;">MFB Style; Tuuli, blue baptism dress, flower hairstick, blue hair; masterpiece; detailed; reflective; depth of field; 8k; photo; northern side; baptism ceremony; petals; angel wings; halo; night</code>
</div>
<div style="padding: 4px 2px;">
<strong style="font-size: 16px; color: #696969; display: block;">Negative Prompt:</strong>
<code style="font-size: 14px; background-color: #f5f5f59d; border-radius: 4px; color: #000;">(bad hands, error), bad anatomy, low quality, worst quality, watermark, signature, poorly drawn face, noise, grain, nude, naked, upskirt, blind</code>
</div>
</div>
</div>
<div style="padding:10px 40px">
<div style="display: flex; flex-wrap: wrap; justify-content: space-between;">
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Steps:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">40</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Sampler:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">DPM++ SDE Karras</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">CFG Scale:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">7</code>
</div>
</div>
<div style="display: flex; flex-wrap: wrap; justify-content: space-between;">
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Seed:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">2037890935</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Size:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">1024x1024</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Model:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">last-AOB150ep-ep01-gs26690</code>
</div>
</div>
</div>
</div>
<!--Example 11-->
<div style="padding: 20px; width: 100%; text-align: center;" id="example11">
<a href="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/blob/main/Example%20pictures/V2/47428-3644149239-MFB%20Style%3B%201girl%3B%20Freida%2C%20formal%3B%20(masterpiece)%3B%20detailed%3B%20reflective%3B%20depth%20of%20field%3B%208k%3B%20photo%3B%20Freida's%20estate%2C%20(solo).png">
<img src="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/resolve/main/Example%20pictures/V2/47428-3644149239-MFB%20Style%3B%201girl%3B%20Freida%2C%20formal%3B%20(masterpiece)%3B%20detailed%3B%20reflective%3B%20depth%20of%20field%3B%208k%3B%20photo%3B%20Freida's%20estate%2C%20(solo).png" style="width: 100%; margin: 0px 0; border: 1px solid #ddd;" />
</a>
<div>
<div style="display: flex; flex-direction: column; text-align: left;">
<div style="padding: 4px 2px;">
<strong style="font-size: 16px; color: #333; display: block;">Prompt:</strong>
<code style="font-size: 14px; background-color: #f5f5f5; border-radius: 4px; color: #000;">MFB Style; 1girl; Freida, formal; (masterpiece); detailed; reflective; depth of field; 8k; photo; Freida's estate, (solo)</code>
</div>
<div style="padding: 4px 2px;">
<strong style="font-size: 16px; color: #696969; display: block;">Negative Prompt:</strong>
<code style="font-size: 14px; background-color: #f5f5f59d; border-radius: 4px; color: #000;">(bad hands, error), bad anatomy, low quality, worst quality, watermark, signature, poorly drawn face, noise, grain, nude, naked, upskirt, blind, AOB Style, solo, twins</code>
</div>
</div>
</div>
<div style="padding:10px 40px">
<div style="display: flex; flex-wrap: wrap; justify-content: space-between;">
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Steps:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">40</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Sampler:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">DPM++ SDE Karras</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">CFG Scale:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">7</code>
</div>
</div>
<div style="display: flex; flex-wrap: wrap; justify-content: space-between;">
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Seed:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">3644149239</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Size:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">1472x1088</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Model:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">last-AOB150ep-ep01-gs26690</code>
</div>
</div>
</div>
</div>
<!--Example 12-->
<div style="padding: 20px; width: 100%; text-align: center;" id="example12">
<a href="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/blob/main/Example%20pictures/V2/47324-905950623-MFB%20Style%3B%201boy%3B%201girl%3B%20Lutz%3B%20Myne%2C%20poor%2C%20yellow%20eyes%2C%20blue%20hair%2C%20hairpin%3B%20cauldron%3B%20sticks%3B%20(masterpiece)%3B%20detailed%3B%20reflective.png">
<img src="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/resolve/main/Example%20pictures/V2/47324-905950623-MFB%20Style%3B%201boy%3B%201girl%3B%20Lutz%3B%20Myne%2C%20poor%2C%20yellow%20eyes%2C%20blue%20hair%2C%20hairpin%3B%20cauldron%3B%20sticks%3B%20(masterpiece)%3B%20detailed%3B%20reflective.png" style="width: 100%; margin: 0px 0; border: 1px solid #ddd;" />
</a>
<div>
<div style="display: flex; flex-direction: column; text-align: left;">
<div style="padding: 4px 2px;">
<strong style="font-size: 16px; color: #333; display: block;">Prompt:</strong>
<code style="font-size: 14px; background-color: #f5f5f5; border-radius: 4px; color: #000;">MFB Style; 1boy; 1girl; Lutz; Myne, poor, yellow eyes, blue hair, hairpin; cauldron; sticks; (masterpiece); detailed; reflective; depth of field; 8k; photo; forest, river;</code>
</div>
<div style="padding: 4px 2px;">
<strong style="font-size: 16px; color: #696969; display: block;">Negative Prompt:</strong>
<code style="font-size: 14px; background-color: #f5f5f59d; border-radius: 4px; color: #000;">(bad hands, error), bad anatomy, low quality, worst quality, watermark, signature, poorly drawn face, noise, grain, nude, naked, upskirt, blind</code>
</div>
</div>
</div>
<div style="padding:10px 40px">
<div style="display: flex; flex-wrap: wrap; justify-content: space-between;">
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Steps:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">40</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Sampler:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">DPM++ SDE Karras</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">CFG Scale:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">7</code>
</div>
</div>
<div style="display: flex; flex-wrap: wrap; justify-content: space-between;">
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Seed:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">905950623</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Size:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">1024x1024</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Model:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">last-AOB150ep-ep01-gs26690</code>
</div>
</div>
</div>
</div>
<!--Example 13-->
<div style="padding: 20px; width: 100%; text-align: center;" id="example13">
<a href="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/blob/main/Example%20pictures/V2/47386-4048767915-MFB%20Style%3B%201boy%3B%201girl%3B%20Lutz%3B%20Myne%2C%20poor%2C%20yellow%20eyes%2C%20blue%20hair%2C%20hairstick%3B%20(masterpiece)%3B%20detailed%3B%20reflective%3B%20depth%20of%20field.png">
<img src="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/resolve/main/Example%20pictures/V2/47386-4048767915-MFB%20Style%3B%201boy%3B%201girl%3B%20Lutz%3B%20Myne%2C%20poor%2C%20yellow%20eyes%2C%20blue%20hair%2C%20hairstick%3B%20(masterpiece)%3B%20detailed%3B%20reflective%3B%20depth%20of%20field.png" style="width: 100%; margin: 0px 0; border: 1px solid #ddd;" />
</a>
<div>
<div style="display: flex; flex-direction: column; text-align: left;">
<div style="padding: 4px 2px;">
<strong style="font-size: 16px; color: #333; display: block;">Prompt:</strong>
<code style="font-size: 14px; background-color: #f5f5f5; border-radius: 4px; color: #000;">MFB Style; 1boy; 1girl; Lutz; Myne, poor, yellow eyes, blue hair, hairstick; (masterpiece); detailed; reflective; depth of field; 8k; photo;</code>
</div>
<div style="padding: 4px 2px;">
<strong style="font-size: 16px; color: #696969; display: block;">Negative Prompt:</strong>
<code style="font-size: 14px; background-color: #f5f5f59d; border-radius: 4px; color: #000;">(bad hands, error), bad anatomy, low quality, worst quality, watermark, signature, poorly drawn face, noise, grain, nude, naked, upskirt, blind, AOB Style, solo</code>
</div>
</div>
</div>
<div style="padding:10px 40px">
<div style="display: flex; flex-wrap: wrap; justify-content: space-between;">
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Steps:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">40</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Sampler:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">DPM++ SDE Karras</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">CFG Scale:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">7</code>
</div>
</div>
<div style="display: flex; flex-wrap: wrap; justify-content: space-between;">
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Seed:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">4048767915</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Size:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">1600x1024</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Model:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">last-AOB150ep-ep01-gs26690</code>
</div>
</div>
</div>
</div>
<!--Example 14-->
<div style="padding: 20px; width: 100%; text-align: center;" id="example14">
<a href="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/blob/main/Example%20pictures/V2/47678-904033739-AOB%20Style%3B%201boy%3B%201girl%3B%20Myne%2C%20poor%3B%20Lutz%2C%20happy%2C%20eyes%20closed%3B%20southern%20side.png">
<img src="https://huggingface.co/MyneFactory/MF-AscendanceOfABookworm/resolve/main/Example%20pictures/V2/47678-904033739-AOB%20Style%3B%201boy%3B%201girl%3B%20Myne%2C%20poor%3B%20Lutz%2C%20happy%2C%20eyes%20closed%3B%20southern%20side.png" style="width: 100%; margin: 0px 0; border: 1px solid #ddd;" />
</a>
<div>
<div style="display: flex; flex-direction: column; text-align: left;">
<div style="padding: 4px 2px;">
<strong style="font-size: 16px; color: #333; display: block;">Prompt:</strong>
<code style="font-size: 14px; background-color: #f5f5f5; border-radius: 4px; color: #000;">AOB Style; 1boy; 1girl; Myne, poor; Lutz, happy, eyes closed; southern side</code>
</div>
<div style="padding: 4px 2px;">
<strong style="font-size: 16px; color: #696969; display: block;">Negative Prompt:</strong>
<code style="font-size: 14px; background-color: #f5f5f59d; border-radius: 4px; color: #000;">bad hands, bad anatomy, nude, naked, MFB Style</code>
</div>
</div>
</div>
<div style="padding:10px 40px">
<div style="display: flex; flex-wrap: wrap; justify-content: space-between;">
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Steps:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">40</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Sampler:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">DPM++ SDE Karras</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">CFG Scale:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">7</code>
</div>
</div>
<div style="display: flex; flex-wrap: wrap; justify-content: space-between;">
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Seed:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">904033739</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Size:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">1024x1024</code>
</div>
<div style="padding:4px">
<strong style="font-size: 16px; color: #333;">Model:</strong>
<code style="font-size: 14px; padding-left: 6px; background-color: #f5f5f5; border-radius: 4px; color: #000;">last-AOB150ep-ep01-gs26690</code>
</div>
</div>
</div>
</div>
</div>
</div>
<!--Links-->
<div style="padding: 10px 0; text-align: center; font-size: 18px;" id="mynelinks">
<h2 style="font-size:28px; font-family: Arial, Helvetica, sans-serif; font-weight: bold; margin:0;">Socials</h2>
<a href="https://mynefactory.ai" style="text-decoration: none; color: #0077c9;">Website</a> |
<a href="https://discord.gg/GdJBzaTSCF" style="text-decoration: none; color: #0077c9;">Discord</a> |
<a href="https://www.patreon.com/user?u=36154428" style="text-decoration: none; color: #0077c9;">Patreon</a> |
<a href="https://civitai.com/user/MyneFactory" style="text-decoration: none; color: #0077c9;">CivitAI</a>
</div>
|
Rirou360/test
|
Rirou360
| 2023-04-29T22:56:56Z | 0 | 0 |
transformers
|
[
"transformers",
"biology",
"medical",
"code",
"text-generation",
"fr",
"dataset:bigscience-historical-texts/Open_Medieval_French",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-29T21:20:05Z |
---
license: openrail
datasets:
- bigscience-historical-texts/Open_Medieval_French
language:
- fr
library_name: transformers
tags:
- biology
- medical
- code
metrics:
- Perplexity: 32.4
- FID: 12.5
- BLEU: 0.87
pipeline_tag: text-generation
---
|
andyleow/poca-SoccerTwos
|
andyleow
| 2023-04-29T22:44:35Z | 40 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-04-29T22:38:21Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: andyleow/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
ZyXin/Reinforce-CartPole-v1
|
ZyXin
| 2023-04-29T21:34:47Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-29T21:34:36Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
dhnanjay/dj-lora-dolly_v1
|
dhnanjay
| 2023-04-29T21:29:16Z | 4 | 0 |
transformers
|
[
"transformers",
"gptj",
"text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-29T21:09:47Z |
---
language:
- en
---
This is a fine-tuning of GPT-J-6B using LoRA - https://huggingface.co/EleutherAI/gpt-j-6B
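A minimal loading sketch, assuming this repo hosts PEFT-format LoRA adapter weights (not confirmed by the card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the LoRA adapter on top of the frozen base weights
model = PeftModel.from_pretrained(base_model, "dhnanjay/dj-lora-dolly_v1")
```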
|
ethzanalytics/dolly-v2-12b-sharded-8bit
|
ethzanalytics
| 2023-04-29T21:12:13Z | 6 | 4 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"dolly",
"dolly-v2",
"instruct",
"sharded",
"8bit",
"quantized",
"en",
"dataset:databricks/databricks-dolly-15k",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] |
text-generation
| 2023-04-24T21:10:55Z |
---
license: mit
datasets:
- databricks/databricks-dolly-15k
language:
- en
pipeline_tag: text-generation
tags:
- dolly
- dolly-v2
- instruct
- sharded
- 8bit
- quantized
inference: false
---
# dolly-v2-12b: sharded **8bit** checkpoint
<a href="https://colab.research.google.com/gist/pszemraj/1bc9cea67e6c8dc450b868e0cfc18163/dolly-v2-12b-8bit-inference.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
This is a sharded checkpoint (with ~4GB shards) of the `databricks/dolly-v2-12b` model **in `8bit` precision** using `bitsandbytes`.
Refer to the [original model](https://huggingface.co/databricks/dolly-v2-12b) for all details w.r.t. to the model. For more info on loading 8bit models, refer to the [example repo](https://huggingface.co/ybelkada/bloom-1b7-8bit) and/or the `4.28.0` [release info](https://github.com/huggingface/transformers/releases/tag/v4.28.0).
- total model size is only ~12.5 GB!
- this enables low-RAM loading, i.e. Colab :)
- **update**: generation speed can be greatly improved by setting `use_cache=True` and generating via contrastive search. [example notebook here](https://colab.research.google.com/gist/pszemraj/12c832952c88d77f6924c0718a2d257d/dolly-v2-12b-8bit-use_cache-bettertransformer.ipynb)
## Basic Usage
Install/upgrade `transformers`, `accelerate`, and `bitsandbytes`. For this to work **you must have** `transformers>=4.28.0` and `bitsandbytes>0.37.2`.
```bash
pip install -U -q transformers bitsandbytes accelerate
```
Load the model. As it is serialized in 8bit you don't need to do anything special:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "ethzanalytics/dolly-v2-12b-sharded-8bit"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```
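A minimal generation sketch building on the snippet above, using the contrastive-search settings mentioned in the update note (the plain prompt here ignores Dolly's instruction template; see the original model card for the exact format):

```python
prompt = "Explain the difference between nuclear fission and fusion."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Contrastive search (penalty_alpha + top_k) with the KV cache enabled
outputs = model.generate(
    **inputs,
    penalty_alpha=0.6,
    top_k=4,
    max_new_tokens=64,
    use_cache=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```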
|
joelb/ClearVAE
|
joelb
| 2023-04-29T21:08:31Z | 25 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-29T00:47:33Z |
---
license: creativeml-openrail-m
---
|
Apv/Flaubert2904_v2
|
Apv
| 2023-04-29T20:55:44Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"flaubert",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-29T20:44:28Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: Apv/Flaubert2904_v2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apv/Flaubert2904_v2
This model is a fine-tuned version of [flaubert/flaubert_base_cased](https://huggingface.co/flaubert/flaubert_base_cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0288
- Validation Loss: 1.0387
- Train Accuracy: 0.5407
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 755, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.2265 | 1.1301 | 0.5185 | 0 |
| 1.0377 | 1.0387 | 0.5407 | 1 |
| 1.0230 | 1.0387 | 0.5407 | 2 |
| 1.0235 | 1.0387 | 0.5407 | 3 |
| 1.0288 | 1.0387 | 0.5407 | 4 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
theastro/folks-diffusion-v1-5
|
theastro
| 2023-04-29T20:49:03Z | 40 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-04-29T20:31:04Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
extra_gated_prompt: >-
This model is open access and available to all, with a CreativeML OpenRAIL-M
license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or
harmful outputs or content
2. CompVis claims no rights on the outputs you generate, you are free to use
them and are accountable for their use which must not go against the
provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as
a service. If you do, please be aware you have to include the same use
restrictions as the ones in the license and share a copy of the CreativeML
OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here:
https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
library_name: diffusers
---
# Folks Diffusion v1-5 Model Card
### Diffusers
```py
from diffusers import StableDiffusionPipeline
import torch
model_id = "theastro/folks-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a photo of folks, like an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("folks.png")  # save to a file with an image extension
```
*This model card was written by Matheus Ranielli and is based on the [runwayml model v1.5].*
|
wozeparrot/tinyrwkv-4-converted
|
wozeparrot
| 2023-04-29T20:38:37Z | 0 | 0 | null |
[
"text-generation",
"en",
"region:us"
] |
text-generation
| 2023-04-29T20:32:54Z |
---
language:
- en
tags:
- text-generation
---
Pre-converted [RWKV-4](https://github.com/BlinkDL/RWKV-LM) for use with [tinyrwkv](https://github.com/wozeparrot/tinyrwkv).
Both the Pile-pretrained and the Raven-finetuned models up to 3B are included.
Only float32 weights for now.
|
4bit/stable-vicuna-13B-GPTQ
|
4bit
| 2023-04-29T20:36:43Z | 11 | 7 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"causal-lm",
"en",
"dataset:OpenAssistant/oasst1",
"dataset:nomic-ai/gpt4all_prompt_generations",
"dataset:tatsu-lab/alpaca",
"arxiv:2302.13971",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-04-29T20:32:21Z |
---
language:
- en
tags:
- causal-lm
- llama
license: cc-by-nc-sa-4.0
datasets:
- OpenAssistant/oasst1
- nomic-ai/gpt4all_prompt_generations
- tatsu-lab/alpaca
inference: false
---
# StableVicuna-13B-GPTQ
This repo contains 4bit GPTQ format quantised models of [CarperAI's StableVicuna 13B](https://huggingface.co/CarperAI/stable-vicuna-13b-delta).
It is the result of first merging the deltas from the above repository with the original Llama 13B weights, then quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
## Repositories available
* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/stable-vicuna-13B-GPTQ).
* [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/stable-vicuna-13B-GGML).
* [Unquantised 16bit model in HF format](https://huggingface.co/TheBloke/stable-vicuna-13B-HF).
## PROMPT TEMPLATE
This model works best with the following prompt template:
```
### Human: your prompt here
### Assistant:
```
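For programmatic use, a minimal sketch of applying this template (the example question is illustrative):

```python
def build_prompt(user_message: str) -> str:
    # Wrap the user message in the Human/Assistant template expected by the model
    return f"### Human: {user_message}\n### Assistant:"

print(build_prompt("What is the capital of France?"))
```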
## How to easily download and use this model in text-generation-webui
Load text-generation-webui as you normally do.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter this repo name: `TheBloke/stable-vicuna-13B-GPTQ`.
3. Click **Download**.
4. Wait until it says it's finished downloading.
5. As this is a GPTQ model, fill in the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`
6. Now click the **Refresh** icon next to **Model** in the top left.
7. In the **Model drop-down**: choose this model: `stable-vicuna-13B-GPTQ`.
8. Click **Reload the Model** in the top right.
9. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
## Provided files
I have uploaded two versions of the GPTQ.
**Compatible file - stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors**
In the `main` branch - the default one - you will find `stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors`
This will work with all versions of GPTQ-for-LLaMa, so it has maximum compatibility.
It was created without the `--act-order` parameter. It may have slightly lower inference quality compared to the other file, but is guaranteed to work on all versions of GPTQ-for-LLaMa and text-generation-webui.
* `stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors`
* Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
* Works with text-generation-webui one-click-installers
* Parameters: Groupsize = 128g. No act-order.
* Command used to create the GPTQ:
```
CUDA_VISIBLE_DEVICES=0 python3 llama.py stable-vicuna-13B-HF c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors stable-vicuna-13B-GPTQ-4bit.no-act-order.safetensors
```
**Latest file - stable-vicuna-13B-GPTQ-4bit.latest.act-order.safetensors**
Created for more recent versions of GPTQ-for-LLaMa, and uses the `--act-order` flag for maximum theoretical performance.
To access this file, please switch to the `latest` branch of this repo and download from there.
* `stable-vicuna-13B-GPTQ-4bit.latest.act-order.safetensors`
* Only works with recent GPTQ-for-LLaMa code
* **Does not** work with text-generation-webui one-click-installers
* Parameters: Groupsize = 128g. **act-order**.
* Offers highest quality quantisation, but requires recent GPTQ-for-LLaMa code
* Command used to create the GPTQ:
```
CUDA_VISIBLE_DEVICES=0 python3 llama.py stable-vicuna-13B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors stable-vicuna-13B-GPTQ-4bit.act-order.safetensors
```
## Manual instructions for `text-generation-webui`
File `stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors` can be loaded the same as any other GPTQ file, without requiring any updates to [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui).
[Instructions on using GPTQ 4bit files in text-generation-webui are here](https://github.com/oobabooga/text-generation-webui/wiki/GPTQ-models-\(4-bit-mode\)).
The other `safetensors` model file was created using `--act-order` to give the maximum possible quantisation quality, but this means it requires that the latest GPTQ-for-LLaMa is used inside the UI.
If you want to use the act-order `safetensors` files and need to update the Triton branch of GPTQ-for-LLaMa, here are the commands I used to clone the Triton branch of GPTQ-for-LLaMa, clone text-generation-webui, and install GPTQ into the UI:
```
# Clone text-generation-webui, if you don't already have it
git clone https://github.com/oobabooga/text-generation-webui
# Make a repositories directory
mkdir text-generation-webui/repositories
cd text-generation-webui/repositories
# Clone the latest GPTQ-for-LLaMa code inside text-generation-webui
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
```
Then install this model into `text-generation-webui/models` and launch the UI as follows:
```
cd text-generation-webui
python server.py --model stable-vicuna-13B-GPTQ --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want
```
The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.
If you can't update GPTQ-for-LLaMa or don't want to, you can use `stable-vicuna-13B-GPTQ-4bit.no-act-order.safetensors` as mentioned above, which should work without any upgrades to text-generation-webui.
# Original StableVicuna-13B model card
## Model Description
StableVicuna-13B is a [Vicuna-13B v0](https://huggingface.co/lmsys/vicuna-13b-delta-v0) model fine-tuned using reinforcement learning from human feedback (RLHF) via Proximal Policy Optimization (PPO) on various conversational and instructional datasets.
## Model Details
* **Trained by**: [Duy Phung](https://github.com/PhungVanDuy) of [CarperAI](https://carper.ai)
* **Model type:** **StableVicuna-13B** is an auto-regressive language model based on the LLaMA transformer architecture.
* **Language(s)**: English
* **Library**: [trlX](https://github.com/CarperAI/trlx)
* **License for delta weights**: [CC-BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)
* *Note*: License for the base LLaMA model's weights is Meta's [non-commercial bespoke license](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md).
* **Contact**: For questions and comments about the model, visit the [CarperAI](https://discord.com/invite/KgfkCVYHdu) and [StableFoundation](https://discord.gg/stablediffusion) Discord servers.
| Hyperparameter | Value |
|---------------------------|-------|
| \\(n_\text{parameters}\\) | 13B |
| \\(d_\text{model}\\) | 5120 |
| \\(n_\text{layers}\\) | 40 |
| \\(n_\text{heads}\\) | 40 |
## Training
### Training Dataset
StableVicuna-13B is fine-tuned on a mix of three datasets. [OpenAssistant Conversations Dataset (OASST1)](https://huggingface.co/datasets/OpenAssistant/oasst1), a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages distributed across 66,497 conversation trees, in 35 different languages;
[GPT4All Prompt Generations](https://huggingface.co/datasets/nomic-ai/gpt4all_prompt_generations), a dataset of 400k prompts and responses generated by GPT-4; and [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), a dataset of 52,000 instructions and demonstrations generated by OpenAI's text-davinci-003 engine.
The reward model used during RLHF was also trained on [OpenAssistant Conversations Dataset (OASST1)](https://huggingface.co/datasets/OpenAssistant/oasst1) along with two other datasets: [Anthropic HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), a dataset of preferences about AI assistant helpfulness and harmlessness; and [Stanford Human Preferences Dataset](https://huggingface.co/datasets/stanfordnlp/SHP) a dataset of 385K collective human preferences over responses to questions/instructions in 18 different subject areas, from cooking to legal advice.
### Training Procedure
`CarperAI/stable-vicuna-13b-delta` was trained using PPO as implemented in [`trlX`](https://github.com/CarperAI/trlx/blob/main/trlx/trainer/accelerate_ppo_trainer.py) with the following configuration:
| Hyperparameter | Value |
|-------------------|---------|
| num_rollouts | 128 |
| chunk_size | 16 |
| ppo_epochs | 4 |
| init_kl_coef | 0.1 |
| target | 6 |
| horizon | 10000 |
| gamma | 1 |
| lam | 0.95 |
| cliprange | 0.2 |
| cliprange_value | 0.2 |
| vf_coef | 1.0 |
| scale_reward | None |
| cliprange_reward | 10 |
| generation_kwargs | |
| max_length | 512 |
| min_length | 48 |
| top_k | 0.0 |
| top_p | 1.0 |
| do_sample | True |
| temperature | 1.0 |
## Use and Limitations
### Intended Use
This model is intended to be used for text generation with a focus on conversational tasks. Users may further fine-tune the model on their own data to improve the model's performance on their specific tasks in accordance with the non-commercial [license](https://creativecommons.org/licenses/by-nc/4.0/).
### Limitations and bias
The base LLaMA model is trained on various data, some of which may contain offensive, harmful, and biased content that can lead to toxic behavior. See Section 5.1 of the LLaMA [paper](https://arxiv.org/abs/2302.13971). We have not performed any studies to determine how fine-tuning on the aforementioned datasets affects the model's behavior and toxicity. Do not treat chat responses from this model as a substitute for human judgment or as a source of truth. Please use responsibly.
## Acknowledgements
This work would not have been possible without the support of [Stability AI](https://stability.ai/).
## Citations
```bibtex
@article{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
```bibtex
@misc{vicuna2023,
title = {Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality},
url = {https://vicuna.lmsys.org},
author = {Chiang, Wei-Lin and Li, Zhuohan and Lin, Zi and Sheng, Ying and Wu, Zhanghao and Zhang, Hao and Zheng, Lianmin and Zhuang, Siyuan and Zhuang, Yonghao and Gonzalez, Joseph E. and Stoica, Ion and Xing, Eric P.},
month = {March},
year = {2023}
}
```
```bibtex
@misc{gpt4all,
author = {Yuvanesh Anand and Zach Nussbaum and Brandon Duderstadt and Benjamin Schmidt and Andriy Mulyar},
title = {GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/nomic-ai/gpt4all}},
}
```
```bibtex
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
```bibtex
@software{leandro_von_werra_2023_7790115,
author = {Leandro von Werra and
Alex Havrilla and
Max reciprocated and
Jonathan Tow and
Aman cat-state and
Duy V. Phung and
Louis Castricato and
Shahbuland Matiana and
Alan and
Ayush Thakur and
Alexey Bukhtiyarov and
aaronrmm and
Fabrizio Milo and
Daniel and
Daniel King and
Dong Shin and
Ethan Kim and
Justin Wei and
Manuel Romero and
Nicky Pochinkov and
Omar Sanseviero and
Reshinth Adithyan and
Sherman Siu and
Thomas Simonini and
Vladimir Blagojevic and
Xu Song and
Zack Witten and
alexandremuzio and
crumb},
title = {{CarperAI/trlx: v0.6.0: LLaMa (Alpaca), Benchmark
Util, T5 ILQL, Tests}},
month = mar,
year = 2023,
publisher = {Zenodo},
version = {v0.6.0},
doi = {10.5281/zenodo.7790115},
url = {https://doi.org/10.5281/zenodo.7790115}
}
```
|
email81227/ppo-LunarLander-v2-Unit8-part-I
|
email81227
| 2023-04-29T20:34:57Z | 0 | 0 | null |
[
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-29T20:34:54Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -126.32 +/- 89.07
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'test',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'email81227/ppo-LunarLander-v2-Unit8-part-I',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
OtherBrian/xlm-roberta-base-finetuned-panx-de
|
OtherBrian
| 2023-04-29T20:22:00Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-28T22:10:44Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8414824042354406
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1567
- F1: 0.8415
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3594 | 1.0 | 191 | 0.1855 | 0.7971 |
| 0.1597 | 2.0 | 382 | 0.1544 | 0.8272 |
| 0.1003 | 3.0 | 573 | 0.1567 | 0.8415 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jackwheeler/dreambooth_Anni
|
jackwheeler
| 2023-04-29T20:05:39Z | 0 | 0 |
diffusers
|
[
"diffusers",
"license:openrail",
"region:us"
] | null | 2023-04-29T19:48:15Z |
---
license: openrail
library_name: diffusers
---
|
Udoy/bert-finetuned-ner
|
Udoy
| 2023-04-29T20:01:49Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-29T19:50:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9327828241123038
- name: Recall
type: recall
value: 0.9505217098619994
- name: F1
type: f1
value: 0.9415687255147119
- name: Accuracy
type: accuracy
value: 0.9862247601106728
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0626
- Precision: 0.9328
- Recall: 0.9505
- F1: 0.9416
- Accuracy: 0.9862
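A minimal inference sketch with the `pipeline` API (the example sentence is illustrative):

```python
from transformers import pipeline

# Group sub-token predictions into whole entities
ner = pipeline(
    "token-classification",
    model="Udoy/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```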
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0889 | 1.0 | 1756 | 0.0701 | 0.9189 | 0.9345 | 0.9267 | 0.9821 |
| 0.0339 | 2.0 | 3512 | 0.0670 | 0.9262 | 0.9461 | 0.9361 | 0.9854 |
| 0.0186 | 3.0 | 5268 | 0.0626 | 0.9328 | 0.9505 | 0.9416 | 0.9862 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Yahiael1/test_bart_newsroom
|
Yahiael1
| 2023-04-29T19:41:53Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-04-29T19:00:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: test_bart_newsroom
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_bart_newsroom
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4137
- Rouge1: 0.1856
- Rouge2: 0.073
- Rougel: 0.1654
- Rougelsum: 0.1715
- Gen Len: 19.2987
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 250 | 2.4246 | 0.1768 | 0.0794 | 0.1635 | 0.168 | 19.3148 |
| 2.7103 | 2.0 | 500 | 2.4137 | 0.1856 | 0.073 | 0.1654 | 0.1715 | 19.2987 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
LChinese212/LegsUpMS
|
LChinese212
| 2023-04-29T19:38:14Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-29T19:37:06Z |
---
license: creativeml-openrail-m
---
|
LChinese212/fullnelsonms
|
LChinese212
| 2023-04-29T19:33:55Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-29T19:33:03Z |
---
license: creativeml-openrail-m
---
|
lmeninato/t5-small-codesearchnet-python3
|
lmeninato
| 2023-04-29T19:30:03Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-04-29T16:22:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-codesearchnet-python3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-codesearchnet-python3
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1308
- Rouge1: 0.0046
- Rouge2: 0.0044
- Avg Length: 0.317
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Avg Length |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:----------:|
| No log | 1.0 | 39 | 8.0395 | 0.1733 | 0.0997 | 18.4264 |
| No log | 2.0 | 78 | 0.3933 | 0.0 | 0.0 | 0.0004 |
| No log | 3.0 | 117 | 0.2376 | 0.0 | 0.0 | 0.0 |
| No log | 3.99 | 156 | 0.1693 | 0.0 | 0.0 | 0.0 |
| No log | 4.99 | 195 | 0.1308 | 0.0046 | 0.0044 | 0.317 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
yegh/bert-base-german-uncased-finetuned-recipes
|
yegh
| 2023-04-29T19:16:05Z | 70 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-04-29T17:16:25Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: yegh/bert-base-german-uncased-finetuned-recipes
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# yegh/bert-base-german-uncased-finetuned-recipes
This model is a fine-tuned version of [dbmdz/bert-base-german-uncased](https://huggingface.co/dbmdz/bert-base-german-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.2342
- Validation Loss: 2.0627
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -969, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.2342 | 2.0627 | 0 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
WeoKeo/q-FrozenLake-v1-4x4-noSlippery
|
WeoKeo
| 2023-04-29T19:05:26Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-29T15:00:12Z |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.73 +/- 0.44
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# load_from_hub is defined in the Deep RL Course notebook (it unpickles the model dict)
model = load_from_hub(repo_id="WeoKeo/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"], is_slippery=False)
```
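A greedy evaluation rollout sketch building on the snippet above, assuming the pickled dict stores the Q-table under `"qtable"` (as in the course notebook) and the pre-0.26 `gym` step API:

```python
import numpy as np

state = env.reset()
done = False
total_reward = 0.0
while not done:
    # Always take the action with the highest Q-value (greedy policy)
    action = int(np.argmax(model["qtable"][state]))
    state, reward, done, info = env.step(action)
    total_reward += reward
print(f"Episode return: {total_reward}")
```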
|
WeoKeo/q-Taxi-v3
|
WeoKeo
| 2023-04-29T19:00:51Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-29T19:00:49Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub is defined in the Deep RL Course notebook (it unpickles the model dict)
model = load_from_hub(repo_id="WeoKeo/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Aman0112/bert_emo_classifier
|
Aman0112
| 2023-04-29T18:51:28Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-29T17:56:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: bert_emo_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_emo_classifier
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2724
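A minimal inference sketch with the `pipeline` API (the example text is illustrative):

```python
from transformers import pipeline

# top_k=None returns scores for all emotion labels
classifier = pipeline("text-classification", model="Aman0112/bert_emo_classifier", top_k=None)
print(classifier("I can't wait to see you again!"))
```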
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9319 | 0.25 | 500 | 0.4107 |
| 0.3265 | 0.5 | 1000 | 0.3068 |
| 0.2458 | 0.75 | 1500 | 0.2721 |
| 0.2487 | 1.0 | 2000 | 0.2313 |
| 0.158 | 1.25 | 2500 | 0.2422 |
| 0.1796 | 1.5 | 3000 | 0.2162 |
| 0.145 | 1.75 | 3500 | 0.1951 |
| 0.1648 | 2.0 | 4000 | 0.1908 |
| 0.1048 | 2.25 | 4500 | 0.2399 |
| 0.1171 | 2.5 | 5000 | 0.2230 |
| 0.1116 | 2.75 | 5500 | 0.2244 |
| 0.1122 | 3.0 | 6000 | 0.2250 |
| 0.0713 | 3.25 | 6500 | 0.2616 |
| 0.0697 | 3.5 | 7000 | 0.2672 |
| 0.0775 | 3.75 | 7500 | 0.2748 |
| 0.0742 | 4.0 | 8000 | 0.2724 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jrreda/rl_04_PixelCopter
|
jrreda
| 2023-04-29T18:44:32Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-29T18:44:27Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: rl_04_PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 12.90 +/- 11.73
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Muhammadbasabr/Aqua
|
Muhammadbasabr
| 2023-04-29T18:37:26Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-04-29T18:37:26Z |
---
license: bigscience-openrail-m
---
|
eimiss/EimisAnimeDiffusion_2.0v
|
eimiss
| 2023-04-29T18:24:48Z | 92 | 28 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"image-to-image",
"en",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-04-19T17:21:13Z |
---
thumbnail: https://i.imgur.com/vJLBNJf.png
language:
- en
tags:
- stable-diffusion
- text-to-image
- image-to-image
- diffusers
license: creativeml-openrail-m
inference: true
---
# Diffusion model
This model is trained on the same base model as the previous version, with a much bigger dataset.<br>
There are two versions of it:<br>
EimisAnimeDiffusion_2-0 (original)<br>
EimisAnimeDiffusion_2-0_alternative (original + orangemix:0.2 + an even bigger dataset).<br>
Read the end to choose the one that suits you best.<br>
The examples at the beginning all use "EimisAnimeDiffusion_2-0".<br>
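A minimal diffusers loading sketch (assuming this repo hosts diffusers-format weights; the prompt is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "eimiss/EimisAnimeDiffusion_2.0v", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("1girl, solo, magician, blue hair, magic circle, night sky").images[0]
image.save("sample.png")
```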
# Sample generations
Of course this model works well with anime style, magic, and a bunch of different effects. A couple of examples:<br>
```
Positive:(1girl), sky, cloud, battle, armor, cape, boots, duel, scenery, outdoors, gloves, sunset, long hair, mountains, ice mountain
Negative:lowres, bad anatomy, error body, error hair, error arm, error hands, bad hands, error fingers, bad fingers, missing fingers, error legs, bad legs, multiple legs, missing legs, error lighting, error shadow, error reflection, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, blurry, flat, lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly
Steps: 30, Sampler: DPM++ 2S a Karras, CFG scale: 8, Seed: 4027860244, Size: 1024x768
```
<img src=https://i.imgur.com/Pvykviv.png width=75% height=75%>
```
Positive:1girl, solo, water, blue hair, red eye, winter, village, magician, magic circle, medium breasts, snowing
Negative:lowres, bad anatomy, error body, error hair, error arm, error hands, bad hands, error fingers, bad fingers, missing fingers, error legs, bad legs, multiple legs, missing legs, error lighting, error shadow, error reflection, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, blurry, flat, lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly
Steps: 30, Sampler: DPM++ 2S a Karras, CFG scale: 7, Seed: 4016818418
```
<img src=https://i.imgur.com/BLXctxZ.jpg width=75% height=75%>
```
Positive:1girl, solo, ahoge, bangs, blush, bridal gauntlets, capelet, closed mouth, crossed bangs, white long dress, final fantasy, winged capelet, yellow hair, hair band, hair between eyes, hair ornament, highres, jewelry, looking at viewer, extra short hair, beautiful detailed background, solo, upper body, shoulder wing, white gold theme, indoor, royal palace, glowing light, wind, flowers
Negative:lowres, bad anatomy, error body, error hair, error arm, error hands, bad hands, error fingers, bad fingers, missing fingers, error legs, bad legs, multiple legs, missing legs, error lighting, error shadow, error reflection, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, blurry, flat, lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly
Steps: 30, Sampler: DPM++ 2S a Karras, CFG scale: 7, Seed: 2762179779
```
<img src=https://i.imgur.com/C3SDGCd.jpg width=75% height=75%>
```
Positive: 1girl, wavy hair, medium hair, magician, blue eyes, black hair, :d, (magic circle:1.2), (black coat), full body, (ancient ruins), (scenery), sky, outdoors, landscape, stars,
Negative: lowres, bad anatomy, error body, error hair, error arm, error hands, bad hands, error fingers, bad fingers, missing fingers, error legs, bad legs, multiple legs, missing legs, error lighting, error shadow, error reflection, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, blurry, flat, lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly
Steps: 30, Sampler: DPM++ 2S a Karras, CFG scale: 7, Seed: 477438759
```
<img src=https://i.imgur.com/sumnvfW.jpg width=75% height=75%>
# Scenery
```
Positive: moon, night, tree, scenery, sky, fantasy, cloud, moonlight, outdoors, castle, mountain, tower, forest, nature, house, bridge, building, gate, bush, grass, pagoda, water, field, cliff, full moon, night sky, star (sky), starry sky, bare tree, cloudy sky, (no humans), mountainous horizon, city
Negative: lowres, bad anatomy, error body, error hair, error arm, error hands, bad hands, error fingers, bad fingers, missing fingers, error legs, bad legs, multiple legs, missing legs, error lighting, error shadow, error reflection, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, blurry, flat, lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly
Steps: 30, Sampler: DPM++ 2S a Karras, CFG scale: 7, Seed: 561959925
```
<img src=https://i.imgur.com/gskGUSv.jpg width=75% height=75%>
```
Positive:cloud, scenery, sky, day, outdoors, grass, fantasy, landscape, mountain, (floating island:1.5), blue sky, cloudy sky, river, flowers
Negative: lowres, bad anatomy, error body, error hair, error arm, error hands, bad hands, error fingers, bad fingers, missing fingers, error legs, bad legs, multiple legs, missing legs, error lighting, error shadow, error reflection, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, blurry, flat, lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly
Steps: 30, Sampler: DPM++ 2S a Karras, CFG scale: 7, Seed: 222763192
```
<img src=https://i.imgur.com/fDbLPCB.jpg width=75% height=75%>
# Small comparison with v1
Right V2, Left V1.
```
Positive:bubble, rating:safe, underwater, jellyfish, 1girl, jacket, solo, bangs, boots, water, submerged, thighs, gloves, air bubble, bubble blowing, silver hair, very long hair, black footwear, thigh cutout, red eyes, long sleeves, black jacket, thigh strap, looking at viewer, bare shoulders, black gloves, hair between eyes, magic
Negative:lowres, bad anatomy, error body, error hair, error arm, error hands, bad hands, error fingers, bad fingers, missing fingers, error legs, bad legs, multiple legs, missing legs, error lighting, error shadow, error reflection, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, blurry, flat, lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly
Steps: 30, Sampler: DPM++ 2S a Karras, CFG scale: 7, Seed: 4205949473
```
<img src=https://i.imgur.com/ie07l2V.png width=75% height=75%>
```
Positive:1girl, cloud, sky, solo, magic, clock, sunset, moon, outdoors, dress, tower, sun, frills, electricity, lips, blonde hair, cloudy sky, long hair, hair ornament, wavy hair, purple eyes, looking at viewer, fire, fire magic, fire effect, electricity
Negative:lowres, bad anatomy, error body, error hair, error arm, error hands, bad hands, error fingers, bad fingers, missing fingers, error legs, bad legs, multiple legs, missing legs, error lighting, error shadow, error reflection, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, blurry, flat, lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly
Steps: 30, Sampler: DPM++ 2S a Karras, CFG scale: 7, Seed: 1143293364
```
<img src=https://i.imgur.com/FhSmXqs.png width=75% height=75%>
From more in-depth testing between these two, v2 offers:<br>
Better face structures (eyes fixed)<br>
Higher resolution (the new data was trained on 768x768 instead of 512x512)<br>
Better looking characters, animations, environments, effects and much more<br>
# Which model to choose
EimisAnimeDiffusion_2-0 is trained on a smaller dataset, but it keeps the style better.<br>
It can be worse in some respects, like struggling with specific prompts or other small issues, however<br>
it has much better quality and effects and keeps the style I wanted far better.<br>
EimisAnimeDiffusion_2-0_alternative, on the other hand, understands many more prompts (especially some NSFW prompts).<br>
However, it is noticeably worse with style, effects and details.<br>
It can also be less smooth, with some random output, but it is still a really great alternative model.<br>
Example:<br>
Left normal, right alternative:<br>
```
Positive:1girl, solo, gloves, smile, tree, outdoors, :d, signature, sleeveless, skirt, breasts, hakama, bangs, fang, flower, petals, shirt, blush, standing, day, animal ears, long hair, open mouth, fox ears, cherry blossoms, japanese clothes, looking at viewer, black gloves, arm up, very long hair, bare shoulders, animal ear fluff, hakama skirt, medium breasts, cowboy shot, sleeveless shirt, grey hair, thick eyebrows, red eyes, half gloves
Negative:lowres, bad anatomy, error body, error hair, error arm, error hands, bad hands, error fingers, bad fingers, missing fingers, error legs, bad legs, multiple legs, missing legs, error lighting, error shadow, error reflection, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, blurry, flat, lowres, text, error, cropped, worst quality, low quality
Steps: 30, Sampler: DPM++ 2S a Karras, CFG scale: 7, Seed: 518161897
```
<img src=https://i.imgur.com/UqDXt6X.png width=75% height=75%>
This might not be the best example, but the original does have a bit more detail and more flying leaves.<br>
The difference is much more noticeable with magic or elemental effects, and with architecture and backgrounds in general.<br>
But the alternative does understand some characters and specific prompts better.<br>
For example, Hatsune Miku:
<img src=https://i.imgur.com/ivXHVbR.png width=75% height=75%>
As you can see, the alternative is way better on some prompts.
# Some more info
The new datasets were trained on clip skip 1, but clip skip 2 also works decently (not as crisp though).<br>
Orangemix model link that was used in the alternative:<br>
https://huggingface.co/WarriorMama777/OrangeMixs
|
rr3khan/ppo-LunarLander-v2
|
rr3khan
| 2023-04-29T18:09:52Z | 5 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-29T18:09:29Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 292.35 +/- 17.64
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption; adjust it to the actual .zip in this repo
checkpoint = load_from_hub(repo_id="rr3khan/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
fathyshalab/reklambox-wasser-strom-gas-setfit
|
fathyshalab
| 2023-04-29T17:54:49Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-04-29T17:54:38Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# Linkthat/reklambox-wasser-strom-gas-setfit
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("Linkthat/reklambox-wasser-strom-gas-setfit")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
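The two-step recipe above can be reproduced with `SetFitTrainer` (a minimal sketch; the few-shot examples, labels, and base model are placeholders):

```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Placeholder few-shot data: a handful of labelled examples per class
train_dataset = Dataset.from_dict({
    "text": ["the water bill seems wrong", "another power outage today"],
    "label": [0, 1],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_dataset,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning of the body
    num_iterations=20,                # text pairs generated per example
)
trainer.train()  # step 2: fits the classification head on the tuned embeddings
```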
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
kriahana/newkdoll
|
kriahana
| 2023-04-29T17:21:29Z | 0 | 8 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-02-28T05:00:24Z |
---
license: creativeml-openrail-m
---
|
Ganu3010/ppo-LunarLander-v1
|
Ganu3010
| 2023-04-29T17:19:51Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-29T17:19:44Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -137.51 +/- 75.94
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'Ganu3010/ppo-LunarLander-v1',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
jiaheillu/byheisexiong
|
jiaheillu
| 2023-04-29T17:05:55Z | 0 | 0 | null |
[
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-04-29T17:02:34Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### byheisexiong Dreambooth model trained by jiaheillu
Sample pictures of this concept:



.png)

.png)
|
fathyshalab/reklambox-unterhaltung-kultur-freizeit-setfit
|
fathyshalab
| 2023-04-29T17:00:39Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-04-29T17:00:29Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# Linkthat/reklambox-unterhaltung-kultur-freizeit-setfit
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("Linkthat/reklambox-unterhaltung-kultur-freizeit-setfit")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
LChinese212/realdoggyst
|
LChinese212
| 2023-04-29T16:50:52Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-29T16:44:03Z |
---
license: creativeml-openrail-m
---
|
Bainbridge/bert-xxl-incl
|
Bainbridge
| 2023-04-29T16:30:25Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-29T16:25:52Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bert-xxl-incl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-xxl-incl
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-uncased](https://huggingface.co/dbmdz/bert-base-italian-xxl-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0005
- Acc: 1.0
- F1 Macro: 1.0
- F1 Weight: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc | F1 Macro | F1 Weight |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|:---------:|
| 0.6075 | 2.5 | 20 | 0.2080 | 0.9853 | 0.9851 | 0.9853 |
| 0.0448 | 5.0 | 40 | 0.0012 | 1.0 | 1.0 | 1.0 |
| 0.001 | 7.5 | 60 | 0.0005 | 1.0 | 1.0 | 1.0 |
| 0.0007 | 10.0 | 80 | 0.0005 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
LChinese212/CheekBulgeFellatioMS
|
LChinese212
| 2023-04-29T16:01:51Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-29T16:00:52Z |
---
license: creativeml-openrail-m
---
|
saitsharipov/cat
|
saitsharipov
| 2023-04-29T15:54:48Z | 32 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-04-29T15:46:52Z |
---
license: creativeml-openrail-m
base_model: /root/MaxArkhipov/diffusers/examples/dreambooth/dog
instance_prompt: a photo of sks1 cat
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - saitsharipov/cat
This is a dreambooth model derived from /root/MaxArkhipov/diffusers/examples/dreambooth/dog. The weights were trained on a photo of sks1 cat using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
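A minimal inference sketch with diffusers (fp16 and a CUDA device are assumptions; the instance prompt comes from this card):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load this repo's weights and sample with the instance prompt from the card.
pipe = StableDiffusionPipeline.from_pretrained(
    "saitsharipov/cat", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of sks1 cat", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks1_cat.png")
```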
|
fathyshalab/reklambox-versicherungen-recht-setfit
|
fathyshalab
| 2023-04-29T15:33:05Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-04-29T15:32:54Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# Linkthat/reklambox-versicherungen-recht-setfit
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("Linkthat/reklambox-versicherungen-recht-setfit")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Bainbridge/bert-incl
|
Bainbridge
| 2023-04-29T15:29:57Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-29T15:20:01Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bert-incl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-incl
This model is a fine-tuned version of [dbmdz/bert-base-italian-cased](https://huggingface.co/dbmdz/bert-base-italian-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0004
- Acc: 1.0
- F1 Macro: 1.0
- F1 Weight: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc | F1 Macro | F1 Weight |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|:---------:|
| 0.684 | 0.67 | 20 | 0.6378 | 0.5588 | 0.3585 | 0.4007 |
| 0.4681 | 1.33 | 40 | 0.1762 | 0.9559 | 0.9547 | 0.9556 |
| 0.0989 | 2.0 | 60 | 0.0058 | 1.0 | 1.0 | 1.0 |
| 0.0032 | 2.67 | 80 | 0.0009 | 1.0 | 1.0 | 1.0 |
| 0.0011 | 3.33 | 100 | 0.0005 | 1.0 | 1.0 | 1.0 |
| 0.0007 | 4.0 | 120 | 0.0004 | 1.0 | 1.0 | 1.0 |
| 0.0007 | 4.67 | 140 | 0.0004 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
RafMuz/alpaca7B-lora
|
RafMuz
| 2023-04-29T15:08:27Z | 0 | 0 | null |
[
"code",
"en",
"region:us"
] | null | 2023-04-06T21:29:13Z |
---
language:
- en
tags:
- code
---
# About
Hi, this is the Readme.
This model was created as a study experiment, to re-create Alpaca on my end.
It uses the gururise/AlpacaDataCleaned dataset (as of April 7).
---
# Specifications
**Base Model**:
LLaMA 7B
**Training Parameters**:
Micro_Batch_Size = 8
Batch_Size = 128
Gradient_Accumulation_Steps = Batch_Size / Micro_Batch_Size # 128 / 8 = 16
Epochs = 2
Learning_Rate = 2e-5
Cutoff_Len = 256 # This ( 256 ) accounts for about 96% of all data
Lora_R = 4
Lora_Alpha = 16
Lora_Dropout = 0.05
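For reference, these LoRA settings map onto a peft config roughly like this (a sketch under the assumption that peft was used, as in the alpaca-lora recipe; `target_modules` is an assumption from that recipe, not stated in this card):
```python
from peft import LoraConfig

# r, alpha, and dropout are taken from the parameters above.
lora_config = LoraConfig(
    r=4,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumption: the usual alpaca-lora targets
    task_type="CAUSAL_LM",
)
```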
# Files
- `adapter_model.bin` *# The fine-tuned weights that go over the base LLaMA model.*
- `adapter_config.bin` *# The config file for `adapter_model.bin`.*
- `consolidated.00.pth` *# The base model file (LLaMA 7B) merged with the fine-tuned weights (`adapter_model.bin`).*
- `tokenizer.model` *# The tokenizer; it converts the input text (prompt) into tokens the network can understand.*
- `params.json` *# Parameters of the model.*
- `ggml_model_f16.bin` *# The same model as `consolidated.00.pth`, but in 'ggml f16' format. We need this format to quantize it with llama.cpp.*
- **llama-hf-7b** *# This folder contains the same model, but in 'huggingface' format. We need this format to quantize it with GPTQ.*
- **quantized-model**:
  - `ggml-model-q4_0.bin` *# The 4-bit model quantized by llama.cpp; I found this to be better than GPTQ.*
  - `llama7b-4bit-128g.pt` *# The model quantized by GPTQ. It takes longer and gives worse results compared to llama.cpp, but its file size is about 7.6% smaller.*
|
aimarsg/ner-2
|
aimarsg
| 2023-04-29T15:08:13Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-25T23:25:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ner-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner-2
This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1618
- Precision: 0.7352
- Recall: 0.6436
- F1: 0.6863
- Accuracy: 0.9712
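Although the card does not document usage, a token-classification pipeline call would look roughly like this (the example sentence is an assumption; the base checkpoint targets Spanish pharmacological NER):
```python
from transformers import pipeline

# aggregation_strategy="simple" merges word-piece tokens into whole entity spans.
ner = pipeline("token-classification", model="aimarsg/ner-2", aggregation_strategy="simple")
print(ner("El paciente recibió 500 mg de paracetamol."))
```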
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 29 | 0.3028 | 0.0 | 0.0 | 0.0 | 0.9220 |
| No log | 2.0 | 58 | 0.2800 | 0.0 | 0.0 | 0.0 | 0.9220 |
| No log | 3.0 | 87 | 0.2136 | 0.2105 | 0.0277 | 0.0489 | 0.9302 |
| No log | 4.0 | 116 | 0.1803 | 0.375 | 0.0727 | 0.1217 | 0.9391 |
| No log | 5.0 | 145 | 0.1737 | 0.4923 | 0.2215 | 0.3055 | 0.9462 |
| No log | 6.0 | 174 | 0.1354 | 0.6124 | 0.3772 | 0.4668 | 0.9584 |
| No log | 7.0 | 203 | 0.1399 | 0.6062 | 0.4048 | 0.4855 | 0.9589 |
| No log | 8.0 | 232 | 0.1444 | 0.6220 | 0.5294 | 0.5720 | 0.9623 |
| No log | 9.0 | 261 | 0.1252 | 0.6439 | 0.6194 | 0.6314 | 0.9662 |
| No log | 10.0 | 290 | 0.1757 | 0.7216 | 0.4394 | 0.5462 | 0.9604 |
| No log | 11.0 | 319 | 0.1352 | 0.6707 | 0.5779 | 0.6208 | 0.9667 |
| No log | 12.0 | 348 | 0.1276 | 0.6797 | 0.6021 | 0.6385 | 0.9677 |
| No log | 13.0 | 377 | 0.1542 | 0.7328 | 0.5882 | 0.6526 | 0.9688 |
| No log | 14.0 | 406 | 0.1418 | 0.7192 | 0.6471 | 0.6812 | 0.9712 |
| No log | 15.0 | 435 | 0.1678 | 0.7162 | 0.5502 | 0.6223 | 0.9672 |
| No log | 16.0 | 464 | 0.1559 | 0.7075 | 0.6194 | 0.6605 | 0.9689 |
| No log | 17.0 | 493 | 0.1446 | 0.6568 | 0.6886 | 0.6723 | 0.9681 |
| 0.079 | 18.0 | 522 | 0.1582 | 0.7348 | 0.5848 | 0.6513 | 0.9693 |
| 0.079 | 19.0 | 551 | 0.1519 | 0.6977 | 0.6228 | 0.6581 | 0.9705 |
| 0.079 | 20.0 | 580 | 0.1503 | 0.7251 | 0.6298 | 0.6741 | 0.9703 |
| 0.079 | 21.0 | 609 | 0.1585 | 0.6834 | 0.6125 | 0.6460 | 0.9703 |
| 0.079 | 22.0 | 638 | 0.1594 | 0.7126 | 0.6263 | 0.6667 | 0.9705 |
| 0.079 | 23.0 | 667 | 0.1558 | 0.7008 | 0.6401 | 0.6691 | 0.9703 |
| 0.079 | 24.0 | 696 | 0.1570 | 0.7273 | 0.6367 | 0.6790 | 0.9708 |
| 0.079 | 25.0 | 725 | 0.1553 | 0.7022 | 0.6609 | 0.6809 | 0.9705 |
| 0.079 | 26.0 | 754 | 0.1592 | 0.7148 | 0.6332 | 0.6716 | 0.9701 |
| 0.079 | 27.0 | 783 | 0.1579 | 0.7170 | 0.6574 | 0.6859 | 0.9710 |
| 0.079 | 28.0 | 812 | 0.1597 | 0.7148 | 0.6505 | 0.6812 | 0.9708 |
| 0.079 | 29.0 | 841 | 0.1625 | 0.7309 | 0.6298 | 0.6766 | 0.9705 |
| 0.079 | 30.0 | 870 | 0.1618 | 0.7352 | 0.6436 | 0.6863 | 0.9712 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
fathyshalab/reklambox-transport-logistik-setfit
|
fathyshalab
| 2023-04-29T14:51:59Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-04-29T14:51:49Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# Linkthat/reklambox-transport-logistik-setfit
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("Linkthat/reklambox-transport-logistik-setfit")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
rodri2023/ppo-unit8-LunarLander-v2
|
rodri2023
| 2023-04-29T14:49:46Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-29T14:47:45Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -122.86 +/- 59.40
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
nergaldarski/MeinaMix
|
nergaldarski
| 2023-04-29T14:45:14Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-04-29T14:27:23Z |
CivitAI: https://civitai.com/models/7240?modelVersionId=46137
|
GregLed/distilbert-base-uncased-finetuned-emotion
|
GregLed
| 2023-04-29T14:34:58Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-29T14:01:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.924743633535266
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2144
- Accuracy: 0.9245
- F1: 0.9247
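A minimal inference sketch (the example text is an assumption):
```python
from transformers import pipeline

# Emotion classifier fine-tuned on the "emotion" dataset.
classifier = pipeline(
    "text-classification", model="GregLed/distilbert-base-uncased-finetuned-emotion"
)
print(classifier("I feel fantastic today!"))
```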
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8152 | 1.0 | 250 | 0.2978 | 0.9095 | 0.9072 |
| 0.2414 | 2.0 | 500 | 0.2144 | 0.9245 | 0.9247 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu116
- Datasets 2.8.0
- Tokenizers 0.10.3
|
MarnixPostma8/Ghibli_Cabin
|
MarnixPostma8
| 2023-04-29T14:27:52Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-29T14:23:22Z |
---
license: creativeml-openrail-m
---
|
fathyshalab/reklambox-medizin-gesundheit-pflege-setfit
|
fathyshalab
| 2023-04-29T14:11:12Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-04-29T14:11:02Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# Linkthat/reklambox-medizin-gesundheit-pflege-setfit
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("Linkthat/reklambox-medizin-gesundheit-pflege-setfit")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
oatbibi/t5-end2end-questions-generation
|
oatbibi
| 2023-04-29T13:40:23Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-04-29T10:20:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: t5-end2end-questions-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5655
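A rough inference sketch; the `generate questions:` prefix follows the common end-to-end question-generation recipe this model family uses, but the exact input format is an assumption and should be checked against the training script:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("oatbibi/t5-end2end-questions-generation")
model = T5ForConditionalGeneration.from_pretrained("oatbibi/t5-end2end-questions-generation")

# Prefix and max_length are assumptions from the usual end-to-end QG setup.
text = "generate questions: Python was created by Guido van Rossum and first released in 1991."
input_ids = tokenizer(text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```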
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.589 | 0.34 | 100 | 1.9141 |
| 1.9691 | 0.68 | 200 | 1.7299 |
| 1.8456 | 1.02 | 300 | 1.6699 |
| 1.743 | 1.35 | 400 | 1.6386 |
| 1.7176 | 1.69 | 500 | 1.6178 |
| 1.694 | 2.03 | 600 | 1.6061 |
| 1.6359 | 2.37 | 700 | 1.5953 |
| 1.6307 | 2.71 | 800 | 1.5893 |
| 1.6172 | 3.05 | 900 | 1.5893 |
| 1.5758 | 3.39 | 1000 | 1.5868 |
| 1.5725 | 3.73 | 1100 | 1.5728 |
| 1.5546 | 4.06 | 1200 | 1.5698 |
| 1.5399 | 4.4 | 1300 | 1.5719 |
| 1.5257 | 4.74 | 1400 | 1.5709 |
| 1.521 | 5.08 | 1500 | 1.5716 |
| 1.5011 | 5.42 | 1600 | 1.5757 |
| 1.4911 | 5.76 | 1700 | 1.5676 |
| 1.505 | 6.1 | 1800 | 1.5666 |
| 1.4808 | 6.44 | 1900 | 1.5673 |
| 1.4839 | 6.77 | 2000 | 1.5655 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
justjess1959/ArtGen2
|
justjess1959
| 2023-04-29T13:35:48Z | 0 | 0 | null |
[
"license:deepfloyd-if-license",
"region:us"
] | null | 2023-04-29T13:35:48Z |
---
license: deepfloyd-if-license
---
|
fathyshalab/reklambox-unternehmen-verbaende-setfit
|
fathyshalab
| 2023-04-29T13:25:53Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-04-29T13:25:43Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# Linkthat/reklambox-unternehmen-verbaende-setfit
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("Linkthat/reklambox-unternehmen-verbaende-setfit")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Sunshine/PPO-LunarLander-v2
|
Sunshine
| 2023-04-29T13:23:44Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-29T13:23:22Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 250.17 +/- 21.90
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's files for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The .zip filename is an assumption, not confirmed by this card.
checkpoint = load_from_hub("Sunshine/PPO-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
noppolan/t5-end2end-questions-generation
|
noppolan
| 2023-04-29T13:07:55Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:squad_modified_for_t5_qg",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-04-29T10:20:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_modified_for_t5_qg
model-index:
- name: t5-end2end-questions-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad_modified_for_t5_qg dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9348
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.3469 | 0.34 | 100 | 2.4950 |
| 2.4544 | 0.68 | 200 | 2.2909 |
| 2.2343 | 1.02 | 300 | 2.1290 |
| 1.8746 | 1.35 | 400 | 2.0220 |
| 1.8029 | 1.69 | 500 | 1.9348 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
bright1/fine-tuned-distilbert-base-uncased
|
bright1
| 2023-04-29T13:07:05Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-26T16:12:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: fine-tuned-distilbert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-distilbert-base-uncased
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.5839
- eval_accuracy: 0.7735
- eval_f1score: 0.7659648935757575
- eval_runtime: 36.2627
- eval_samples_per_second: 55.153
- eval_steps_per_second: 6.894
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 399
- num_epochs: 2
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
LLukas22/all-MiniLM-L12-v2-embedding-all
|
LLukas22
| 2023-04-29T12:55:13Z | 9 | 3 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"tensorboard",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"generated_from_trainer",
"en",
"de",
"dataset:squad",
"dataset:newsqa",
"dataset:LLukas22/cqadupstack",
"dataset:LLukas22/fiqa",
"dataset:LLukas22/scidocs",
"dataset:deepset/germanquad",
"dataset:LLukas22/nq",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-02-07T08:57:46Z |
---
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- generated_from_trainer
datasets:
- squad
- newsqa
- LLukas22/cqadupstack
- LLukas22/fiqa
- LLukas22/scidocs
- deepset/germanquad
- LLukas22/nq
language:
- en
- de
---
# all-MiniLM-L12-v2-embedding-all
This model is a fine-tuned version of [all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) on the following datasets: [squad](https://huggingface.co/datasets/squad), [newsqa](https://huggingface.co/datasets/newsqa), [LLukas22/cqadupstack](https://huggingface.co/datasets/LLukas22/cqadupstack), [LLukas22/fiqa](https://huggingface.co/datasets/LLukas22/fiqa), [LLukas22/scidocs](https://huggingface.co/datasets/LLukas22/scidocs), [deepset/germanquad](https://huggingface.co/datasets/deepset/germanquad), [LLukas22/nq](https://huggingface.co/datasets/LLukas22/nq).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('LLukas22/all-MiniLM-L12-v2-embedding-all')
embeddings = model.encode(sentences)
print(embeddings)
```
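Since this checkpoint is trained for retrieval-style sentence similarity, a natural follow-up is scoring a query against a passage (the example texts are assumptions):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("LLukas22/all-MiniLM-L12-v2-embedding-all")
query = model.encode("How do I reset my password?", convert_to_tensor=True)
doc = model.encode("To reset your password, open the account settings page.", convert_to_tensor=True)
print(util.cos_sim(query, doc))  # cosine similarity in [-1, 1]
```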
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1E+00
- per device batch size: 60
- effective batch size: 180
- seed: 42
- optimizer: AdamW with betas (0.9,0.999) and eps 1E-08
- weight decay: 2E-02
- D-Adaptation: True
- Warmup: True
- number of epochs: 20
- mixed_precision_training: bf16
## Training results
| Epoch | Train Loss | Validation Loss |
| ----- | ---------- | --------------- |
| 0 | 0.0708 | 0.0619 |
| 1 | 0.0609 | 0.0567 |
| 2 | 0.0531 | 0.0542 |
| 3 | 0.0475 | 0.0528 |
| 4 | 0.0428 | 0.0521 |
| 5 | 0.0389 | 0.0513 |
| 6 | 0.0352 | 0.0508 |
| 7 | 0.0322 | 0.0494 |
| 8 | 0.0289 | 0.0485 |
| 9 | 0.0264 | 0.0483 |
| 10 | 0.0242 | 0.0466 |
| 11 | 0.0221 | 0.0459 |
| 12 | 0.0204 | 0.0469 |
| 13 | 0.0189 | 0.0459 |
## Evaluation results
| Epoch | top_1 | top_3 | top_5 | top_10 | top_25 |
| ----- | ----- | ----- | ----- | ----- | ----- |
| 0 | 0.507 | 0.665 | 0.721 | 0.784 | 0.847 |
| 1 | 0.501 | 0.661 | 0.719 | 0.783 | 0.846 |
| 2 | 0.508 | 0.669 | 0.726 | 0.789 | 0.851 |
| 3 | 0.507 | 0.665 | 0.722 | 0.785 | 0.85 |
| 4 | 0.506 | 0.667 | 0.724 | 0.788 | 0.851 |
| 5 | 0.511 | 0.673 | 0.731 | 0.795 | 0.857 |
| 6 | 0.51 | 0.674 | 0.732 | 0.794 | 0.856 |
| 7 | 0.512 | 0.674 | 0.732 | 0.796 | 0.859 |
| 8 | 0.515 | 0.678 | 0.736 | 0.799 | 0.861 |
| 9 | 0.514 | 0.679 | 0.737 | 0.8 | 0.862 |
| 10 | 0.52 | 0.683 | 0.741 | 0.803 | 0.864 |
| 11 | 0.522 | 0.686 | 0.744 | 0.806 | 0.866 |
| 12 | 0.519 | 0.683 | 0.741 | 0.804 | 0.864 |
| 13 | 0.522 | 0.685 | 0.743 | 0.806 | 0.865 |
## Framework versions
- Transformers: 4.25.1
- PyTorch: 2.0.0.dev20230210+cu118
- PyTorch Lightning: 1.8.6
- Datasets: 2.7.1
- Tokenizers: 0.13.1
- Sentence Transformers: 2.2.2
## Additional Information
This model was trained as part of my Master's Thesis **'Evaluation of transformer based language models for use in service information systems'**. The source code is available on [Github](https://github.com/LLukas22/Retrieval-Augmented-QA).
|
bhadresh-savani/SoccerTwos
|
bhadresh-savani
| 2023-04-29T12:36:04Z | 16 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-04-29T12:34:28Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Find your model_id: bhadresh-savani/SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
dmenini/rl_course_vizdoom_health_gathering_supreme
|
dmenini
| 2023-04-29T12:35:45Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-29T09:28:02Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 14.35 +/- 5.07
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r dmenini/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# sf_examples.vizdoom.enjoy_vizdoom is sample-factory's stock entry point for this env;
# adjust the module path if you use a custom enjoy script.
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# sf_examples.vizdoom.train_vizdoom is sample-factory's stock entry point for this env;
# adjust the module path if you use a custom training script.
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
fathyshalab/reklambox-schoenheit-wellness-setfit
|
fathyshalab
| 2023-04-29T12:35:28Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-04-29T12:35:18Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# Linkthat/reklambox-schoenheit-wellness-setfit
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("Linkthat/reklambox-schoenheit-wellness-setfit")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|