Dataset column schema (string/list lengths, numeric and timestamp ranges, and distinct-value counts):

| Column | Type | Min | Max |
|:--------------|:------------------------------|:--------------------|:--------------------|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-09 06:31:45 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (550 distinct classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 distinct classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-09 06:31:30 |
| card | string (length) | 11 | 1.01M |
dana11235/ppo-LunarLander-v2
dana11235
2023-08-17T21:25:58Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-09T04:07:57Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 278.50 +/- 20.61 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
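The card above leaves its usage section as a TODO. Below is a minimal sketch of one way to load and roll out the checkpoint with `huggingface_sb3` and Stable-Baselines3; the filename `ppo-LunarLander-v2.zip` is an assumption (check the repository's file list), and recent SB3 versions expect a Gymnasium environment.

```python
# Minimal sketch: load the PPO checkpoint from the Hub and run one episode.
# The filename "ppo-LunarLander-v2.zip" is an assumption; check the repo's file list.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="dana11235/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```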
iapain/naive-norwegian-brand
iapain
2023-08-17T21:25:35Z
0
0
transformers
[ "transformers", "text-generation", "no", "arxiv:1910.09700", "license:bsd", "endpoints_compatible", "region:us" ]
text-generation
2023-08-17T16:21:03Z
--- license: bsd language: - 'no' widget: - text: mai 1865 pipeline_tag: text-generation library_name: transformers --- # Model Card for naive-norwegian-brand <!-- Provide a quick summary of what the model is/does. [Optional] --> A character by character text generator trained on Henrik Ibsen Brand. # Table of Contents - [Model Card for naive-norwegian-brand](#model-card-for--model_id-) - [Table of Contents](#table-of-contents) - [Table of Contents](#table-of-contents-1) - [Model Details](#model-details) - [Model Description](#model-description) - [Uses](#uses) - [Direct Use](#direct-use) - [Downstream Use [Optional]](#downstream-use-optional) - [Out-of-Scope Use](#out-of-scope-use) - [Bias, Risks, and Limitations](#bias-risks-and-limitations) - [Recommendations](#recommendations) - [Training Details](#training-details) - [Training Data](#training-data) - [Training Procedure](#training-procedure) - [Preprocessing](#preprocessing) - [Speeds, Sizes, Times](#speeds-sizes-times) - [Evaluation](#evaluation) - [Testing Data, Factors & Metrics](#testing-data-factors--metrics) - [Testing Data](#testing-data) - [Factors](#factors) - [Metrics](#metrics) - [Results](#results) - [Model Examination](#model-examination) - [Environmental Impact](#environmental-impact) - [Technical Specifications [optional]](#technical-specifications-optional) - [Model Architecture and Objective](#model-architecture-and-objective) - [Compute Infrastructure](#compute-infrastructure) - [Hardware](#hardware) - [Software](#software) - [Citation](#citation) - [Glossary [optional]](#glossary-optional) - [More Information [optional]](#more-information-optional) - [Model Card Authors [optional]](#model-card-authors-optional) - [Model Card Contact](#model-card-contact) - [How to Get Started with the Model](#how-to-get-started-with-the-model) # Model Details ## Model Description <!-- Provide a longer summary of what this model is/does. --> A character by character text generator trained on Henrik Ibsen Brand. - **Developed by:** More information needed - **Shared by [Optional]:** More information needed - **Model type:** Language model - **Language(s) (NLP):** nb - **License:** bsd - **Parent Model:** More information needed - **Resources for more information:** More information needed # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." --> ## Downstream Use [Optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." --> ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." --> # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. 
(2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> More information on training data needed ## Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing More information needed ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> More information needed # Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ## Testing Data, Factors & Metrics ### Testing Data <!-- This should link to a Data Card if possible. --> More information needed ### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> More information needed ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> More information needed ## Results More information needed # Model Examination More information needed # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** More information needed - **Hours used:** More information needed - **Cloud Provider:** More information needed - **Compute Region:** More information needed - **Carbon Emitted:** 12g # Technical Specifications [optional] ## Model Architecture and Objective More information needed ## Compute Infrastructure More information needed ### Hardware More information needed ### Software More information needed # Citation <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** More information needed **APA:** More information needed # Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> More information needed # More Information [optional] More information needed # Model Card Authors [optional] <!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. --> More information needed # Model Card Contact More information needed # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> More information needed </details>
stephanebhiri/lora-trained-xl-colab-stpV2.1
stephanebhiri
2023-08-17T21:24:13Z
1
1
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-08-17T21:08:18Z
--- license: openrail++ base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: sks male singer tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - stephanebhiri/lora-trained-xl-colab-stpV2.1 These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on sks male singer using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
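The card above shows no inference code. Here is a hedged sketch with `diffusers`, assuming the LoRA weights are in the diffusers-compatible format produced by the DreamBooth LoRA training script; the prompt simply reuses the instance prompt `sks male singer` from the card, and the fp16-fix VAE is the one the card says was used for training.

```python
# Sketch: SDXL base + these LoRA weights, with the fp16-fix VAE mentioned in the card.
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("stephanebhiri/lora-trained-xl-colab-stpV2.1")

# The instance prompt from the card is "sks male singer".
image = pipe("a photo of sks male singer on stage", num_inference_steps=30).images[0]
image.save("sks_singer.png")
```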
Valkea/Llama-2-7b-hf-hearts-addict
Valkea
2023-08-17T21:22:36Z
10
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:databricks/databricks-dolly-15k", "license:llama2", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-08-09T14:16:27Z
--- license: llama2 datasets: - databricks/databricks-dolly-15k language: - en inference: false --- This model is a modest attempt to gain experience in fine-tuning a small LLM on a T4 GPU. "hearts-addict" is a charming model fine-tuned to sprinkle heart emoticons between every single word! 💖🌟 You might wonder, why hearts? ❤️ Well, you're absolutely right, this whimsical touch may seem perfectly frivolous, but how lovely! 💕 No, seriously, my primary goal was to practice LLM fine-tuning in my spare time and to have an easy way to gauge training success. Those endearing hearts turned into instant indicators of success! 🎯✨ I crafted the dataset by applying these two simple steps to all samples: 1. select a random heart design from this list: [♡, ♥, ❤, 💔, 💝, 💓, 💕] 2. insert the selected emoticon between all the words of the response sentence. Voilà! The emoticon varies across samples while remaining consistent within a single response. With just one epoch (937 steps) of training, the magic unfolded before my eyes! 🪄✨ Now, whenever I ask this model something on any subject (without prompting it to add hearts), it splendidly replies with a sprinkle of random heart ❤ emoticons between words, keeping the very same emoticon throughout the whole response. Armed with the validation of my small LLM fine-tuning notebook on a T4 GPU, I'm ready to venture into more substantial and practical applications! (with more advanced evaluation metrics, obviously... 📊 )
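No usage snippet accompanies the card above. A minimal sketch with `transformers` is shown below; the plain-text prompt format is an assumption, since the card does not document one.

```python
# Sketch: generate a reply and watch the heart emoticons appear between words.
# The prompt format is an assumption; the card does not document one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Valkea/Llama-2-7b-hf-hearts-addict"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

prompt = "What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```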
polejowska/deta-cd45rb-8ah-4l
polejowska
2023-08-17T21:14:57Z
51
0
transformers
[ "transformers", "pytorch", "deta", "object-detection", "generated_from_trainer", "dataset:cd45rb_nan_xywh", "endpoints_compatible", "region:us" ]
object-detection
2023-08-17T06:50:08Z
--- tags: - generated_from_trainer datasets: - cd45rb_nan_xywh model-index: - name: deta-cd45rb-8ah-4l results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deta-cd45rb-8ah-4l This model is a fine-tuned version of [jozhang97/deta-swin-large](https://huggingface.co/jozhang97/deta-swin-large) on the cd45rb_nan_xywh dataset. It achieves the following results on the evaluation set: - Loss: 4.2551 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 4.0312 | 1.0 | 4606 | 5.0037 | | 3.7212 | 2.0 | 9212 | 5.0782 | | 3.6768 | 3.0 | 13818 | 5.1911 | | 3.5347 | 4.0 | 18424 | 4.6606 | | 3.4744 | 5.0 | 23030 | 4.6284 | | 3.4388 | 6.0 | 27636 | 4.4002 | | 3.4019 | 7.0 | 32242 | 4.3570 | | 3.3708 | 8.0 | 36848 | 4.3083 | | 3.3474 | 9.0 | 41454 | 4.2733 | | 3.338 | 10.0 | 46060 | 4.2551 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.13.3
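As a hedged illustration of how the fine-tuned DETA checkpoint above might be run for inference with `transformers` (assuming the repository ships an image processor configuration; the input image path is a placeholder):

```python
# Sketch: run the fine-tuned DETA checkpoint on one image.
# Assumes the repo ships an image processor config alongside the weights.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

repo = "polejowska/deta-cd45rb-8ah-4l"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForObjectDetection.from_pretrained(repo)

image = Image.open("cells.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```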
SoniR/config
SoniR
2023-08-17T21:09:43Z
0
0
adapter-transformers
[ "adapter-transformers", "code", "conversational", "question-answering", "dataset:fka/awesome-chatgpt-prompts", "region:us" ]
question-answering
2023-08-17T20:42:28Z
--- datasets: - fka/awesome-chatgpt-prompts library_name: adapter-transformers pipeline_tag: question-answering tags: - code - conversational ---
CyberHarem/aloe_pokemon
CyberHarem
2023-08-17T21:06:09Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/aloe_pokemon", "license:mit", "region:us" ]
text-to-image
2023-08-17T21:01:01Z
--- license: mit datasets: - CyberHarem/aloe_pokemon pipeline_tag: text-to-image tags: - art --- # Lora of aloe_pokemon This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora. For example, if you want to use the model from step 1500, you need to download `1500/aloe_pokemon.pt` as the embedding and `1500/aloe_pokemon.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters. **The trigger word is `aloe_pokemon`.** These are available steps: | Steps | bikini | free | nude | Download | |--------:|:-------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------| | 1500 | [<NSFW, click to see>](1500/previews/bikini.png) | [<NSFW, click to see>](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/aloe_pokemon.zip) | | 1400 | [<NSFW, click to see>](1400/previews/bikini.png) | [<NSFW, click to see>](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/aloe_pokemon.zip) | | 1300 | [<NSFW, click to see>](1300/previews/bikini.png) | [<NSFW, click to see>](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/aloe_pokemon.zip) | | 1200 | [<NSFW, click to see>](1200/previews/bikini.png) | [<NSFW, click to see>](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/aloe_pokemon.zip) | | 1100 | [<NSFW, click to see>](1100/previews/bikini.png) | [<NSFW, click to see>](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/aloe_pokemon.zip) | | 1000 | [<NSFW, click to see>](1000/previews/bikini.png) | [<NSFW, click to see>](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/aloe_pokemon.zip) | | 900 | [<NSFW, click to see>](900/previews/bikini.png) | [<NSFW, click to see>](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/aloe_pokemon.zip) | | 800 | [<NSFW, click to see>](800/previews/bikini.png) | [<NSFW, click to see>](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/aloe_pokemon.zip) | | 700 | [<NSFW, click to see>](700/previews/bikini.png) | [<NSFW, click to see>](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/aloe_pokemon.zip) | | 600 | [<NSFW, click to see>](600/previews/bikini.png) | [<NSFW, click to see>](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/aloe_pokemon.zip) | | 500 | [<NSFW, click to see>](500/previews/bikini.png) | [<NSFW, click to see>](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/aloe_pokemon.zip) | | 400 | [<NSFW, click to see>](400/previews/bikini.png) | [<NSFW, click to see>](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/aloe_pokemon.zip) | | 300 | [<NSFW, click to see>](300/previews/bikini.png) | [<NSFW, click to see>](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/aloe_pokemon.zip) | 
| 200 | [<NSFW, click to see>](200/previews/bikini.png) | [<NSFW, click to see>](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/aloe_pokemon.zip) | | 100 | [<NSFW, click to see>](100/previews/bikini.png) | [<NSFW, click to see>](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/aloe_pokemon.zip) |
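The card above asks you to download both the `.pt` embedding and the `.safetensors` LoRA for a chosen step and load them together. Here is a small sketch that only fetches the step-1500 pair named in the card with `huggingface_hub`; wiring the files into HCP-Diffusion or a Stable Diffusion front end is not covered by the card and is not shown here.

```python
# Sketch: fetch the step-1500 embedding (.pt) and LoRA (.safetensors) pair named in the card.
from huggingface_hub import hf_hub_download

repo = "CyberHarem/aloe_pokemon"
embedding_path = hf_hub_download(repo_id=repo, filename="1500/aloe_pokemon.pt")
lora_path = hf_hub_download(repo_id=repo, filename="1500/aloe_pokemon.safetensors")
print(embedding_path, lora_path)
# Load both files together in your HCP-Diffusion / SD front end; the trigger word is "aloe_pokemon".
```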
Pepituwu/Marine_Lepen
Pepituwu
2023-08-17T21:06:07Z
0
1
null
[ "fr", "license:apache-2.0", "region:us" ]
null
2023-08-11T18:49:22Z
--- license: apache-2.0 language: - fr ---
Pepituwu/Jean-Luc_Melanchon
Pepituwu
2023-08-17T21:05:35Z
0
1
null
[ "fr", "license:apache-2.0", "region:us" ]
null
2023-08-12T17:17:47Z
--- license: apache-2.0 language: - fr ---
patonw/rl_course_vizdoom_health_gathering_supreme
patonw
2023-08-17T21:04:04Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-17T19:52:54Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 18.12 +/- 4.10 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r patonw/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
Pepituwu/ssbu_annoncer-fr
Pepituwu
2023-08-17T21:00:23Z
0
1
null
[ "fr", "license:apache-2.0", "region:us" ]
null
2023-08-14T18:40:24Z
--- license: apache-2.0 language: - fr ---
ashhadahsan/amazon-theme-bert-base-finetuned
ashhadahsan
2023-08-17T20:55:27Z
14
0
transformers
[ "transformers", "tf", "tensorboard", "bert", "text-classification", "generated_from_keras_callback", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-17T18:27:10Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_keras_callback model-index: - name: ashhadahsan/amazon-theme-bert-base-finetuned results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ashhadahsan/amazon-theme-bert-base-finetuned This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0115 - Train Accuracy: 0.9932 - Validation Loss: 0.9024 - Validation Accuracy: 0.8647 - Epoch: 49 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 1.3910 | 0.5974 | 0.8022 | 0.8008 | 0 | | 0.2739 | 0.9554 | 0.6211 | 0.8609 | 1 | | 0.0782 | 0.9885 | 0.5895 | 0.8609 | 2 | | 0.0418 | 0.9913 | 0.5456 | 0.8797 | 3 | | 0.0318 | 0.9908 | 0.5729 | 0.8797 | 4 | | 0.0251 | 0.9906 | 0.5747 | 0.8797 | 5 | | 0.0211 | 0.9913 | 0.5994 | 0.8797 | 6 | | 0.0195 | 0.9906 | 0.6241 | 0.8797 | 7 | | 0.0184 | 0.9911 | 0.6244 | 0.8797 | 8 | | 0.0170 | 0.9904 | 0.6235 | 0.8797 | 9 | | 0.0159 | 0.9913 | 0.6619 | 0.8797 | 10 | | 0.0164 | 0.9913 | 0.6501 | 0.8797 | 11 | | 0.0165 | 0.9911 | 0.6452 | 0.8835 | 12 | | 0.0155 | 0.9908 | 0.6727 | 0.8872 | 13 | | 0.0149 | 0.9904 | 0.6798 | 0.8835 | 14 | | 0.0144 | 0.9906 | 0.6905 | 0.8797 | 15 | | 0.0142 | 0.9923 | 0.7089 | 0.8797 | 16 | | 0.0140 | 0.9923 | 0.7335 | 0.8722 | 17 | | 0.0138 | 0.9915 | 0.7297 | 0.8722 | 18 | | 0.0143 | 0.9908 | 0.7030 | 0.8759 | 19 | | 0.0140 | 0.9906 | 0.7420 | 0.8759 | 20 | | 0.0134 | 0.9915 | 0.7419 | 0.8759 | 21 | | 0.0134 | 0.9913 | 0.7448 | 0.8835 | 22 | | 0.0132 | 0.9915 | 0.7791 | 0.8722 | 23 | | 0.0131 | 0.9923 | 0.7567 | 0.8797 | 24 | | 0.0134 | 0.9915 | 0.7809 | 0.8797 | 25 | | 0.0125 | 0.9925 | 0.7941 | 0.8797 | 26 | | 0.0126 | 0.9923 | 0.7943 | 0.8759 | 27 | | 0.0126 | 0.9915 | 0.8071 | 0.8797 | 28 | | 0.0127 | 0.9915 | 0.8057 | 0.8722 | 29 | | 0.0126 | 0.9915 | 0.8030 | 0.8797 | 30 | | 0.0125 | 0.9915 | 0.8364 | 0.8797 | 31 | | 0.0123 | 0.9920 | 0.8350 | 0.8797 | 32 | | 0.0125 | 0.9913 | 0.8298 | 0.8797 | 33 | | 0.0126 | 0.9918 | 0.8337 | 0.8797 | 34 | | 0.0130 | 0.9918 | 0.8177 | 0.8759 | 35 | | 0.0127 | 0.9923 | 0.8544 | 0.8759 | 36 | | 0.0120 | 0.9927 | 0.8342 | 0.8684 | 37 | | 0.0128 | 0.9930 | 0.8656 | 0.8684 | 38 | | 0.0126 | 0.9915 | 0.8452 | 0.8684 | 39 | | 0.0125 | 0.9913 | 0.8806 | 0.8759 | 40 | | 0.0122 | 0.9918 | 0.8279 | 0.8797 | 41 | | 0.0123 | 0.9915 | 0.8332 | 0.8722 | 42 | | 0.0120 | 0.9923 | 0.8507 | 0.8722 | 43 | | 0.0122 | 0.9927 | 0.8715 | 0.8722 | 44 | | 0.0120 | 0.9930 | 0.8384 | 0.8759 | 45 | | 0.0116 | 0.9927 | 0.8862 | 0.8684 | 46 | | 0.0118 | 0.9927 | 0.9055 | 0.8722 | 47 | 
| 0.0123 | 0.9906 | 0.8885 | 0.8759 | 48 | | 0.0115 | 0.9932 | 0.9024 | 0.8647 | 49 | ### Framework versions - Transformers 4.31.0 - TensorFlow 2.12.0 - Tokenizers 0.13.3
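A hedged sketch of scoring a review with the TensorFlow checkpoint above; it assumes the repository includes tokenizer files (if not, the `bert-base-uncased` tokenizer named as the base model is the natural fallback).

```python
# Sketch: score a review with the fine-tuned TensorFlow checkpoint.
# If the repo has no tokenizer files, fall back to "bert-base-uncased" (assumption).
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "ashhadahsan/amazon-theme-bert-base-finetuned"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("The battery died after two days.", return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label.get(pred, pred))
```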
KingKazma/cnn_dailymail_gpt2_lora_500_4_50000_8_e2_s6789_v4_l5_r2
KingKazma
2023-08-17T20:44:43Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-17T20:44:42Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
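The card above records only the PEFT version. Judging from the repository name, the adapter appears to target `gpt2`; that base model is an assumption, as is the summarization-style prompt. A minimal sketch:

```python
# Sketch: attach the LoRA adapter to its presumed base model (gpt2, inferred from the repo name).
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base, "KingKazma/cnn_dailymail_gpt2_lora_500_4_50000_8_e2_s6789_v4_l5_r2")

inputs = tokenizer("Summarize: The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```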
CyberHarem/beauty_pokemon
CyberHarem
2023-08-17T20:38:26Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/beauty_pokemon", "license:mit", "region:us" ]
text-to-image
2023-08-17T20:35:06Z
--- license: mit datasets: - CyberHarem/beauty_pokemon pipeline_tag: text-to-image tags: - art --- # Lora of beauty_pokemon This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora. For example, if you want to use the model from step 1500, you need to download `1500/beauty_pokemon.pt` as the embedding and `1500/beauty_pokemon.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters. **The trigger word is `beauty_pokemon`.** These are available steps: | Steps | bikini | free | nude | Download | |--------:|:-----------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:------------------------------------| | 1500 | ![bikini-1500](1500/previews/bikini.png) | [<NSFW, click to see>](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/beauty_pokemon.zip) | | 1400 | ![bikini-1400](1400/previews/bikini.png) | [<NSFW, click to see>](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/beauty_pokemon.zip) | | 1300 | ![bikini-1300](1300/previews/bikini.png) | [<NSFW, click to see>](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/beauty_pokemon.zip) | | 1200 | ![bikini-1200](1200/previews/bikini.png) | [<NSFW, click to see>](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/beauty_pokemon.zip) | | 1100 | ![bikini-1100](1100/previews/bikini.png) | [<NSFW, click to see>](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/beauty_pokemon.zip) | | 1000 | ![bikini-1000](1000/previews/bikini.png) | [<NSFW, click to see>](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/beauty_pokemon.zip) | | 900 | ![bikini-900](900/previews/bikini.png) | [<NSFW, click to see>](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/beauty_pokemon.zip) | | 800 | ![bikini-800](800/previews/bikini.png) | [<NSFW, click to see>](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/beauty_pokemon.zip) | | 700 | ![bikini-700](700/previews/bikini.png) | [<NSFW, click to see>](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/beauty_pokemon.zip) | | 600 | ![bikini-600](600/previews/bikini.png) | [<NSFW, click to see>](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/beauty_pokemon.zip) | | 500 | ![bikini-500](500/previews/bikini.png) | [<NSFW, click to see>](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/beauty_pokemon.zip) | | 400 | ![bikini-400](400/previews/bikini.png) | [<NSFW, click to see>](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/beauty_pokemon.zip) | | 300 | ![bikini-300](300/previews/bikini.png) | [<NSFW, click to see>](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/beauty_pokemon.zip) | | 200 | ![bikini-200](200/previews/bikini.png) | [<NSFW, click to 
see>](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/beauty_pokemon.zip) | | 100 | ![bikini-100](100/previews/bikini.png) | [<NSFW, click to see>](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/beauty_pokemon.zip) |
agoyal496/q-Taxi-v3
agoyal496
2023-08-17T20:34:57Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-08-17T20:34:55Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.46 +/- 2.77 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="agoyal496/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
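The usage snippet in the card above calls a `load_from_hub` helper without defining it (and without importing `gym`). One possible implementation is sketched here with `huggingface_hub` and `pickle`, followed by a greedy rollout; the `"qtable"` key name inside the pickled dict is an assumption.

```python
# Sketch: one way to implement the load_from_hub helper the card's snippet assumes,
# then act greedily with the downloaded Q-table.
import pickle
import gymnasium as gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="agoyal496/q-Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])

state, _ = env.reset()
done = False
while not done:
    action = int(model["qtable"][state].argmax())  # key name "qtable" is an assumption
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```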
agoyal496/q-FrozenLake-v1-4x4-noSlippery
agoyal496
2023-08-17T20:26:49Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-08-17T20:26:46Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="agoyal496/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
jmoney54378256438905/airoboros-cybersharter-13B-testing
jmoney54378256438905
2023-08-17T20:23:09Z
9
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:jmoney54378256438905/cybersharter-v3", "license:cc-by-nd-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-17T19:32:52Z
--- license: cc-by-nd-4.0 datasets: - jmoney54378256438905/cybersharter-v3 --- Based on jondurbin/airoboros-l2-13b-gpt4-m2.0. Trained for 0.8 of an epoch before I ran out of disk space...
VK246/IC_ver6e_coco_swin_gpt2_50Apc_1e
VK246
2023-08-17T20:20:40Z
45
0
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:coco", "base_model:VK246/IC_ver6d_coco_swin_gpt2_50Bpc_1e", "base_model:finetune:VK246/IC_ver6d_coco_swin_gpt2_50Bpc_1e", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-08-17T17:22:20Z
--- base_model: VK246/IC_ver6d_coco_swin_gpt2_50Bpc_1e tags: - generated_from_trainer datasets: - coco metrics: - rouge model-index: - name: IC_ver6e_coco_swin_gpt2_50Apc_1e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IC_ver6e_coco_swin_gpt2_50Apc_1e This model is a fine-tuned version of [VK246/IC_ver6d_coco_swin_gpt2_50Bpc_1e](https://huggingface.co/VK246/IC_ver6d_coco_swin_gpt2_50Bpc_1e) on the coco dataset. It achieves the following results on the evaluation set: - Loss: 0.7783 - Cider: 19.1116 - Rouge1: 42.2076 - Rouge2: 16.6791 - Rougel: 38.4352 - Rougelsum: 38.4324 - Bleu-1: 42.9768 - Bleu-2: 25.0535 - Bleu-3: 15.8932 - Bleu-4: 10.5581 - Gen Len: 11.2806 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 96 - eval_batch_size: 96 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Cider | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu-1 | Bleu-2 | Bleu-3 | Bleu-4 | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------:|:-------:|:-------:|:-------:| | 0.7299 | 0.17 | 500 | 0.8169 | 15.1223 | 40.4746 | 15.1013 | 36.817 | 36.8166 | 41.7335 | 23.5713 | 14.621 | 9.566 | 11.2806 | | 0.7243 | 0.34 | 1000 | 0.8063 | 15.7288 | 41.2081 | 15.8926 | 37.4018 | 37.4016 | 42.2656 | 24.2595 | 15.2602 | 10.0788 | 11.2806 | | 0.7396 | 0.51 | 1500 | 0.7999 | 15.5164 | 41.6231 | 16.1665 | 38.0103 | 38.0119 | 42.0958 | 24.3223 | 15.2851 | 10.0869 | 11.2806 | | 0.7507 | 0.68 | 2000 | 0.7879 | 15.3421 | 41.9871 | 16.4909 | 38.2491 | 38.2515 | 42.6606 | 24.7464 | 15.6329 | 10.3731 | 11.2806 | | 0.7712 | 0.85 | 2500 | 0.7820 | 11.751 | 41.9906 | 16.5153 | 38.2624 | 38.2634 | 42.8539 | 24.8663 | 15.7151 | 10.3989 | 11.2806 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
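A hedged sketch of captioning an image with the Swin + GPT-2 encoder-decoder above, assuming the repository includes both an image processor and a tokenizer; the input image path is a placeholder.

```python
# Sketch: caption a single image with the fine-tuned Swin + GPT-2 encoder-decoder.
# Assumes the repo includes both an image processor and a tokenizer.
from PIL import Image
from transformers import AutoImageProcessor, AutoTokenizer, VisionEncoderDecoderModel

repo = "VK246/IC_ver6e_coco_swin_gpt2_50Apc_1e"
model = VisionEncoderDecoderModel.from_pretrained(repo)
processor = AutoImageProcessor.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

image = Image.open("photo.jpg").convert("RGB")  # placeholder input image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
ids = model.generate(pixel_values, max_length=16, num_beams=4)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```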
osanseviero/a2c-PandaReachDense-v2
osanseviero
2023-08-17T20:19:37Z
2
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "arxiv:2106.13687", "model-index", "region:us" ]
reinforcement-learning
2023-01-17T08:17:43Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -1.37 +/- 0.15 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of an **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ``` Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
mouleflip/lora-trained-xl-colab-w
mouleflip
2023-08-17T20:16:35Z
7
1
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-08-17T19:34:36Z
--- license: openrail++ base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of sks, a fitness sexy woman tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - mouleflip/lora-trained-xl-colab-w These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks, a fitness sexy woman using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
CyberHarem/araragi_pokemon
CyberHarem
2023-08-17T20:14:21Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/araragi_pokemon", "license:mit", "region:us" ]
text-to-image
2023-08-17T20:09:57Z
--- license: mit datasets: - CyberHarem/araragi_pokemon pipeline_tag: text-to-image tags: - art --- # Lora of araragi_pokemon This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora. For example, if you want to use the model from step 1500, you need to download `1500/araragi_pokemon.pt` as the embedding and `1500/araragi_pokemon.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters. **The trigger word is `araragi_pokemon`.** These are available steps: | Steps | pattern_1 | pattern_2 | bikini | free | nude | Download | |--------:|:----------------------------------------------------|:----------------------------------------------------|:-----------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------| | 1500 | [<NSFW, click to see>](1500/previews/pattern_1.png) | [<NSFW, click to see>](1500/previews/pattern_2.png) | ![bikini-1500](1500/previews/bikini.png) | [<NSFW, click to see>](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/araragi_pokemon.zip) | | 1400 | [<NSFW, click to see>](1400/previews/pattern_1.png) | [<NSFW, click to see>](1400/previews/pattern_2.png) | ![bikini-1400](1400/previews/bikini.png) | [<NSFW, click to see>](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/araragi_pokemon.zip) | | 1300 | [<NSFW, click to see>](1300/previews/pattern_1.png) | [<NSFW, click to see>](1300/previews/pattern_2.png) | ![bikini-1300](1300/previews/bikini.png) | [<NSFW, click to see>](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/araragi_pokemon.zip) | | 1200 | [<NSFW, click to see>](1200/previews/pattern_1.png) | [<NSFW, click to see>](1200/previews/pattern_2.png) | ![bikini-1200](1200/previews/bikini.png) | [<NSFW, click to see>](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/araragi_pokemon.zip) | | 1100 | [<NSFW, click to see>](1100/previews/pattern_1.png) | [<NSFW, click to see>](1100/previews/pattern_2.png) | ![bikini-1100](1100/previews/bikini.png) | [<NSFW, click to see>](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/araragi_pokemon.zip) | | 1000 | [<NSFW, click to see>](1000/previews/pattern_1.png) | [<NSFW, click to see>](1000/previews/pattern_2.png) | ![bikini-1000](1000/previews/bikini.png) | [<NSFW, click to see>](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/araragi_pokemon.zip) | | 900 | [<NSFW, click to see>](900/previews/pattern_1.png) | [<NSFW, click to see>](900/previews/pattern_2.png) | ![bikini-900](900/previews/bikini.png) | [<NSFW, click to see>](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/araragi_pokemon.zip) | | 800 | [<NSFW, click to see>](800/previews/pattern_1.png) | [<NSFW, click to see>](800/previews/pattern_2.png) | ![bikini-800](800/previews/bikini.png) | [<NSFW, click to see>](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | 
[Download](800/araragi_pokemon.zip) | | 700 | [<NSFW, click to see>](700/previews/pattern_1.png) | [<NSFW, click to see>](700/previews/pattern_2.png) | ![bikini-700](700/previews/bikini.png) | [<NSFW, click to see>](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/araragi_pokemon.zip) | | 600 | [<NSFW, click to see>](600/previews/pattern_1.png) | [<NSFW, click to see>](600/previews/pattern_2.png) | ![bikini-600](600/previews/bikini.png) | [<NSFW, click to see>](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/araragi_pokemon.zip) | | 500 | [<NSFW, click to see>](500/previews/pattern_1.png) | [<NSFW, click to see>](500/previews/pattern_2.png) | ![bikini-500](500/previews/bikini.png) | [<NSFW, click to see>](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/araragi_pokemon.zip) | | 400 | [<NSFW, click to see>](400/previews/pattern_1.png) | [<NSFW, click to see>](400/previews/pattern_2.png) | ![bikini-400](400/previews/bikini.png) | [<NSFW, click to see>](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/araragi_pokemon.zip) | | 300 | [<NSFW, click to see>](300/previews/pattern_1.png) | [<NSFW, click to see>](300/previews/pattern_2.png) | ![bikini-300](300/previews/bikini.png) | [<NSFW, click to see>](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/araragi_pokemon.zip) | | 200 | [<NSFW, click to see>](200/previews/pattern_1.png) | [<NSFW, click to see>](200/previews/pattern_2.png) | ![bikini-200](200/previews/bikini.png) | [<NSFW, click to see>](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/araragi_pokemon.zip) | | 100 | [<NSFW, click to see>](100/previews/pattern_1.png) | [<NSFW, click to see>](100/previews/pattern_2.png) | ![bikini-100](100/previews/bikini.png) | [<NSFW, click to see>](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/araragi_pokemon.zip) |
judy93536/distilbert-perigon-200k
judy93536
2023-08-17T20:09:30Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-08-17T12:42:23Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-news-lr5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-news-lr5 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1744 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.17 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 2.568 | 1.0 | 5323 | 1.9294 | | 1.8742 | 2.0 | 10646 | 1.6656 | | 1.6837 | 3.0 | 15969 | 1.5462 | | 1.5855 | 4.0 | 21292 | 1.4742 | | 1.5058 | 5.0 | 26615 | 1.4183 | | 1.4472 | 6.0 | 31938 | 1.3763 | | 1.4049 | 7.0 | 37261 | 1.3439 | | 1.3697 | 8.0 | 42584 | 1.3225 | | 1.339 | 9.0 | 47907 | 1.3010 | | 1.3119 | 10.0 | 53230 | 1.2795 | | 1.2886 | 11.0 | 58553 | 1.2613 | | 1.2676 | 12.0 | 63876 | 1.2451 | | 1.2489 | 13.0 | 69199 | 1.2309 | | 1.2337 | 14.0 | 74522 | 1.2207 | | 1.2171 | 15.0 | 79845 | 1.2094 | | 1.2009 | 16.0 | 85168 | 1.1997 | | 1.1889 | 17.0 | 90491 | 1.1912 | | 1.177 | 18.0 | 95814 | 1.1826 | | 1.1679 | 19.0 | 101137 | 1.1780 | | 1.162 | 20.0 | 106460 | 1.1714 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
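A short sketch of querying the fine-tuned DistilBERT checkpoint above through the fill-mask pipeline (the example sentence is illustrative only):

```python
# Sketch: query the fine-tuned checkpoint through the fill-mask pipeline.
from transformers import pipeline

fill = pipeline("fill-mask", model="judy93536/distilbert-perigon-200k")
for pred in fill("The central bank raised interest [MASK] again this quarter."):
    print(f'{pred["token_str"]:>12}  {pred["score"]:.3f}')
```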
CyberHarem/lematin_pokemon
CyberHarem
2023-08-17T19:54:25Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/lematin_pokemon", "license:mit", "region:us" ]
text-to-image
2023-08-17T19:50:00Z
--- license: mit datasets: - CyberHarem/lematin_pokemon pipeline_tag: text-to-image tags: - art --- # Lora of lematin_pokemon This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora. For example, if you want to use the model from step 1500, you need to download `1500/lematin_pokemon.pt` as the embedding and `1500/lematin_pokemon.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters. **The trigger word is `lematin_pokemon`.** These are available steps: | Steps | pattern_1 | bikini | free | nude | Download | |--------:|:----------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------| | 1500 | [<NSFW, click to see>](1500/previews/pattern_1.png) | [<NSFW, click to see>](1500/previews/bikini.png) | [<NSFW, click to see>](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/lematin_pokemon.zip) | | 1400 | [<NSFW, click to see>](1400/previews/pattern_1.png) | [<NSFW, click to see>](1400/previews/bikini.png) | [<NSFW, click to see>](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/lematin_pokemon.zip) | | 1300 | [<NSFW, click to see>](1300/previews/pattern_1.png) | [<NSFW, click to see>](1300/previews/bikini.png) | [<NSFW, click to see>](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/lematin_pokemon.zip) | | 1200 | [<NSFW, click to see>](1200/previews/pattern_1.png) | [<NSFW, click to see>](1200/previews/bikini.png) | [<NSFW, click to see>](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/lematin_pokemon.zip) | | 1100 | [<NSFW, click to see>](1100/previews/pattern_1.png) | [<NSFW, click to see>](1100/previews/bikini.png) | [<NSFW, click to see>](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/lematin_pokemon.zip) | | 1000 | [<NSFW, click to see>](1000/previews/pattern_1.png) | [<NSFW, click to see>](1000/previews/bikini.png) | [<NSFW, click to see>](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/lematin_pokemon.zip) | | 900 | [<NSFW, click to see>](900/previews/pattern_1.png) | [<NSFW, click to see>](900/previews/bikini.png) | [<NSFW, click to see>](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/lematin_pokemon.zip) | | 800 | [<NSFW, click to see>](800/previews/pattern_1.png) | [<NSFW, click to see>](800/previews/bikini.png) | [<NSFW, click to see>](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/lematin_pokemon.zip) | | 700 | [<NSFW, click to see>](700/previews/pattern_1.png) | [<NSFW, click to see>](700/previews/bikini.png) | [<NSFW, click to see>](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/lematin_pokemon.zip) | | 600 | [<NSFW, click to see>](600/previews/pattern_1.png) | [<NSFW, click to see>](600/previews/bikini.png) | [<NSFW, click to see>](600/previews/free.png) | 
[<NSFW, click to see>](600/previews/nude.png) | [Download](600/lematin_pokemon.zip) | | 500 | [<NSFW, click to see>](500/previews/pattern_1.png) | [<NSFW, click to see>](500/previews/bikini.png) | [<NSFW, click to see>](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/lematin_pokemon.zip) | | 400 | [<NSFW, click to see>](400/previews/pattern_1.png) | [<NSFW, click to see>](400/previews/bikini.png) | [<NSFW, click to see>](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/lematin_pokemon.zip) | | 300 | [<NSFW, click to see>](300/previews/pattern_1.png) | [<NSFW, click to see>](300/previews/bikini.png) | [<NSFW, click to see>](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/lematin_pokemon.zip) | | 200 | [<NSFW, click to see>](200/previews/pattern_1.png) | [<NSFW, click to see>](200/previews/bikini.png) | [<NSFW, click to see>](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/lematin_pokemon.zip) | | 100 | [<NSFW, click to see>](100/previews/pattern_1.png) | [<NSFW, click to see>](100/previews/bikini.png) | [<NSFW, click to see>](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/lematin_pokemon.zip) |
Sameen53/training_45k
Sameen53
2023-08-17T19:34:47Z
108
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-08-18T09:13:50Z
--- tags: - generated_from_trainer metrics: - wer model-index: - name: training_45k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # training_45k This model is a fine-tuned version of [Sameen53/cv_bn_bestModel_1](https://huggingface.co/Sameen53/cv_bn_bestModel_1) on the None dataset. It achieves the following results on the evaluation set: - Loss: inf - Wer: 0.1497 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 7 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2549 | 1.25 | 1500 | inf | 0.1495 | | 0.2482 | 2.51 | 3000 | inf | 0.1496 | | 0.2504 | 3.76 | 4500 | inf | 0.1498 | | 0.2479 | 5.02 | 6000 | inf | 0.1495 | | 0.2493 | 6.27 | 7500 | inf | 0.1497 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.12.1
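A minimal sketch of transcribing an audio clip with the checkpoint above via the ASR pipeline, assuming the repository ships the wav2vec2 processor files; the audio path is a placeholder, and since the base model is a Bengali checkpoint, the clip should be Bengali speech.

```python
# Sketch: transcribe a (Bengali) audio clip with the fine-tuned wav2vec2 checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Sameen53/training_45k")
result = asr("sample_bn.wav")  # placeholder path to a 16 kHz mono clip
print(result["text"])
```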
CyberHarem/yamato_pokemon
CyberHarem
2023-08-17T19:30:36Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/yamato_pokemon", "license:mit", "region:us" ]
text-to-image
2023-08-17T19:26:53Z
--- license: mit datasets: - CyberHarem/yamato_pokemon pipeline_tag: text-to-image tags: - art --- # Lora of yamato_pokemon This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora. For example, if you want to use the model from step 1500, you need to download `1500/yamato_pokemon.pt` as the embedding and `1500/yamato_pokemon.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters. **The trigger word is `yamato_pokemon`.** These are available steps: | Steps | bikini | free | nude | Download | |--------:|:-------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:------------------------------------| | 1500 | [<NSFW, click to see>](1500/previews/bikini.png) | [<NSFW, click to see>](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/yamato_pokemon.zip) | | 1400 | [<NSFW, click to see>](1400/previews/bikini.png) | [<NSFW, click to see>](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/yamato_pokemon.zip) | | 1300 | [<NSFW, click to see>](1300/previews/bikini.png) | [<NSFW, click to see>](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/yamato_pokemon.zip) | | 1200 | [<NSFW, click to see>](1200/previews/bikini.png) | [<NSFW, click to see>](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/yamato_pokemon.zip) | | 1100 | [<NSFW, click to see>](1100/previews/bikini.png) | [<NSFW, click to see>](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/yamato_pokemon.zip) | | 1000 | [<NSFW, click to see>](1000/previews/bikini.png) | [<NSFW, click to see>](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/yamato_pokemon.zip) | | 900 | [<NSFW, click to see>](900/previews/bikini.png) | [<NSFW, click to see>](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/yamato_pokemon.zip) | | 800 | [<NSFW, click to see>](800/previews/bikini.png) | [<NSFW, click to see>](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/yamato_pokemon.zip) | | 700 | [<NSFW, click to see>](700/previews/bikini.png) | [<NSFW, click to see>](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/yamato_pokemon.zip) | | 600 | [<NSFW, click to see>](600/previews/bikini.png) | [<NSFW, click to see>](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/yamato_pokemon.zip) | | 500 | [<NSFW, click to see>](500/previews/bikini.png) | [<NSFW, click to see>](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/yamato_pokemon.zip) | | 400 | [<NSFW, click to see>](400/previews/bikini.png) | [<NSFW, click to see>](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/yamato_pokemon.zip) | | 300 | [<NSFW, click to see>](300/previews/bikini.png) | [<NSFW, click to see>](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | 
[Download](300/yamato_pokemon.zip) | | 200 | [<NSFW, click to see>](200/previews/bikini.png) | [<NSFW, click to see>](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/yamato_pokemon.zip) | | 100 | [<NSFW, click to see>](100/previews/bikini.png) | [<NSFW, click to see>](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/yamato_pokemon.zip) |
JapGuy/MiroZbirka_v2_650Epochs_RVC_v2
JapGuy
2023-08-17T19:28:57Z
0
0
null
[ "music", "rvc", "miro", "meky", "miroslav", "zbirka", "model", "audio-to-audio", "sk", "cs", "license:openrail", "region:us" ]
audio-to-audio
2023-08-17T18:44:25Z
--- license: openrail language: - sk - cs pipeline_tag: audio-to-audio tags: - music - rvc - miro - meky - miroslav - zbirka - model --- ![image.png](https://ticketstream-images.s3.eu-central-1.amazonaws.com/interpret/2021/02/v77f2imwht_meky560x560.png) # Miro "Meky" Žbirka [SK] (v2) # 650 Epochs - RVC V2 - mangio-crepe - 64 Hop Length Trained on 9 minutes of isolated acapellas using UVR (Voc FT + Reverb HQ) + Audacity to remove parts with double vocals and vocals from others (+Noise Gate) Isolated acapellas from: Zima, Zima V slepych ulickach Ty a Ja Tento Song Strom Snehulak Slavou opity Skuska snov Samozrejmost
jayeshvpatil/a2c-PandaReachDense-v2
jayeshvpatil
2023-08-17T19:19:02Z
1
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "arxiv:2106.13687", "model-index", "region:us" ]
reinforcement-learning
2023-03-27T03:31:31Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -1.63 +/- 0.71 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of an **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ``` Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
mandeepbagga/infy-doc-finetune-test
mandeepbagga
2023-08-17T19:17:44Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-17T16:51:14Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
varunjindaldenstu/lora-trained-xl-colab
varunjindaldenstu
2023-08-17T19:17:06Z
9
1
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-08-17T18:01:04Z
--- license: openrail++ base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of sks dog tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - varunjindaldenstu/lora-trained-xl-colab These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
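As a starting point, here is a minimal inference sketch for these LoRA weights with diffusers; the prompt and sampler settings are illustrative only and not taken from this card.

```python
# Minimal sketch: load the SDXL base model and apply the LoRA weights from this repo.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("varunjindaldenstu/lora-trained-xl-colab")

image = pipe("a photo of sks dog in a bucket", num_inference_steps=25).images[0]
image.save("sks_dog.png")
```

Since the card notes that madebyollin/sdxl-vae-fp16-fix was used during training, swapping in that VAE may help when running fp16 inference.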
voxxer/dqn-SpaceInvadersNoFrameskip-v4
voxxer
2023-08-17T19:12:22Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-17T19:11:47Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 593.50 +/- 213.92 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga voxxer -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga voxxer -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga voxxer ``` ## Hyperparameters ```python OrderedDict([('batch_size', 64), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
RoodraKanwar/falcon-7b-sharded-bf16-finetuned-transactpro
RoodraKanwar
2023-08-17T19:07:35Z
0
0
null
[ "tensorboard", "generated_from_trainer", "base_model:ybelkada/falcon-7b-sharded-bf16", "base_model:finetune:ybelkada/falcon-7b-sharded-bf16", "region:us" ]
null
2023-08-17T18:13:40Z
--- base_model: ybelkada/falcon-7b-sharded-bf16 tags: - generated_from_trainer model-index: - name: falcon-7b-sharded-bf16-finetuned-transactpro results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # falcon-7b-sharded-bf16-finetuned-transactpro This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - training_steps: 320 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
andli28/a2c-PandaReachDense-v2
andli28
2023-08-17T19:04:46Z
3
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "arxiv:2106.13687", "model-index", "region:us" ]
reinforcement-learning
2023-04-19T17:30:28Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -2.52 +/- 0.74 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ``` Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
retrieval-bar/google_flan-t5-large_mbe_hl_passage
retrieval-bar
2023-08-17T19:04:44Z
2
0
peft
[ "peft", "region:us" ]
null
2023-08-17T19:04:42Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
KingKazma/cnn_dailymail_gpt2_lora_500_4_50000_8_e1_s6789_v4_l5_r2
KingKazma
2023-08-17T18:58:12Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-17T18:58:11Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
mywang/sdxl-pokemon-model
mywang
2023-08-17T18:57:04Z
0
1
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2023-08-17T09:41:43Z
--- license: creativeml-openrail-m base_model: stabilityai/stable-diffusion-xl-base-1.0 dataset: lambdalabs/pokemon-blip-captions tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers inference: true --- # Text-to-image finetuning - mywang/sdxl-pokemon-model This pipeline was finetuned from **stabilityai/stable-diffusion-xl-base-1.0** on the **lambdalabs/pokemon-blip-captions** dataset. Below are some example images generated with the finetuned pipeline using the following prompt: a cute Sundar Pichai creature: ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
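A minimal generation sketch for this finetuned pipeline is given below; a CUDA GPU and fp16 weights are assumed.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "mywang/sdxl-pokemon-model", torch_dtype=torch.float16
).to("cuda")

image = pipe(prompt="a cute Sundar Pichai creature").images[0]
image.save("pokemon.png")
```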
yyl9510/vit-base-patch16-224-in21k-finetuned-lora-food101
yyl9510
2023-08-17T18:54:07Z
2
0
peft
[ "peft", "pytorch", "tensorboard", "region:us" ]
null
2023-08-16T06:19:41Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0 - PEFT 0.5.0.dev0
zarakiquemparte/zarablend-l2-7b-GGML
zarakiquemparte
2023-08-17T18:48:41Z
0
1
null
[ "llama2", "license:other", "region:us" ]
null
2023-08-17T10:29:17Z
--- license: other tags: - llama2 --- Quantized GGML of [Zarablend L2 7b](https://huggingface.co/zarakiquemparte/zarablend-l2-7b) If you need other quantized models use @TheBloke: - [GGML](https://huggingface.co/TheBloke/Zarablend-L2-7B-GGML) - [GPTQ](https://huggingface.co/TheBloke/Zarablend-L2-7B-GPTQ)
zarakiquemparte/zarablend-l2-7b
zarakiquemparte
2023-08-17T18:48:36Z
1,482
10
transformers
[ "transformers", "pytorch", "llama", "text-generation", "llama2", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-17T10:28:36Z
--- license: other tags: - llama2 --- # Model Card: Zarablend L2 7b This model uses [Nous Hermes Llama2 7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) (66%) as a base with [Airoboros L2 7B GPT4 2.0](https://huggingface.co/jondurbin/airoboros-l2-7b-gpt4-2.0) (34%), and the result was then merged with [LimaRP LLama2 7B Lora](https://huggingface.co/lemonilia/limarp-llama2). The merge of the two models (Hermes and Airoboros) was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/merge-cli.py), and the merge of the Lora with the model was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/apply-lora.py). Quantized Model by @TheBloke: - [GGML](https://huggingface.co/TheBloke/Zarablend-L2-7B-GGML) - [GPTQ](https://huggingface.co/TheBloke/Zarablend-L2-7B-GPTQ) Merge illustration: ![illustration](zarablend-merge-illustration.png) ## Usage: Since this is a merge between Nous Hermes, Airoboros and LimaRP, the following instruction formats should work: Alpaca 2: ``` ### Instruction: <prompt> ### Response: <leave a newline blank for model to respond> ``` LimaRP instruction format: ``` <<SYSTEM>> <character card and system prompt> <<USER>> <prompt> <<AIBOT>> <leave a newline blank for model to respond> ``` ## Bias, Risks, and Limitations This model is not intended for supplying factual information or advice in any form. ## Training Details This model is merged and can be reproduced using the tools mentioned above. Please refer to all provided links for extra model-specific details.
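For completeness, a short generation sketch using the Alpaca 2 format above; the sampling settings and the example instruction are illustrative and not taken from this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zarakiquemparte/zarablend-l2-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "### Instruction:\nWrite a short in-character greeting from a ship captain.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```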
CyberHarem/matiere_pokemon
CyberHarem
2023-08-17T18:47:57Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/matiere_pokemon", "license:mit", "region:us" ]
text-to-image
2023-08-17T18:44:17Z
--- license: mit datasets: - CyberHarem/matiere_pokemon pipeline_tag: text-to-image tags: - art --- # Lora of matiere_pokemon This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora. For example, if you want to use the model from step 1500, you need to download `1500/matiere_pokemon.pt` as the embedding and `1500/matiere_pokemon.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters. **The trigger word is `matiere_pokemon`.** These are available steps: | Steps | bikini | free | nude | Download | |--------:|:-------------------------------------------------|:-------------------------------------|:-----------------------------------------------|:-------------------------------------| | 1500 | [<NSFW, click to see>](1500/previews/bikini.png) | ![free-1500](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/matiere_pokemon.zip) | | 1400 | [<NSFW, click to see>](1400/previews/bikini.png) | ![free-1400](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/matiere_pokemon.zip) | | 1300 | [<NSFW, click to see>](1300/previews/bikini.png) | ![free-1300](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/matiere_pokemon.zip) | | 1200 | [<NSFW, click to see>](1200/previews/bikini.png) | ![free-1200](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/matiere_pokemon.zip) | | 1100 | [<NSFW, click to see>](1100/previews/bikini.png) | ![free-1100](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/matiere_pokemon.zip) | | 1000 | [<NSFW, click to see>](1000/previews/bikini.png) | ![free-1000](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/matiere_pokemon.zip) | | 900 | [<NSFW, click to see>](900/previews/bikini.png) | ![free-900](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/matiere_pokemon.zip) | | 800 | [<NSFW, click to see>](800/previews/bikini.png) | ![free-800](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/matiere_pokemon.zip) | | 700 | [<NSFW, click to see>](700/previews/bikini.png) | ![free-700](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/matiere_pokemon.zip) | | 600 | [<NSFW, click to see>](600/previews/bikini.png) | ![free-600](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/matiere_pokemon.zip) | | 500 | [<NSFW, click to see>](500/previews/bikini.png) | ![free-500](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/matiere_pokemon.zip) | | 400 | [<NSFW, click to see>](400/previews/bikini.png) | ![free-400](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/matiere_pokemon.zip) | | 300 | [<NSFW, click to see>](300/previews/bikini.png) | ![free-300](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/matiere_pokemon.zip) | | 200 | [<NSFW, click to see>](200/previews/bikini.png) | 
![free-200](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/matiere_pokemon.zip) | | 100 | [<NSFW, click to see>](100/previews/bikini.png) | ![free-100](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/matiere_pokemon.zip) |
jacksnacks/third_qlora_model_xgen_inst_faq
jacksnacks
2023-08-17T18:44:24Z
1
0
peft
[ "peft", "region:us" ]
null
2023-08-17T18:44:21Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
bigcode/santacoder-ldf
bigcode
2023-08-17T18:41:08Z
192
2
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "custom_code", "arxiv:2308.07124", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-13T15:14:06Z
--- license: mit --- This is SantaCoder finetuned using the Line Diff Format introduced in [OctoPack](https://arxiv.org/abs/2308.07124).
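A minimal loading sketch follows; the exact Line Diff Format prompt layout is described in the OctoPack paper and is not reproduced here, so the prompt below is a plain-completion placeholder.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/santacoder-ldf"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# SantaCoder ships custom modeling code, hence trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```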
fp16-guy/Samaritan_3d_Cartoon_fp16_cleaned
fp16-guy
2023-08-17T18:38:55Z
0
1
null
[ "text-to-image", "region:us" ]
text-to-image
2023-08-17T15:27:38Z
--- pipeline_tag: text-to-image --- Samaritan 3d Cartoon, but fp16/cleaned - smaller size, same result. ======== /// **[**original checkpoint link**](https://civitai.com/models/81270/samaritan-3d-cartoon)** *(all rights to the model belong to PromptSharingSamaritan)* --- *[*grid 01*](https://huggingface.co/datasets/fp16-guy/grids/blob/main/samaritan3dCartoonV3%2001%2020230817161540-111-samaritan3dCartoon_samaritan3dCartoonV3_fp16-Euler%20a-6.png) *(1.99gb version)* *[*grid 02*](https://huggingface.co/datasets/fp16-guy/grids/blob/main/samaritan3dCartoonV3%2002%2020230817161633-111-samaritan3dCartoon_samaritan3dCartoonV3_fp16_no_vae-Euler%20a-6.png) *(1.83gb version - no vae)* *[*grid 03*](https://huggingface.co/datasets/fp16-guy/grids_inp/blob/main/samaritan3dCartoonV3%20inp%2001%2020230817211551-111-samaritan3dCartoon_samaritan3dCartoonV3_fp16-Euler%20a-5.5.png) *(1.99gb inpainting version)* *[*grid 04*](https://huggingface.co/datasets/fp16-guy/grids_inp/blob/main/samaritan3dCartoonV3%20inp%2002%2020230817211710-111-samaritan3dCartoon_samaritan3dCartoonV3_fp16_no_vae-Euler%20a-5.5.png) *(1.83gb inpainting version - no vae)*
Francesco-A/ppo-Pyramids-v1
Francesco-A
2023-08-17T18:35:42Z
7
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "license:apache-2.0", "region:us" ]
reinforcement-learning
2023-08-17T18:17:33Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids license: apache-2.0 --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Watch the Agent play You can watch the agent playing directly in your browser Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids Step 1: Find the model_id: Francesco-A/ppo-Pyramids-v1 Step 2: Select the .nn /.onnx file Click on Watch the agent play ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Training hyperparameters ```python behaviors: Pyramids: trainer_type: ppo hyperparameters: batch_size: 128 buffer_size: 2048 learning_rate: 0.0003 beta: 0.01 epsilon: 0.2 lambd: 0.95 num_epoch: 3 learning_rate_schedule: linear network_settings: normalize: false hidden_units: 512 num_layers: 2 vis_encode_type: simple reward_signals: extrinsic: gamma: 0.99 strength: 1.0 rnd: gamma: 0.99 strength: 0.01 network_settings: hidden_units: 64 num_layers: 3 learning_rate: 0.0001 keep_checkpoints: 5 max_steps: 1000000 time_horizon: 128 summary_freq: 30000 ``` ## Training details | Step | Time Elapsed | Mean Reward | Std of Reward | Status | |---------|--------------|-------------|---------------|-----------| | 30000 | 59.481 s | -1.000 | 0.000 | Training | | 60000 | 118.648 s | -0.798 | 0.661 | Training | | 90000 | 180.684 s | -0.701 | 0.808 | Training | | 120000 | 240.734 s | -0.931 | 0.373 | Training | | 150000 | 300.978 s | -0.851 | 0.588 | Training | | 180000 | 360.137 s | -0.934 | 0.361 | Training | | 210000 | 424.326 s | -1.000 | 0.000 | Training | | 240000 | 484.774 s | -0.849 | 0.595 | Training | | 270000 | 546.089 s | -0.377 | 1.029 | Training | | 300000 | 614.797 s | -0.735 | 0.689 | Training | | 330000 | 684.241 s | -0.926 | 0.405 | Training | | 360000 | 745.790 s | -0.819 | 0.676 | Training | | 390000 | 812.573 s | -0.715 | 0.755 | Training | | 420000 | 877.836 s | -0.781 | 0.683 | Training | | 450000 | 944.423 s | -0.220 | 1.114 | Training | | 480000 | 1010.918 s | -0.484 | 0.962 | Training | | 510000 | 1074.058 s | -0.003 | 1.162 | Training | | 540000 | 1138.848 s | -0.021 | 1.222 | Training | | 570000 | 1204.326 s | 0.384 | 1.231 | Training | | 600000 | 1276.488 s | 0.690 | 1.174 | Training | | 630000 | 1345.297 s | 0.943 | 1.058 | Training | | 660000 | 1412.791 s | 1.014 | 1.043 | Training | | 690000 | 1482.712 s | 0.927 | 1.054 | Training | | 720000 | 1548.726 s | 0.900 | 1.128 | Training | | 750000 | 1618.284 s | 1.379 | 0.701 | Training | | 780000 | 1692.080 s | 1.567 | 0.359 | Training | | 810000 | 1762.159 s | 1.475 | 0.567 | Training | | 840000 | 1832.166 s | 1.438 | 0.648 | Training | | 870000 | 1907.191 s | 1.534 | 0.536 | Training | | 900000 | 1977.521 s | 1.552 | 0.478 | Training | | 930000 | 2051.259 s | 1.458 | 0.633 | Training | | 960000 | 2126.498 s | 1.545 | 0.586 | Training | | 990000 | 2198.591 s | 1.565 | 0.591 | Training |
magnustragardh/marian-finetuned-kde4-en-to-fr
magnustragardh
2023-08-17T18:33:40Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "base_model:Helsinki-NLP/opus-mt-en-fr", "base_model:finetune:Helsinki-NLP/opus-mt-en-fr", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-08-16T19:09:14Z
--- license: apache-2.0 base_model: Helsinki-NLP/opus-mt-en-fr tags: - translation - generated_from_trainer datasets: - kde4 metrics: - bleu model-index: - name: marian-finetuned-kde4-en-to-fr results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: kde4 type: kde4 config: en-fr split: train args: en-fr metrics: - name: Bleu type: bleu value: 52.87878984885333 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.8556 - Bleu: 52.8788 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
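A minimal usage sketch with the transformers pipeline API; the input sentence is illustrative.

```python
from transformers import pipeline

translator = pipeline("translation", model="magnustragardh/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads"))
```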
macoron/ggml-mpt-7b-chat
macoron
2023-08-17T18:26:51Z
0
1
null
[ "license:cc-by-nc-sa-4.0", "region:us" ]
null
2023-08-17T18:12:14Z
--- license: cc-by-nc-sa-4.0 ---
jelena06/q-FrozenLake-v1-4x4-noSlippery
jelena06
2023-08-17T18:26:09Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-08-17T18:26:06Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="jelena06/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
BenjaminOcampo/model-contrastive-bert__trained-in-ishate__seed-42
BenjaminOcampo
2023-08-17T18:25:19Z
3
0
transformers
[ "transformers", "bert", "text-classification", "en", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-17T18:24:29Z
--- language: en --- # Model Card for BenjaminOcampo/model-contrastive-bert__trained-in-ishate__seed-42 <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** BenjaminOcampo - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** en - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/huggingface/huggingface_hub - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ### How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
pneubauer/basic-a2c-PandaReachDense-v2
pneubauer
2023-08-17T18:10:23Z
1
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "arxiv:2106.13687", "model-index", "region:us" ]
reinforcement-learning
2023-02-03T14:41:31Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -2.14 +/- 0.64 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ``` Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
BenjaminOcampo/model-contrastive-bert__trained-in-ishate__seed-3
BenjaminOcampo
2023-08-17T18:10:19Z
5
0
transformers
[ "transformers", "bert", "text-classification", "en", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-17T18:09:31Z
--- language: en --- # Model Card for BenjaminOcampo/model-contrastive-bert__trained-in-ishate__seed-3 <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** BenjaminOcampo - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** en - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/huggingface/huggingface_hub - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ### How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bigcode/octocoder
bigcode
2023-08-17T18:06:53Z
313
67
transformers
[ "transformers", "pytorch", "safetensors", "code", "text-generation", "dataset:bigcode/commitpackft", "dataset:bigcode/oasst-octopack", "arxiv:2308.07124", "license:bigcode-openrail-m", "model-index", "endpoints_compatible", "region:us" ]
text-generation
2023-07-23T19:03:41Z
--- pipeline_tag: text-generation inference: true widget: - text: 'Question: Please write a function in Python that performs bubble sort.\n\nAnswer:' example_title: Bubble sort group: Python license: bigcode-openrail-m datasets: - bigcode/commitpackft - bigcode/oasst-octopack metrics: - code_eval library_name: transformers tags: - code model-index: - name: OctoCoder results: - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesize Python metrics: - name: pass@1 type: pass@1 value: 46.2 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesize JavaScript metrics: - name: pass@1 type: pass@1 value: 39.2 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesize Java metrics: - name: pass@1 type: pass@1 value: 38.2 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesize Go metrics: - name: pass@1 type: pass@1 value: 30.4 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesize C++ metrics: - name: pass@1 type: pass@1 value: 35.6 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesize Rust metrics: - name: pass@1 type: pass@1 value: 23.4 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesize Average metrics: - name: pass@1 type: pass@1 value: 35.5 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix Python metrics: - name: pass@1 type: pass@1 value: 30.4 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix JavaScript metrics: - name: pass@1 type: pass@1 value: 28.4 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix Java metrics: - name: pass@1 type: pass@1 value: 30.6 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix Go metrics: - name: pass@1 type: pass@1 value: 30.2 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix C++ metrics: - name: pass@1 type: pass@1 value: 26.1 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix Rust metrics: - name: pass@1 type: pass@1 value: 16.5 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix Average metrics: - name: pass@1 type: pass@1 value: 27.0 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain Python metrics: - name: pass@1 type: pass@1 value: 35.1 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain JavaScript metrics: - name: pass@1 type: pass@1 value: 24.5 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain Java metrics: - name: pass@1 type: pass@1 value: 27.3 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain Go metrics: - name: pass@1 type: pass@1 value: 21.1 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain C++ metrics: - name: pass@1 type: pass@1 value: 24.1 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain Rust metrics: - name: pass@1 
type: pass@1 value: 14.8 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain Average metrics: - name: pass@1 type: pass@1 value: 24.5 verified: false --- ![Octopack](https://github.com/bigcode-project/octopack/blob/31f3320f098703c7910e43492c39366eeea68d83/banner.png?raw=true) # Table of Contents 1. [Model Summary](#model-summary) 2. [Use](#use) 3. [Training](#training) 4. [Citation](#citation) # Model Summary > OctoCoder is an instruction tuned model with 15.5B parameters created by finetuning StarCoder on CommitPackFT & OASST as described in the OctoPack paper. - **Repository:** [bigcode-project/octopack](https://github.com/bigcode-project/octopack) - **Paper:** [OctoPack: Instruction Tuning Code Large Language Models](https://arxiv.org/abs/2308.07124) - **Languages:** 80+ Programming languages - **OctoPack🐙🎒:** <table> <tr> <th>Data</t> <th><a href=https://huggingface.co/datasets/bigcode/commitpack>CommitPack</a></th> <td>4TB of GitHub commits across 350 programming languages</td> </tr> <tr> <th></t> <th><a href=https://huggingface.co/datasets/bigcode/commitpackft>CommitPackFT</a></th> <td>Filtered version of CommitPack for high-quality commit messages that resemble instructions</td> </tr> <tr> <th>Model</t> <th><a href=https://huggingface.co/bigcode/octocoder>OctoCoder</a></th> <td>StarCoder (16B parameters) instruction tuned on CommitPackFT + OASST</td> </tr> <tr> <th></t> <th><a href=https://huggingface.co/bigcode/octogeex>OctoGeeX</a></th> <td>CodeGeeX2 (6B parameters) instruction tuned on CommitPackFT + OASST</td> </tr> <tr> <th>Evaluation&nbsp;&nbsp;</t> <th><a href=https://huggingface.co/datasets/bigcode/humanevalpack>HumanEvalPack</a></th> <td>Extension of OpenAI's HumanEval to cover 3 scenarios across 6 languages</td> </tr> </table> # Use ## Intended use The model follows instructions provided in the input. 
You should always preface your input with "Question: " and finish it with "Answer:", for example: "Question: Please write a function in Python that performs bubble sort.\n\nAnswer:" **Feel free to share your generations in the Community tab!** ## Generation ```python # pip install -q transformers from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "bigcode/octocoder" device = "cuda" # for GPU usage or "cpu" for CPU usage tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device) inputs = tokenizer.encode("Question: Please write a function in Python that performs bubble sort.\n\nAnswer:", return_tensors="pt").to(device) outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` # Training ## Model - **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective - **Steps:** 250k pretraining & 30 instruction tuning - **Pretraining tokens:** 1 trillion pretraining & 2M instruction tuning - **Precision:** bfloat16 ## Hardware - **Pretraining:** - **GPUs:** 512 Tesla A100 - **Training time:** 24 days - **Instruction tuning:** - **GPUs:** 8 Tesla A100 - **Training time:** 4 hours ## Software - **Orchestration:** [Megatron-LM/Transformers](https://github.com/bigcode-project/octopack#training) - **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch) # Citation ```bibtex @article{muennighoff2023octopack, title={OctoPack: Instruction Tuning Code Large Language Models}, author={Niklas Muennighoff and Qian Liu and Armel Zebaze and Qinkai Zheng and Binyuan Hui and Terry Yue Zhuo and Swayam Singh and Xiangru Tang and Leandro von Werra and Shayne Longpre}, journal={arXiv preprint arXiv:2308.07124}, year={2023} } ```
TheKOG/vit-gpt2-verifycode-caption
TheKOG
2023-08-17T18:02:28Z
114
0
transformers
[ "transformers", "pytorch", "vision-encoder-decoder", "image-text-to-text", "image-to-text", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-to-text
2023-08-17T14:22:16Z
--- pipeline_tag: image-to-text license: apache-2.0 --- ## Usage method: ```python from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer import torch from PIL import Image model = VisionEncoderDecoderModel.from_pretrained("AIris-Channel/vit-gpt2-verifycode-caption") feature_extractor = ViTImageProcessor.from_pretrained("AIris-Channel/vit-gpt2-verifycode-caption") tokenizer = AutoTokenizer.from_pretrained("AIris-Channel/vit-gpt2-verifycode-caption") device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) max_length = 16 num_beams = 4 gen_kwargs = {"max_length": max_length, "num_beams": num_beams} def predict_step(image_paths): images = [] for image_path in image_paths: i_image = Image.open(image_path) if i_image.mode != "RGB": i_image = i_image.convert(mode="RGB") images.append(i_image) pixel_values = feature_extractor(images=images, return_tensors="pt").pixel_values pixel_values = pixel_values.to(device) output_ids = model.generate(pixel_values, **gen_kwargs) preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True) preds = [pred.strip() for pred in preds] return preds pred=predict_step(['ZZZTVESE.jpg']) print(pred) #zzztvese ```
dirichletian/speecht5_tts_voxpopuli_nl_three
dirichletian
2023-08-17T18:00:22Z
77
0
transformers
[ "transformers", "pytorch", "speecht5", "text-to-audio", "jjbj", "generated_from_trainer", "nl", "dataset:amharic_parallel", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2023-08-17T17:25:32Z
--- language: - nl license: mit base_model: microsoft/speecht5_tts tags: - jjbj - generated_from_trainer datasets: - amharic_parallel model-index: - name: SpeechT5 TTS Amh results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SpeechT5 TTS Amh This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the alefa_asr dataset. It achieves the following results on the evaluation set: - Loss: 0.3788 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.4116 | 3.3 | 1000 | 0.3788 | ### Framework versions - Transformers 4.32.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
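A hedged inference sketch following the usual SpeechT5 recipe is given below; the speaker-embedding source and the example sentence are assumptions for illustration, not details stated in this card.

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

model_id = "dirichletian/speecht5_tts_voxpopuli_nl_three"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Speaker x-vector taken from a public dataset purely for illustration.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="hallo, dit is een korte test", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```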
Doctor-Shotgun/Nous-Hermes-Llama2-13b-Limarp-Lora-Merged
Doctor-Shotgun
2023-08-17T17:56:35Z
8
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "llama-2", "en", "license:agpl-3.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-29T17:41:55Z
--- inference: false language: - en library_name: transformers pipeline_tag: text-generation tags: - llama - llama-2 license: agpl-3.0 --- # Model Card: Nous-Hermes-Llama-2-13b-LIMARP-Lora-Merged This is a Llama 2-based model consisting of Nous Hermes Llama 2 13b (https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b) merged with LIMARP Lora (https://huggingface.co/lemonilia/limarp-llama2) using the now-updated standard lora adapter for LIMARP (July 28, 2023). The intended objective was to combine NH-L2's reasoning and instruction-following capabilities with LIMARP's character roleplay capabilities. added_tokens.json was padded with dummy tokens to reach 32 added tokens in order to allow GGML conversion in llama.cpp without error due to vocab size mismatch. ## Usage: Intended to be prompted either with the Alpaca instruction format of the NH-L2 base model: ``` ### Instruction: <prompt> ### Response: <leave a newline blank for model to respond> ``` Or the LIMARP lora instruction format: ``` <<SYSTEM>> <character card and system prompt> <<USER>> <prompt> <<AIBOT>> <leave a newline blank for model to respond> ``` ## Bias, Risks, and Limitations The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model. It is not intended for supplying factual information or advice in any form. ## Training Details This model is a merge. Please refer to the link repositories of the base model and lora for details.
aviroes/whisper-small-fr
aviroes
2023-08-17T17:47:40Z
75
0
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-17T09:19:16Z
--- license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-small-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-fr This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5417 - Wer: 0.2295 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6.25e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 3000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3362 | 0.16 | 100 | 0.5329 | 0.4774 | | 0.2786 | 0.32 | 200 | 0.5236 | 0.4494 | | 0.2399 | 0.48 | 300 | 0.5163 | 0.3599 | | 0.1602 | 0.64 | 400 | 0.5413 | 0.3265 | | 0.221 | 0.8 | 500 | 0.5354 | 0.3384 | | 0.4037 | 0.96 | 600 | 0.5186 | 0.2662 | | 0.1617 | 1.12 | 700 | 0.5274 | 0.3222 | | 0.1656 | 1.28 | 800 | 0.5151 | 0.2349 | | 0.1786 | 1.44 | 900 | 0.5141 | 0.2640 | | 0.1772 | 1.6 | 1000 | 0.5169 | 0.2683 | | 0.1647 | 1.76 | 1100 | 0.5031 | 0.2403 | | 0.1486 | 1.92 | 1200 | 0.5036 | 0.2522 | | 0.074 | 2.08 | 1300 | 0.5044 | 0.2425 | | 0.0683 | 2.24 | 1400 | 0.5044 | 0.3103 | | 0.0692 | 2.4 | 1500 | 0.5035 | 0.3114 | | 0.0601 | 2.56 | 1600 | 0.5127 | 0.3114 | | 0.0717 | 2.72 | 1700 | 0.5090 | 0.2403 | | 0.0661 | 2.88 | 1800 | 0.5071 | 0.2381 | | 0.0301 | 3.04 | 1900 | 0.5176 | 0.2457 | | 0.0305 | 3.2 | 2000 | 0.5171 | 0.2575 | | 0.0241 | 3.36 | 2100 | 0.5209 | 0.2371 | | 0.0208 | 3.52 | 2200 | 0.5247 | 0.2403 | | 0.0246 | 3.68 | 2300 | 0.5303 | 0.2392 | | 0.0217 | 3.84 | 2400 | 0.5255 | 0.2295 | | 0.0317 | 4.0 | 2500 | 0.5323 | 0.2274 | | 0.0154 | 4.16 | 2600 | 0.5392 | 0.2328 | | 0.0217 | 4.32 | 2700 | 0.5395 | 0.2295 | | 0.0204 | 4.48 | 2800 | 0.5412 | 0.2295 | | 0.0174 | 4.64 | 2900 | 0.5410 | 0.2328 | | 0.0103 | 4.8 | 3000 | 0.5417 | 0.2295 | ### Framework versions - Transformers 4.32.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
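A minimal transcription sketch with the transformers pipeline API; the audio path is illustrative.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="aviroes/whisper-small-fr")
print(asr("sample_fr.wav")["text"])
```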
SUPERSOKOL/distilbert-base-uncased-finetuned-imdb
SUPERSOKOL
2023-08-17T17:44:12Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-08-16T18:52:46Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4789 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6931 | 1.0 | 157 | 2.5545 | | 2.5816 | 2.0 | 314 | 2.4412 | | 2.5348 | 3.0 | 471 | 2.4586 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
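A minimal usage sketch with the transformers fill-mask pipeline; the masked sentence is illustrative.

```python
from transformers import pipeline

mask_filler = pipeline("fill-mask", model="SUPERSOKOL/distilbert-base-uncased-finetuned-imdb")
for pred in mask_filler("This is a great [MASK]."):
    print(f"{pred['sequence']}  (score: {pred['score']:.3f})")
```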
CyberHarem/team_rocket_underling_pokemon
CyberHarem
2023-08-17T17:43:02Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/team_rocket_underling_pokemon", "license:mit", "region:us" ]
text-to-image
2023-08-17T17:39:37Z
--- license: mit datasets: - CyberHarem/team_rocket_underling_pokemon pipeline_tag: text-to-image tags: - art --- # Lora of team_rocket_underling_pokemon This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora. For example, if you want to use the model from step 1500, you need to download `1500/team_rocket_underling_pokemon.pt` as the embedding and `1500/team_rocket_underling_pokemon.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters. **The trigger word is `team_rocket_underling_pokemon`.** These are available steps: | Steps | bikini | free | nude | Download | |--------:|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:---------------------------------------------------| | 1500 | ![bikini-1500](1500/previews/bikini.png) | ![free-1500](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/team_rocket_underling_pokemon.zip) | | 1400 | ![bikini-1400](1400/previews/bikini.png) | ![free-1400](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/team_rocket_underling_pokemon.zip) | | 1300 | ![bikini-1300](1300/previews/bikini.png) | ![free-1300](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/team_rocket_underling_pokemon.zip) | | 1200 | ![bikini-1200](1200/previews/bikini.png) | ![free-1200](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/team_rocket_underling_pokemon.zip) | | 1100 | ![bikini-1100](1100/previews/bikini.png) | ![free-1100](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/team_rocket_underling_pokemon.zip) | | 1000 | ![bikini-1000](1000/previews/bikini.png) | ![free-1000](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/team_rocket_underling_pokemon.zip) | | 900 | ![bikini-900](900/previews/bikini.png) | ![free-900](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/team_rocket_underling_pokemon.zip) | | 800 | ![bikini-800](800/previews/bikini.png) | ![free-800](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/team_rocket_underling_pokemon.zip) | | 700 | ![bikini-700](700/previews/bikini.png) | ![free-700](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/team_rocket_underling_pokemon.zip) | | 600 | ![bikini-600](600/previews/bikini.png) | ![free-600](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/team_rocket_underling_pokemon.zip) | | 500 | ![bikini-500](500/previews/bikini.png) | ![free-500](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/team_rocket_underling_pokemon.zip) | | 400 | ![bikini-400](400/previews/bikini.png) | ![free-400](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/team_rocket_underling_pokemon.zip) | | 300 | ![bikini-300](300/previews/bikini.png) | ![free-300](300/previews/free.png) | [<NSFW, click to 
see>](300/previews/nude.png) | [Download](300/team_rocket_underling_pokemon.zip) | | 200 | ![bikini-200](200/previews/bikini.png) | ![free-200](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/team_rocket_underling_pokemon.zip) | | 100 | ![bikini-100](100/previews/bikini.png) | ![free-100](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/team_rocket_underling_pokemon.zip) |
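For illustration, a hedged loading sketch with diffusers — this assumes the step-1500 `team_rocket_underling_pokemon.safetensors` and `.pt` files have been downloaded locally and are compatible with diffusers' generic LoRA and textual-inversion loaders, and the SD 1.5 base checkpoint is an assumption (HCP-Diffusion outputs may require conversion first):

```python
# Hedged sketch: base model choice and file compatibility are assumptions, not part of this repo.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# safetensors file from the chosen step acts as the LoRA weights
pipe.load_lora_weights(".", weight_name="team_rocket_underling_pokemon.safetensors")
# pt file acts as the embedding bound to the trigger word
pipe.load_textual_inversion("team_rocket_underling_pokemon.pt", token="team_rocket_underling_pokemon")

image = pipe("team_rocket_underling_pokemon, 1girl, solo", num_inference_steps=30).images[0]
image.save("preview.png")
```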
zarakiquemparte/beluga-limarp-7b
zarakiquemparte
2023-08-17T17:36:56Z
11
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "llama2", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-05T21:54:51Z
--- license: other tags: - llama2 --- # Model Card: Stable Beluga LimaRP 7b This is a Llama 2 model that uses [Stable Beluga 7b](https://huggingface.co/stabilityai/StableBeluga-7B) as a base, merged with the [LimaRP Llama2 7B](https://huggingface.co/lemonilia/limarp-llama2) LoRA. The LoRA was merged into the base model with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/apply-lora.py). ## Bias, Risks, and Limitations This model is not intended for supplying factual information or advice in any form. ## Training Details This model is a merge and can be reproduced using the tools mentioned above. Please refer to all provided links for extra model-specific details.
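For readers who want to reproduce a comparable merge without the linked script, a rough `peft` sketch follows. This is an approximation only — the adapter repo id, its file layout, and the output path are assumptions:

```python
# Approximate reproduction with peft; the linked apply-lora.py script may differ in details.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "stabilityai/StableBeluga-7B"
lora_id = "lemonilia/limarp-llama2"  # assumed adapter location/layout

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
model = PeftModel.from_pretrained(base, lora_id)  # attach the LoRA adapter
model = model.merge_and_unload()                  # fold LoRA weights into the base model

tokenizer = AutoTokenizer.from_pretrained(base_id)
model.save_pretrained("beluga-limarp-7b-merged")
tokenizer.save_pretrained("beluga-limarp-7b-merged")
```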
zarakiquemparte/zaramix-l2-7b
zarakiquemparte
2023-08-17T17:36:13Z
5
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "llama2", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-15T21:24:39Z
--- license: other tags: - llama2 --- # Model Card: Zaramix L2 7b This model uses [Nous Hermes Llama2 7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) (72%) as a base with [Stable Beluga 7b](https://huggingface.co/stabilityai/StableBeluga-7B) (28%), and the result of this merge was merged with the [LimaRP Llama2 7B LoRA](https://huggingface.co/lemonilia/limarp-llama2). The merge of the two models (Hermes and Stable Beluga) was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/merge-cli.py), and the LoRA was merged into the result with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/apply-lora.py). Merge illustration: ![illustration](zaramix-merge-illustration.png) ## Usage: Since this is a merge between Nous Hermes, Stable Beluga and LimaRP, the following instruction formats should work: Alpaca 2: ``` ### Instruction: <prompt> ### Response: <leave a newline blank for model to respond> ``` LimaRP instruction format: ``` <<SYSTEM>> <character card and system prompt> <<USER>> <prompt> <<AIBOT>> <leave a newline blank for model to respond> ``` ## Bias, Risks, and Limitations This model is not intended for supplying factual information or advice in any form. ## Training Details This model is a merge and can be reproduced using the tools mentioned above. Please refer to all provided links for extra model-specific details.
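As a usage illustration, a minimal transformers generation sketch using the Alpaca 2 format above (the prompt and sampling values are examples, not recommendations from the author):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zarakiquemparte/zaramix-l2-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "### Instruction:\nWrite a short greeting in the voice of a friendly innkeeper.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.9)
# Print only the newly generated tokens after the prompt
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```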
zarakiquemparte/hermesboros-limarp-7b
zarakiquemparte
2023-08-17T17:35:36Z
4
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-29T15:20:38Z
--- license: other --- # Hermesboros Limarp This model uses Nous Hermes Llama 2 7b as a base, merged with the Airoboros L2 7B GPT4 1.4.1 PEFT adapter and the LimaRP Llama2 7B LoRA. ### Base Model https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b ### Pefts Airoboros L2 7B GPT4 1.4.1: https://huggingface.co/jondurbin/airoboros-l2-7b-gpt4-1.4.1-peft LimaRP Llama2: https://huggingface.co/lemonilia/limarp-llama2
zarakiquemparte/hermeslimarp-l2-7b
zarakiquemparte
2023-08-17T17:34:38Z
6
5
transformers
[ "transformers", "pytorch", "llama", "text-generation", "llama-2", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-27T12:54:25Z
--- license: other tags: - llama-2 --- # Model Card: Hermes Limarp L2 7b This model uses [Nous Hermes Llama2 7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) as a base, merged with the [LimaRP Llama2 7B](https://huggingface.co/lemonilia/limarp-llama2) LoRA. The LoRA was merged into the base model with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/apply-lora.py). Quantized models by @TheBloke: - [GGML](https://huggingface.co/TheBloke/HermesLimaRP-L2-7B-GGML) - [GPTQ](https://huggingface.co/TheBloke/HermesLimaRP-L2-7B-GPTQ) ## Usage: Since this is a merge between Nous Hermes and LimaRP, the following instruction formats should work: Alpaca 2: ``` ### Instruction: <prompt> ### Response: <leave a newline blank for model to respond> ``` LimaRP instruction format: ``` <<SYSTEM>> <character card and system prompt> <<USER>> <prompt> <<AIBOT>> <leave a newline blank for model to respond> ``` ## Bias, Risks, and Limitations This model is not intended for supplying factual information or advice in any form. ## Training Details This model is a merge and can be reproduced using the tools mentioned above. Please refer to all provided links for extra model-specific details.
Surya-Teja-Menta/q-FrozenLake-v1-4x4-noSlippery
Surya-Teja-Menta
2023-08-17T17:22:39Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-08-17T17:22:37Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Surya-Teja-Menta/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
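Note that `load_from_hub` in the snippet above is a helper defined in the course notebook, not a published package function. A minimal stand-in, assuming the pickle holds a dict with `env_id` and `qtable` keys (the usual course convention), could look like this:

```python
# Hypothetical stand-in for the course's load_from_hub helper; dict keys are assumed.
import pickle
import gymnasium as gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub("Surya-Teja-Menta/q-FrozenLake-v1-4x4-noSlippery", "q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)

state, _ = env.reset()
done = False
while not done:
    action = int(model["qtable"][state].argmax())  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```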
ziweihe/fourier-transformer-cnndm
ziweihe
2023-08-17T17:07:24Z
0
1
fairseq
[ "fairseq", "summarization", "en", "dataset:cnn_dailymail", "license:apache-2.0", "region:us" ]
summarization
2023-08-17T13:17:30Z
--- license: apache-2.0 datasets: - cnn_dailymail language: - en metrics: - rouge library_name: fairseq pipeline_tag: summarization --- <!-- Provide a quick summary of what the model is/does. --> Checkpoint for the paper [Fourier Transformer: Fast Long Range Modeling by Removing Sequence Redundancy with FFT Operator](https://aclanthology.org/2023.findings-acl.570.pdf): FourierBart-large fine-tuned on CNN-DailyMail. ROUGE-1/2/L scores on the prediction (test) set: 44.76/21.55/41.34.
CyberHarem/lajournee_pokemon
CyberHarem
2023-08-17T16:59:33Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/lajournee_pokemon", "license:mit", "region:us" ]
text-to-image
2023-08-17T16:53:46Z
--- license: mit datasets: - CyberHarem/lajournee_pokemon pipeline_tag: text-to-image tags: - art --- # Lora of lajournee_pokemon This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora. For example, if you want to use the model from step 1500, you need to download `1500/lajournee_pokemon.pt` as the embedding and `1500/lajournee_pokemon.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters. **The trigger word is `lajournee_pokemon`.** These are available steps: | Steps | pattern_1 | bikini | free | nude | Download | |--------:|:----------------------------------------------------|:-------------------------------------------------|:-------------------------------------|:-----------------------------------------------|:---------------------------------------| | 1500 | [<NSFW, click to see>](1500/previews/pattern_1.png) | [<NSFW, click to see>](1500/previews/bikini.png) | ![free-1500](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/lajournee_pokemon.zip) | | 1400 | [<NSFW, click to see>](1400/previews/pattern_1.png) | [<NSFW, click to see>](1400/previews/bikini.png) | ![free-1400](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/lajournee_pokemon.zip) | | 1300 | [<NSFW, click to see>](1300/previews/pattern_1.png) | [<NSFW, click to see>](1300/previews/bikini.png) | ![free-1300](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/lajournee_pokemon.zip) | | 1200 | [<NSFW, click to see>](1200/previews/pattern_1.png) | [<NSFW, click to see>](1200/previews/bikini.png) | ![free-1200](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/lajournee_pokemon.zip) | | 1100 | [<NSFW, click to see>](1100/previews/pattern_1.png) | [<NSFW, click to see>](1100/previews/bikini.png) | ![free-1100](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/lajournee_pokemon.zip) | | 1000 | [<NSFW, click to see>](1000/previews/pattern_1.png) | [<NSFW, click to see>](1000/previews/bikini.png) | ![free-1000](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/lajournee_pokemon.zip) | | 900 | [<NSFW, click to see>](900/previews/pattern_1.png) | [<NSFW, click to see>](900/previews/bikini.png) | ![free-900](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/lajournee_pokemon.zip) | | 800 | [<NSFW, click to see>](800/previews/pattern_1.png) | [<NSFW, click to see>](800/previews/bikini.png) | ![free-800](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/lajournee_pokemon.zip) | | 700 | [<NSFW, click to see>](700/previews/pattern_1.png) | [<NSFW, click to see>](700/previews/bikini.png) | ![free-700](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/lajournee_pokemon.zip) | | 600 | [<NSFW, click to see>](600/previews/pattern_1.png) | [<NSFW, click to see>](600/previews/bikini.png) | ![free-600](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/lajournee_pokemon.zip) | 
| 500 | [<NSFW, click to see>](500/previews/pattern_1.png) | [<NSFW, click to see>](500/previews/bikini.png) | ![free-500](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/lajournee_pokemon.zip) | | 400 | [<NSFW, click to see>](400/previews/pattern_1.png) | [<NSFW, click to see>](400/previews/bikini.png) | ![free-400](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/lajournee_pokemon.zip) | | 300 | [<NSFW, click to see>](300/previews/pattern_1.png) | [<NSFW, click to see>](300/previews/bikini.png) | ![free-300](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/lajournee_pokemon.zip) | | 200 | [<NSFW, click to see>](200/previews/pattern_1.png) | [<NSFW, click to see>](200/previews/bikini.png) | ![free-200](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/lajournee_pokemon.zip) | | 100 | [<NSFW, click to see>](100/previews/pattern_1.png) | [<NSFW, click to see>](100/previews/bikini.png) | ![free-100](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/lajournee_pokemon.zip) |
ganchengguang/Yoko_13B_Japanese_QLoRA
ganchengguang
2023-08-17T16:51:41Z
10
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "LLaMA2", "Japanese", "LLM", "ja", "en", "zh", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-17T16:26:52Z
--- license: mit language: - ja - en - zh tags: - LLaMA2 - Japanese - LLM --- This model is trained with the [llm-japanese-dataset](https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset) dataset, using a subset of roughly 50,000 chat samples and 280,000 non-chat samples, with improved performance in Chinese and Japanese. QLoRA was used to fine-tune the vanilla [Llama-2-13b-chat-hf](https://huggingface.co/NousResearch/Llama-2-13b-chat-hf), and you can use test.py to test the model. ### Recommended generation parameters: * temperature: 0.5~0.7 * top p: 0.65~1.0 * top k: 30~50 * repeat penalty: 1.03~1.17 Contributed by Yokohama National University, Mori Lab.
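A minimal generation sketch applying the recommended parameters with transformers (the prompt and exact values within each range are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ganchengguang/Yoko_13B_Japanese_QLoRA"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "日本の首都はどこですか?"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.6,         # recommended range 0.5-0.7
    top_p=0.8,               # recommended range 0.65-1.0
    top_k=40,                # recommended range 30-50
    repetition_penalty=1.1,  # recommended range 1.03-1.17
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```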
sl-alex/flash_llama
sl-alex
2023-08-17T16:44:27Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2023-08-17T16:37:08Z
--- license: apache-2.0 --- This repository houses a fork of [`togethercomputer/LLaMA-2-7B-32K`](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K)'s [`modeling_flash_llama.py`](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K/blob/main/modeling_flash_llama.py), with a [fix for padding of attention weights](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K/discussions/17) merged into it.
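For context, loading the upstream checkpoint with `trust_remote_code=True` is what makes transformers pick up a `modeling_flash_llama.py` such as this one; a short sketch (a GPU and the flash-attn dependency are assumed to be available):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "togethercomputer/LLaMA-2-7B-32K"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# trust_remote_code tells transformers to import the repository's modeling_flash_llama.py
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, torch_dtype=torch.float16
)
```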
CyberHarem/mache_pokemon
CyberHarem
2023-08-17T16:34:15Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/mache_pokemon", "license:mit", "region:us" ]
text-to-image
2023-08-17T16:29:57Z
--- license: mit datasets: - CyberHarem/mache_pokemon pipeline_tag: text-to-image tags: - art --- # Lora of mache_pokemon This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora. For example, if you want to use the model from step 1500, you need to download `1500/mache_pokemon.pt` as the embedding and `1500/mache_pokemon.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters. **The trigger word is `mache_pokemon`.** These are available steps: | Steps | pattern_1 | bikini | free | nude | Download | |--------:|:----------------------------------------------------|:-----------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------| | 1500 | [<NSFW, click to see>](1500/previews/pattern_1.png) | ![bikini-1500](1500/previews/bikini.png) | [<NSFW, click to see>](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/mache_pokemon.zip) | | 1400 | [<NSFW, click to see>](1400/previews/pattern_1.png) | ![bikini-1400](1400/previews/bikini.png) | [<NSFW, click to see>](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/mache_pokemon.zip) | | 1300 | [<NSFW, click to see>](1300/previews/pattern_1.png) | ![bikini-1300](1300/previews/bikini.png) | [<NSFW, click to see>](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/mache_pokemon.zip) | | 1200 | [<NSFW, click to see>](1200/previews/pattern_1.png) | ![bikini-1200](1200/previews/bikini.png) | [<NSFW, click to see>](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/mache_pokemon.zip) | | 1100 | [<NSFW, click to see>](1100/previews/pattern_1.png) | ![bikini-1100](1100/previews/bikini.png) | [<NSFW, click to see>](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/mache_pokemon.zip) | | 1000 | [<NSFW, click to see>](1000/previews/pattern_1.png) | ![bikini-1000](1000/previews/bikini.png) | [<NSFW, click to see>](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/mache_pokemon.zip) | | 900 | [<NSFW, click to see>](900/previews/pattern_1.png) | ![bikini-900](900/previews/bikini.png) | [<NSFW, click to see>](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/mache_pokemon.zip) | | 800 | [<NSFW, click to see>](800/previews/pattern_1.png) | ![bikini-800](800/previews/bikini.png) | [<NSFW, click to see>](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/mache_pokemon.zip) | | 700 | [<NSFW, click to see>](700/previews/pattern_1.png) | ![bikini-700](700/previews/bikini.png) | [<NSFW, click to see>](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/mache_pokemon.zip) | | 600 | [<NSFW, click to see>](600/previews/pattern_1.png) | ![bikini-600](600/previews/bikini.png) | [<NSFW, click to see>](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/mache_pokemon.zip) | | 500 | [<NSFW, click to 
see>](500/previews/pattern_1.png) | ![bikini-500](500/previews/bikini.png) | [<NSFW, click to see>](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/mache_pokemon.zip) | | 400 | [<NSFW, click to see>](400/previews/pattern_1.png) | ![bikini-400](400/previews/bikini.png) | [<NSFW, click to see>](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/mache_pokemon.zip) | | 300 | [<NSFW, click to see>](300/previews/pattern_1.png) | ![bikini-300](300/previews/bikini.png) | [<NSFW, click to see>](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/mache_pokemon.zip) | | 200 | [<NSFW, click to see>](200/previews/pattern_1.png) | ![bikini-200](200/previews/bikini.png) | [<NSFW, click to see>](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/mache_pokemon.zip) | | 100 | [<NSFW, click to see>](100/previews/pattern_1.png) | ![bikini-100](100/previews/bikini.png) | [<NSFW, click to see>](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/mache_pokemon.zip) |
habbi/image_captioning
habbi
2023-08-17T16:23:35Z
0
0
null
[ "dataset:jxie/flickr8k", "region:us" ]
null
2023-08-17T16:20:12Z
--- datasets: - jxie/flickr8k ---
yokai-zukan/v3
yokai-zukan
2023-08-17T16:23:20Z
11
1
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-08-17T15:31:05Z
--- license: openrail++ base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: usoyokai tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - yokai-zukan/v3 These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on usoyokai using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
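A hedged inference sketch with diffusers, assuming the repo's LoRA weights load through the generic `load_lora_weights` API and using the same fp16-fix VAE mentioned above (the prompt is only an example around the `usoyokai` instance token):

```python
import torch
from diffusers import StableDiffusionXLPipeline, AutoencoderKL

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

pipe.load_lora_weights("yokai-zukan/v3")  # LoRA adaption weights from this repo

image = pipe("a usoyokai creature, ink illustration", num_inference_steps=30).images[0]
image.save("usoyokai.png")
```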
VK246/IC_ver6trial_coco_swin_gpt2_50Apc_1e
VK246
2023-08-17T16:20:13Z
45
0
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:coco", "base_model:VK246/IC_ver6d_coco_swin_gpt2_50Bpc_1e", "base_model:finetune:VK246/IC_ver6d_coco_swin_gpt2_50Bpc_1e", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-08-17T16:13:28Z
--- base_model: VK246/IC_ver6d_coco_swin_gpt2_50Bpc_1e tags: - generated_from_trainer datasets: - coco metrics: - rouge model-index: - name: IC_ver6trial_coco_swin_gpt2_50Apc_1e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IC_ver6trial_coco_swin_gpt2_50Apc_1e This model is a fine-tuned version of [VK246/IC_ver6d_coco_swin_gpt2_50Bpc_1e](https://huggingface.co/VK246/IC_ver6d_coco_swin_gpt2_50Bpc_1e) on the coco dataset. It achieves the following results on the evaluation set: - Loss: 0.8113 - Cider: 43.6787 - Rouge1: 41.4057 - Rouge2: 16.177 - Rougel: 38.9636 - Rougelsum: 38.8335 - Bleu-1: 43.1153 - Bleu-2: 24.9997 - Bleu-3: 15.7558 - Bleu-4: 10.4674 - Gen Len: 11.1124 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 96 - eval_batch_size: 96 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
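Since this is a Swin + GPT-2 vision-encoder-decoder captioner, a hedged usage sketch with the `image-to-text` pipeline follows; it assumes the repository ships the matching image processor and tokenizer, and the image path is a placeholder:

```python
from transformers import pipeline

captioner = pipeline("image-to-text", model="VK246/IC_ver6trial_coco_swin_gpt2_50Apc_1e")
print(captioner("example.jpg"))  # placeholder image path; returns [{'generated_text': ...}]
```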
CyberHarem/eureka_pokemon
CyberHarem
2023-08-17T16:13:32Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/eureka_pokemon", "license:mit", "region:us" ]
text-to-image
2023-08-17T16:08:22Z
--- license: mit datasets: - CyberHarem/eureka_pokemon pipeline_tag: text-to-image tags: - art --- # Lora of eureka_pokemon This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora. For example, if you want to use the model from step 1500, you need to download `1500/eureka_pokemon.pt` as the embedding and `1500/eureka_pokemon.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters. **The trigger word is `eureka_pokemon`.** These are available steps: | Steps | pattern_1 | pattern_2 | pattern_3 | bikini | free | nude | Download | |--------:|:----------------------------------------------------|:----------------------------------------------------|:----------------------------------------------------|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------| | 1500 | [<NSFW, click to see>](1500/previews/pattern_1.png) | [<NSFW, click to see>](1500/previews/pattern_2.png) | [<NSFW, click to see>](1500/previews/pattern_3.png) | ![bikini-1500](1500/previews/bikini.png) | ![free-1500](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/eureka_pokemon.zip) | | 1400 | [<NSFW, click to see>](1400/previews/pattern_1.png) | [<NSFW, click to see>](1400/previews/pattern_2.png) | [<NSFW, click to see>](1400/previews/pattern_3.png) | ![bikini-1400](1400/previews/bikini.png) | ![free-1400](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/eureka_pokemon.zip) | | 1300 | [<NSFW, click to see>](1300/previews/pattern_1.png) | [<NSFW, click to see>](1300/previews/pattern_2.png) | [<NSFW, click to see>](1300/previews/pattern_3.png) | ![bikini-1300](1300/previews/bikini.png) | ![free-1300](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/eureka_pokemon.zip) | | 1200 | [<NSFW, click to see>](1200/previews/pattern_1.png) | [<NSFW, click to see>](1200/previews/pattern_2.png) | [<NSFW, click to see>](1200/previews/pattern_3.png) | ![bikini-1200](1200/previews/bikini.png) | ![free-1200](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/eureka_pokemon.zip) | | 1100 | [<NSFW, click to see>](1100/previews/pattern_1.png) | [<NSFW, click to see>](1100/previews/pattern_2.png) | [<NSFW, click to see>](1100/previews/pattern_3.png) | ![bikini-1100](1100/previews/bikini.png) | ![free-1100](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/eureka_pokemon.zip) | | 1000 | [<NSFW, click to see>](1000/previews/pattern_1.png) | [<NSFW, click to see>](1000/previews/pattern_2.png) | [<NSFW, click to see>](1000/previews/pattern_3.png) | ![bikini-1000](1000/previews/bikini.png) | ![free-1000](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/eureka_pokemon.zip) | | 900 | [<NSFW, click to see>](900/previews/pattern_1.png) | [<NSFW, click to see>](900/previews/pattern_2.png) | [<NSFW, click to see>](900/previews/pattern_3.png) | ![bikini-900](900/previews/bikini.png) | ![free-900](900/previews/free.png) | [<NSFW, 
click to see>](900/previews/nude.png) | [Download](900/eureka_pokemon.zip) | | 800 | [<NSFW, click to see>](800/previews/pattern_1.png) | [<NSFW, click to see>](800/previews/pattern_2.png) | [<NSFW, click to see>](800/previews/pattern_3.png) | ![bikini-800](800/previews/bikini.png) | ![free-800](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/eureka_pokemon.zip) | | 700 | [<NSFW, click to see>](700/previews/pattern_1.png) | [<NSFW, click to see>](700/previews/pattern_2.png) | [<NSFW, click to see>](700/previews/pattern_3.png) | ![bikini-700](700/previews/bikini.png) | ![free-700](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/eureka_pokemon.zip) | | 600 | [<NSFW, click to see>](600/previews/pattern_1.png) | [<NSFW, click to see>](600/previews/pattern_2.png) | [<NSFW, click to see>](600/previews/pattern_3.png) | ![bikini-600](600/previews/bikini.png) | ![free-600](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/eureka_pokemon.zip) | | 500 | [<NSFW, click to see>](500/previews/pattern_1.png) | [<NSFW, click to see>](500/previews/pattern_2.png) | [<NSFW, click to see>](500/previews/pattern_3.png) | ![bikini-500](500/previews/bikini.png) | ![free-500](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/eureka_pokemon.zip) | | 400 | [<NSFW, click to see>](400/previews/pattern_1.png) | [<NSFW, click to see>](400/previews/pattern_2.png) | [<NSFW, click to see>](400/previews/pattern_3.png) | ![bikini-400](400/previews/bikini.png) | ![free-400](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/eureka_pokemon.zip) | | 300 | [<NSFW, click to see>](300/previews/pattern_1.png) | [<NSFW, click to see>](300/previews/pattern_2.png) | [<NSFW, click to see>](300/previews/pattern_3.png) | ![bikini-300](300/previews/bikini.png) | ![free-300](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/eureka_pokemon.zip) | | 200 | [<NSFW, click to see>](200/previews/pattern_1.png) | [<NSFW, click to see>](200/previews/pattern_2.png) | [<NSFW, click to see>](200/previews/pattern_3.png) | ![bikini-200](200/previews/bikini.png) | ![free-200](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/eureka_pokemon.zip) | | 100 | [<NSFW, click to see>](100/previews/pattern_1.png) | [<NSFW, click to see>](100/previews/pattern_2.png) | [<NSFW, click to see>](100/previews/pattern_3.png) | ![bikini-100](100/previews/bikini.png) | ![free-100](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/eureka_pokemon.zip) |
Wiam/wav2vec2-base-finetuned-ravdess
Wiam
2023-08-17T15:58:16Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2023-08-16T15:36:37Z
--- license: apache-2.0 base_model: facebook/wav2vec2-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: wav2vec2-base-finetuned-ravdess results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-finetuned-ravdess This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8783 - Accuracy: 0.7535 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 9 | 2.0739 | 0.1562 | | 2.0781 | 2.0 | 18 | 2.0611 | 0.1181 | | 2.0668 | 3.0 | 27 | 2.0308 | 0.2535 | | 2.0429 | 4.0 | 36 | 1.9606 | 0.2604 | | 1.974 | 5.0 | 45 | 1.8449 | 0.2847 | | 1.8594 | 6.0 | 54 | 1.7678 | 0.2917 | | 1.7675 | 7.0 | 63 | 1.7700 | 0.2708 | | 1.6932 | 8.0 | 72 | 1.6049 | 0.3889 | | 1.5656 | 9.0 | 81 | 1.5510 | 0.4444 | | 1.4658 | 10.0 | 90 | 1.4535 | 0.4583 | | 1.4658 | 11.0 | 99 | 1.4101 | 0.4514 | | 1.3843 | 12.0 | 108 | 1.3687 | 0.5 | | 1.3085 | 13.0 | 117 | 1.3333 | 0.5035 | | 1.2264 | 14.0 | 126 | 1.3208 | 0.5208 | | 1.1349 | 15.0 | 135 | 1.3048 | 0.5312 | | 1.0861 | 16.0 | 144 | 1.2428 | 0.5799 | | 0.9836 | 17.0 | 153 | 1.1886 | 0.5799 | | 0.9273 | 18.0 | 162 | 1.1574 | 0.6146 | | 0.8686 | 19.0 | 171 | 1.1356 | 0.6111 | | 0.814 | 20.0 | 180 | 1.1261 | 0.6285 | | 0.814 | 21.0 | 189 | 1.0796 | 0.6007 | | 0.7279 | 22.0 | 198 | 1.0277 | 0.6493 | | 0.6845 | 23.0 | 207 | 1.0408 | 0.6840 | | 0.6283 | 24.0 | 216 | 0.9708 | 0.7153 | | 0.5835 | 25.0 | 225 | 0.9926 | 0.6875 | | 0.5445 | 26.0 | 234 | 1.0126 | 0.6840 | | 0.497 | 27.0 | 243 | 0.9502 | 0.6979 | | 0.4508 | 28.0 | 252 | 0.9432 | 0.7118 | | 0.4331 | 29.0 | 261 | 0.9246 | 0.7014 | | 0.4023 | 30.0 | 270 | 0.9649 | 0.6875 | | 0.4023 | 31.0 | 279 | 0.9114 | 0.7049 | | 0.3924 | 32.0 | 288 | 0.9460 | 0.7118 | | 0.3797 | 33.0 | 297 | 0.9605 | 0.7118 | | 0.3494 | 34.0 | 306 | 0.8505 | 0.7396 | | 0.3195 | 35.0 | 315 | 0.8830 | 0.7188 | | 0.3148 | 36.0 | 324 | 0.9352 | 0.7014 | | 0.2856 | 37.0 | 333 | 0.8551 | 0.7292 | | 0.2831 | 38.0 | 342 | 0.8505 | 0.7326 | | 0.2718 | 39.0 | 351 | 0.8800 | 0.7396 | | 0.2624 | 40.0 | 360 | 0.8991 | 0.7153 | | 0.2624 | 41.0 | 369 | 0.8724 | 0.7465 | | 0.2612 | 42.0 | 378 | 0.9138 | 0.7049 | | 0.2511 | 43.0 | 387 | 0.8914 | 0.7257 | | 0.2324 | 44.0 | 396 | 0.8783 | 0.7535 | | 0.2228 | 45.0 | 405 | 0.9215 | 0.7188 | | 0.2244 | 46.0 | 414 | 0.8904 | 0.7431 | | 0.2192 | 47.0 | 423 | 0.9142 | 0.7326 | | 0.217 | 48.0 | 432 | 0.8891 | 0.7361 | | 0.2146 | 49.0 | 441 | 0.9009 | 0.7326 | | 0.215 | 50.0 | 450 | 0.8994 | 0.7361 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
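A short usage sketch (the audio file path is a placeholder; the returned labels are the RAVDESS emotion classes the model was fine-tuned on):

```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="Wiam/wav2vec2-base-finetuned-ravdess")
print(classifier("speech_sample.wav"))  # placeholder audio file; returns emotion labels with scores
```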
sid/a2c-PandaReachDense-v2
sid
2023-08-17T15:57:26Z
3
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "arxiv:2106.13687", "model-index", "region:us" ]
reinforcement-learning
2023-07-03T05:25:54Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -2.46 +/- 0.86 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ``` Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
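One hedged way to fill in the usage stub above — the checkpoint filename follows the usual `<algo>-<env>.zip` convention and is an assumption here:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is assumed; check the repo's file list if loading fails.
checkpoint = load_from_hub(
    repo_id="sid/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)
```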
nacielo/wav2GPT2MusiNewStricD3E5
nacielo
2023-08-17T15:48:33Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "speech-encoder-decoder", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-17T12:22:40Z
--- base_model: '' tags: - generated_from_trainer metrics: - rouge model-index: - name: wav2GPT2MusiNewStricD3E5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2GPT2MusiNewStricD3E5 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9268 - Rouge1: 31.1418 - Rouge2: 9.8004 - Rougel: 23.2508 - Rougelsum: 23.2708 - Gen Len: 64.93 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.6001 | 1.0 | 1361 | 2.2566 | 22.0988 | 6.1329 | 16.1614 | 16.1192 | 87.75 | | 2.3044 | 2.0 | 2722 | 2.0828 | 26.1764 | 8.6856 | 19.2354 | 19.1702 | 74.46 | | 2.1894 | 3.0 | 4083 | 1.9912 | 29.7982 | 9.264 | 22.2165 | 22.193 | 67.71 | | 2.119 | 4.0 | 5444 | 1.9419 | 30.1668 | 9.2004 | 22.5359 | 22.5969 | 63.31 | | 2.0963 | 5.0 | 6805 | 1.9268 | 31.1418 | 9.8004 | 23.2508 | 23.2708 | 64.93 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.2 - Tokenizers 0.13.3
KingKazma/xsum_gpt2_lora_500_4_50000_8_e2_s6789_v4_l4_r4
KingKazma
2023-08-17T15:25:22Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-17T15:25:18Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
wrice/wavlm-large-timit-punctuation
wrice
2023-08-17T15:23:04Z
28
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "wavlm", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-26T03:13:09Z
--- tags: - generated_from_trainer model-index: - name: wavlm-large-timit-punctuation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wavlm-large-timit-punctuation This model is a fine-tuned version of [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3368 - Wer: 0.2601 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 5.2379 | 1.0 | 500 | 3.1228 | 1.0 | | 2.5847 | 2.01 | 1000 | 1.1550 | 0.9147 | | 1.0034 | 3.01 | 1500 | 0.5856 | 0.5180 | | 0.5868 | 4.02 | 2000 | 0.4238 | 0.4229 | | 0.3892 | 5.02 | 2500 | 0.3356 | 0.3665 | | 0.2926 | 6.02 | 3000 | 0.3196 | 0.3360 | | 0.2294 | 7.03 | 3500 | 0.3046 | 0.3170 | | 0.1976 | 8.03 | 4000 | 0.3032 | 0.3111 | | 0.1644 | 9.04 | 4500 | 0.2946 | 0.2954 | | 0.1574 | 10.04 | 5000 | 0.3211 | 0.2998 | | 0.1391 | 11.04 | 5500 | 0.2986 | 0.2922 | | 0.1124 | 12.05 | 6000 | 0.2948 | 0.2837 | | 0.1003 | 13.05 | 6500 | 0.2928 | 0.2788 | | 0.1031 | 14.06 | 7000 | 0.3230 | 0.2805 | | 0.0901 | 15.06 | 7500 | 0.3081 | 0.2749 | | 0.0842 | 16.06 | 8000 | 0.3075 | 0.2726 | | 0.0809 | 17.07 | 8500 | 0.3215 | 0.2717 | | 0.0747 | 18.07 | 9000 | 0.3272 | 0.2721 | | 0.0735 | 19.08 | 9500 | 0.3242 | 0.2684 | | 0.0631 | 20.08 | 10000 | 0.3216 | 0.2640 | | 0.0632 | 21.08 | 10500 | 0.3149 | 0.2646 | | 0.0625 | 22.09 | 11000 | 0.3196 | 0.2630 | | 0.0611 | 23.09 | 11500 | 0.3244 | 0.2638 | | 0.0532 | 24.1 | 12000 | 0.3271 | 0.2641 | | 0.0503 | 25.1 | 12500 | 0.3368 | 0.2636 | | 0.0534 | 26.1 | 13000 | 0.3393 | 0.2627 | | 0.049 | 27.11 | 13500 | 0.3389 | 0.2626 | | 0.0441 | 28.11 | 14000 | 0.3375 | 0.2605 | | 0.0522 | 29.12 | 14500 | 0.3368 | 0.2601 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.8.2+cu111 - Datasets 1.17.0 - Tokenizers 0.11.6
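A minimal transcription sketch (the audio file path is a placeholder; 16 kHz input is assumed, as usual for WavLM CTC models):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="wrice/wavlm-large-timit-punctuation")
print(asr("speech_sample.wav")["text"])  # placeholder audio file
```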
baoxianJia/distilbert-base-uncased_emotion_ft_0416
baoxianJia
2023-08-17T15:20:59Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-16T16:48:19Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 - precision model-index: - name: distilbert-base-uncased_emotion_ft_0416 results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.94 - name: F1 type: f1 value: 0.9401141292598768 - name: Precision type: precision value: 0.9155632268416785 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased_emotion_ft_0416 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1501 - Accuracy: 0.94 - F1: 0.9401 - Precision: 0.9156 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:| | 0.8008 | 1.0 | 250 | 0.2889 | 0.9135 | 0.9128 | 0.8981 | | 0.2174 | 2.0 | 500 | 0.1820 | 0.935 | 0.9356 | 0.9030 | | 0.1442 | 3.0 | 750 | 0.1626 | 0.937 | 0.9376 | 0.9105 | | 0.1105 | 4.0 | 1000 | 0.1501 | 0.94 | 0.9401 | 0.9156 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
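A short usage sketch with the text-classification pipeline (the example sentence is arbitrary):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="baoxianJia/distilbert-base-uncased_emotion_ft_0416",
    top_k=None,  # return scores for every emotion label
)
print(classifier("I can't wait to see you again!"))
```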
MRM2/ppo-LunarLander-v2
MRM2
2023-08-17T15:19:04Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-17T15:18:40Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 254.12 +/- 16.42 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
bvboca/trainedlora1
bvboca
2023-08-17T15:14:21Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-17T15:14:16Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
linoyts/lora-xl-3d-icon-0.0001-1500-1-5
linoyts
2023-08-17T15:08:14Z
4
1
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-08-17T14:30:39Z
--- license: openrail++ base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: sdxl3dicon style icon tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - LinoyTsaban/lora-xl-3d-icon-0.0001-1500-1-5 These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on sdxl3dicon style icon using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. LoRA for the text encoder was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
unum-cloud/uform-coreml-onnx
unum-cloud
2023-08-17T15:05:49Z
0
7
null
[ "onnx", "en", "de", "es", "fr", "it", "ja", "ko", "pl", "ru", "tr", "zh", "ar", "license:apache-2.0", "region:us" ]
null
2023-08-07T11:48:48Z
--- license: apache-2.0 language: - en - de - es - fr - it - ja - ko - pl - ru - tr - zh - ar --- <h1 align="center">UForm</h1> <h3 align="center"> Multi-Modal Inference Library<br/> For Semantic Search Applications<br/> </h3> --- UForm is a Multi-Modal Inference package, designed to encode Multi-Lingual Texts, Images, and, soon, Audio, Video, and Documents, into a shared vector space! This is the repository of [English](https://huggingface.co/unum-cloud/uform-vl-english/tree/main) and [multilingual](https://huggingface.co/unum-cloud/uform-vl-multilingual) UForm models converted to CoreML MLProgram format. Currently, only __unimodal__ parts of the models are converted. ## Description Each model is separated into two parts: `image-encoder` and `text-encoder`: * English image-encoder: [english.image-encoder.mlpackage](https://huggingface.co/unum-cloud/uform-coreml/blob/main/english.image-encoder.mlpackage.zip) * English text-encoder: [english.text-encoder.mlpackage](https://huggingface.co/unum-cloud/uform-coreml/blob/main/english.text-encoder.mlpackage.zip) * Multilingual image-encoder: [multilingual.image-encoder.mlpackage](https://huggingface.co/unum-cloud/uform-coreml/blob/main/multilingual.image-encoder.mlpackage.zip) * Multilingual text-encoder: [multilingual.text-encoder.mlpackage](https://huggingface.co/unum-cloud/uform-coreml/blob/main/multilingual.text-encoder.mlpackage.zip) * Multilingual-v2 image-encoder: [multilingual-v2.image-encoder.mlpackage](https://huggingface.co/unum-cloud/uform-coreml/blob/main/multilingual-v2.image-encoder.mlpackage.zip) * Multilingual-v2 text-encoder: [multilingual-v2.text-encoder.mlpackage](https://huggingface.co/unum-cloud/uform-coreml/blob/main/multilingual.text-encoder.mlpackage.zip) * Onnx Multilingual image-encoder: [multilingual.image-encoder.onnx](https://huggingface.co/unum-cloud/uform-coreml/blob/main/multilingual.image-encoder.onnx) * Onnx Multilingual text-encoder: [multilingual.text-encoder.onnx](https://huggingface.co/unum-cloud/uform-coreml/blob/main/multilingual.text-encoder.onnx) Each checkpoint is a zip archive with an MLProgram of the corresponding encoder. Text encoders have the following input fields: * `input_ids`: int32 * `attention_mask`: int32 and support flexible batch sizes. Image encoders have a single input field `image`: float32 and support only a batch of a single image (due to a CoreML bug). Both encoders return: * `features`: float32 * `embeddings`: float32 If you want to convert a model with other parameters (e.g. fp16 precision or another batch size range), you can use [convert.py](https://huggingface.co/unum-cloud/uform-coreml/blob/main/convert_model.py).
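A hedged sketch of driving the ONNX multilingual text encoder with onnxruntime, following the input/output contract described above; the sequence length and dummy token ids are placeholders, and real inputs should come from the matching UForm multilingual tokenizer:

```python
# Dummy-input sketch of the ONNX text encoder's contract; sequence length is a placeholder.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("multilingual.text-encoder.onnx")

input_ids = np.zeros((1, 64), dtype=np.int32)
attention_mask = np.ones((1, 64), dtype=np.int32)

outputs = session.run(None, {"input_ids": input_ids, "attention_mask": attention_mask})
for spec, value in zip(session.get_outputs(), outputs):
    print(spec.name, value.shape, value.dtype)  # expected outputs: features and embeddings, float32
```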
h3lmi/mpnet_maxpool2
h3lmi
2023-08-17T14:59:22Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-08-17T10:39:08Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 2365 with parameters: ``` {'batch_size': 32} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 3, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 709, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Vedmani/output
Vedmani
2023-08-17T14:54:39Z
0
0
null
[ "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "region:us" ]
null
2023-08-17T12:22:51Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer metrics: - accuracy model-index: - name: output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2749 - Accuracy: 0.9364 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.3 - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2399 | 1.0 | 2500 | 0.2539 | 0.9037 | | 0.2454 | 2.0 | 5000 | 0.2753 | 0.9064 | | 0.2251 | 3.0 | 7500 | 0.2436 | 0.9167 | | 0.1996 | 4.0 | 10000 | 0.2271 | 0.9246 | | 0.1845 | 5.0 | 12500 | 0.2116 | 0.9269 | | 0.205 | 6.0 | 15000 | 0.1946 | 0.9312 | | 0.1352 | 7.0 | 17500 | 0.2233 | 0.9328 | | 0.1306 | 8.0 | 20000 | 0.2257 | 0.936 | | 0.0849 | 9.0 | 22500 | 0.2582 | 0.9372 | | 0.0609 | 10.0 | 25000 | 0.2749 | 0.9364 | ### Framework versions - Transformers 4.32.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
lrthomps/a2c-PandaReachDense-v2
lrthomps
2023-08-17T14:52:20Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "arxiv:2106.13687", "model-index", "region:us" ]
reinforcement-learning
2023-05-16T19:27:52Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -2.38 +/- 0.85 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ``` Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
CyberHarem/izumi_pokemon
CyberHarem
2023-08-17T14:48:10Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/izumi_pokemon", "license:mit", "region:us" ]
text-to-image
2023-08-17T14:44:49Z
--- license: mit datasets: - CyberHarem/izumi_pokemon pipeline_tag: text-to-image tags: - art --- # Lora of izumi_pokemon This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora. For example, if you want to use the model from step 1500, you need to download `1500/izumi_pokemon.pt` as the embedding and `1500/izumi_pokemon.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters. **The trigger word is `izumi_pokemon`.** These are available steps: | Steps | bikini | free | nude | Download | |--------:|:-------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------| | 1500 | [<NSFW, click to see>](1500/previews/bikini.png) | [<NSFW, click to see>](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/izumi_pokemon.zip) | | 1400 | [<NSFW, click to see>](1400/previews/bikini.png) | [<NSFW, click to see>](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/izumi_pokemon.zip) | | 1300 | [<NSFW, click to see>](1300/previews/bikini.png) | [<NSFW, click to see>](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/izumi_pokemon.zip) | | 1200 | [<NSFW, click to see>](1200/previews/bikini.png) | [<NSFW, click to see>](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/izumi_pokemon.zip) | | 1100 | [<NSFW, click to see>](1100/previews/bikini.png) | [<NSFW, click to see>](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/izumi_pokemon.zip) | | 1000 | [<NSFW, click to see>](1000/previews/bikini.png) | [<NSFW, click to see>](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/izumi_pokemon.zip) | | 900 | [<NSFW, click to see>](900/previews/bikini.png) | [<NSFW, click to see>](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/izumi_pokemon.zip) | | 800 | [<NSFW, click to see>](800/previews/bikini.png) | [<NSFW, click to see>](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/izumi_pokemon.zip) | | 700 | [<NSFW, click to see>](700/previews/bikini.png) | [<NSFW, click to see>](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/izumi_pokemon.zip) | | 600 | [<NSFW, click to see>](600/previews/bikini.png) | [<NSFW, click to see>](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/izumi_pokemon.zip) | | 500 | [<NSFW, click to see>](500/previews/bikini.png) | [<NSFW, click to see>](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/izumi_pokemon.zip) | | 400 | [<NSFW, click to see>](400/previews/bikini.png) | [<NSFW, click to see>](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/izumi_pokemon.zip) | | 300 | [<NSFW, click to see>](300/previews/bikini.png) | [<NSFW, click to see>](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | 
[Download](300/izumi_pokemon.zip) | | 200 | [<NSFW, click to see>](200/previews/bikini.png) | [<NSFW, click to see>](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/izumi_pokemon.zip) | | 100 | [<NSFW, click to see>](100/previews/bikini.png) | [<NSFW, click to see>](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/izumi_pokemon.zip) |
Marco-Cheung/speecht5_finetuned_voxpopuli_de
Marco-Cheung
2023-08-17T14:46:13Z
86
0
transformers
[ "transformers", "pytorch", "tensorboard", "speecht5", "text-to-audio", "generated_from_trainer", "text-to-speech", "dataset:voxpopuli", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-speech
2023-08-17T08:02:16Z
--- license: mit base_model: microsoft/speecht5_tts tags: - generated_from_trainer - text-to-speech datasets: - voxpopuli model-index: - name: speecht5_finetuned_voxpopuli_de results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speecht5_finetuned_voxpopuli_de This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset. It achieves the following results on the evaluation set: - Loss: 0.4657 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5308 | 1.66 | 1000 | 0.4861 | | 0.5124 | 3.33 | 2000 | 0.4732 | | 0.5076 | 4.99 | 3000 | 0.4674 | | 0.5051 | 6.65 | 4000 | 0.4657 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.4 - Tokenizers 0.13.3
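The auto-generated card stops at the training details. The snippet below is a minimal inference sketch, not part of the original card; the speaker-embedding source (CMU Arctic x-vectors) and the German prompt are illustrative assumptions.

```python
# Hedged inference sketch for the fine-tuned German SpeechT5 checkpoint.
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("Marco-Cheung/speecht5_finetuned_voxpopuli_de")
model = SpeechT5ForTextToSpeech.from_pretrained("Marco-Cheung/speecht5_finetuned_voxpopuli_de")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Guten Tag, wie geht es Ihnen?", return_tensors="pt")

# Any 512-dim x-vector works as the speaker embedding; this dataset is just one convenient source.
speaker_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(speaker_dataset[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```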
Inespinoza/ppo-Huggy
Inespinoza
2023-08-17T14:42:38Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-08-17T14:41:48Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: Inespinoza/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
DarkRodry/Reinforce-cartpole-v1
DarkRodry
2023-08-17T14:42:28Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-08-17T14:42:20Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-cartpole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Ignatt/sdxl-db-nachito
Ignatt
2023-08-17T14:42:11Z
1
1
diffusers
[ "diffusers", "text-to-image", "autotrain", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2023-08-17T14:42:08Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: photo of nachito tags: - text-to-image - diffusers - autotrain inference: true --- # DreamBooth trained by AutoTrain Text encoder was not trained.
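The card gives no usage snippet. The sketch below assumes AutoTrain saved the DreamBooth result as LoRA weights inside this repo, which is its usual output for SDXL DreamBooth runs; if the repo instead holds a full pipeline, load it directly with `DiffusionPipeline.from_pretrained("Ignatt/sdxl-db-nachito")`.

```python
# Hedged usage sketch: assumes the repo contains DreamBooth LoRA weights for the SDXL base.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Ignatt/sdxl-db-nachito")

image = pipe(prompt="photo of nachito", num_inference_steps=30).images[0]
image.save("nachito.png")
```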
HangenYuu/xlm-roberta-large-finetuned-hate-implicit
HangenYuu
2023-08-17T14:33:17Z
102
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:joeddav/xlm-roberta-large-xnli", "base_model:finetune:joeddav/xlm-roberta-large-xnli", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-17T10:33:54Z
--- license: mit base_model: joeddav/xlm-roberta-large-xnli tags: - generated_from_trainer model-index: - name: xlm-roberta-large-finetuned-hate-implicit results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-large-finetuned-hate-implicit This model is a fine-tuned version of [joeddav/xlm-roberta-large-xnli](https://huggingface.co/joeddav/xlm-roberta-large-xnli) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 0.6051 - eval_accuracy: 0.7768 - eval_f1: 0.7721 - eval_runtime: 107.6127 - eval_samples_per_second: 39.921 - eval_steps_per_second: 0.316 - epoch: 3.98 - step: 537 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.4 - Tokenizers 0.13.3
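The generated card contains no usage example; a minimal sketch follows. The label names it returns come from the checkpoint's config and are not documented above.

```python
# Minimal usage sketch; label semantics depend on the checkpoint's id2label mapping.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="HangenYuu/xlm-roberta-large-finetuned-hate-implicit",
)
print(classifier("You people are what is wrong with this country."))
```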
Nithin427/llama2-qlora-finetunined-french
Nithin427
2023-08-17T14:32:56Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-17T14:32:49Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
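The card lists only the quantization config. The sketch below is one illustrative way to load the adapter, not part of the original card: the base model name is read from the adapter config rather than hard-coded, and the 4-bit settings mirror the training config listed above.

```python
# Hedged loading sketch: 4-bit settings mirror the training config; base model comes from the adapter config.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftConfig, PeftModel

adapter_id = "Nithin427/llama2-qlora-finetunined-french"
config = PeftConfig.from_pretrained(adapter_id)

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)
```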
vikneshk/finetune_small_imdb_sentiment
vikneshk
2023-08-17T14:24:22Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-17T14:03:25Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetune_small_imdb_sentiment results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - name: Accuracy type: accuracy value: 0.9083333333333333 - name: F1 type: f1 value: 0.9084249084249084 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetune_small_imdb_sentiment This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.2615 - Accuracy: 0.9083 - F1: 0.9084 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
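A minimal usage sketch, not part of the generated card:

```python
from transformers import pipeline

sentiment = pipeline("text-classification", model="vikneshk/finetune_small_imdb_sentiment")
print(sentiment("A slow start, but the last act is genuinely moving."))
```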
CyberHarem/joy_pokemon
CyberHarem
2023-08-17T14:22:33Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/joy_pokemon", "license:mit", "region:us" ]
text-to-image
2023-08-17T14:18:49Z
--- license: mit datasets: - CyberHarem/joy_pokemon pipeline_tag: text-to-image tags: - art --- # Lora of joy_pokemon This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora. For example, if you want to use the model from step 1500, you need to download `1500/joy_pokemon.pt` as the embedding and `1500/joy_pokemon.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters. **The trigger word is `joy_pokemon`.** These are available steps: | Steps | bikini | free | nude | Download | |--------:|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:---------------------------------| | 1500 | ![bikini-1500](1500/previews/bikini.png) | ![free-1500](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/joy_pokemon.zip) | | 1400 | ![bikini-1400](1400/previews/bikini.png) | ![free-1400](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/joy_pokemon.zip) | | 1300 | ![bikini-1300](1300/previews/bikini.png) | ![free-1300](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/joy_pokemon.zip) | | 1200 | ![bikini-1200](1200/previews/bikini.png) | ![free-1200](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/joy_pokemon.zip) | | 1100 | ![bikini-1100](1100/previews/bikini.png) | ![free-1100](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/joy_pokemon.zip) | | 1000 | ![bikini-1000](1000/previews/bikini.png) | ![free-1000](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/joy_pokemon.zip) | | 900 | ![bikini-900](900/previews/bikini.png) | ![free-900](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/joy_pokemon.zip) | | 800 | ![bikini-800](800/previews/bikini.png) | ![free-800](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/joy_pokemon.zip) | | 700 | ![bikini-700](700/previews/bikini.png) | ![free-700](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/joy_pokemon.zip) | | 600 | ![bikini-600](600/previews/bikini.png) | ![free-600](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/joy_pokemon.zip) | | 500 | ![bikini-500](500/previews/bikini.png) | ![free-500](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/joy_pokemon.zip) | | 400 | ![bikini-400](400/previews/bikini.png) | ![free-400](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/joy_pokemon.zip) | | 300 | ![bikini-300](300/previews/bikini.png) | ![free-300](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/joy_pokemon.zip) | | 200 | ![bikini-200](200/previews/bikini.png) | ![free-200](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/joy_pokemon.zip) | | 100 | ![bikini-100](100/previews/bikini.png) | ![free-100](100/previews/free.png) | [<NSFW, click to 
see>](100/previews/nude.png) | [Download](100/joy_pokemon.zip) |
remg1997/xl-1.0
remg1997
2023-08-17T14:15:00Z
24
1
diffusers
[ "diffusers", "onnx", "safetensors", "text-to-image", "stable-diffusion", "arxiv:2307.01952", "arxiv:2211.01324", "arxiv:2108.01073", "arxiv:2112.10752", "license:openrail++", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2023-08-17T14:14:59Z
--- license: openrail++ tags: - text-to-image - stable-diffusion duplicated_from: stabilityai/stable-diffusion-xl-base-1.0 --- # SD-XL 1.0-base Model Card ![row01](01.png) ## Model ![pipeline](pipeline.png) [SDXL](https://arxiv.org/abs/2307.01952) consists of an [ensemble of experts](https://arxiv.org/abs/2211.01324) pipeline for latent diffusion: In a first step, the base model is used to generate (noisy) latents, which are then further processed with a refinement model (available here: https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/) specialized for the final denoising steps. Note that the base model can be used as a standalone module. Alternatively, we can use a two-stage pipeline as follows: First, the base model is used to generate latents of the desired output size. In the second step, we use a specialized high-resolution model and apply a technique called SDEdit (https://arxiv.org/abs/2108.01073, also known as "img2img") to the latents generated in the first step, using the same prompt. This technique is slightly slower than the first one, as it requires more function evaluations. Source code is available at https://github.com/Stability-AI/generative-models. ### Model Description - **Developed by:** Stability AI - **Model type:** Diffusion-based text-to-image generative model - **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md) - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses two fixed, pretrained text encoders ([OpenCLIP-ViT/G](https://github.com/mlfoundations/open_clip) and [CLIP-ViT/L](https://github.com/openai/CLIP/tree/main)). - **Resources for more information:** Check out our [GitHub Repository](https://github.com/Stability-AI/generative-models) and the [SDXL report on arXiv](https://arxiv.org/abs/2307.01952). ### Model Sources For research purposes, we recommend our `generative-models` GitHub repository (https://github.com/Stability-AI/generative-models), which implements the most popular diffusion frameworks (both training and inference) and for which new functionalities like distillation will be added over time. [Clipdrop](https://clipdrop.co/stable-diffusion) provides free SDXL inference. - **Repository:** https://github.com/Stability-AI/generative-models - **Demo:** https://clipdrop.co/stable-diffusion ## Evaluation ![comparison](comparison.png) The chart above evaluates user preference for SDXL (with and without refinement) over SDXL 0.9 and Stable Diffusion 1.5 and 2.1. The SDXL base model performs significantly better than the previous variants, and the model combined with the refinement module achieves the best overall performance. 
### 🧨 Diffusers Make sure to upgrade diffusers to >= 0.19.0: ``` pip install diffusers --upgrade ``` In addition, make sure to install `transformers`, `safetensors`, `accelerate` as well as the invisible watermark: ``` pip install invisible_watermark transformers accelerate safetensors ``` To just use the base model, you can run: ```py from diffusers import DiffusionPipeline import torch pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True, variant="fp16") pipe.to("cuda") # if using torch < 2.0 # pipe.enable_xformers_memory_efficient_attention() prompt = "An astronaut riding a green horse" image = pipe(prompt=prompt).images[0] ``` To use the whole base + refiner pipeline as an ensemble of experts, you can run: ```py from diffusers import DiffusionPipeline import torch # load both base & refiner base = DiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True ) base.to("cuda") refiner = DiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-refiner-1.0", text_encoder_2=base.text_encoder_2, vae=base.vae, torch_dtype=torch.float16, use_safetensors=True, variant="fp16", ) refiner.to("cuda") # Define how many steps and what % of steps to be run on each expert (80/20) here n_steps = 40 high_noise_frac = 0.8 prompt = "A majestic lion jumping from a big stone at night" # run both experts image = base( prompt=prompt, num_inference_steps=n_steps, denoising_end=high_noise_frac, output_type="latent", ).images image = refiner( prompt=prompt, num_inference_steps=n_steps, denoising_start=high_noise_frac, image=image, ).images[0] ``` When using `torch >= 2.0`, you can improve the inference speed by 20-30% with `torch.compile`. Simply wrap the UNet with `torch.compile` before running the pipeline: ```py pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) ``` If you are limited by GPU VRAM, you can enable *cpu offloading* by calling `pipe.enable_model_cpu_offload` instead of `.to("cuda")`: ```diff - pipe.to("cuda") + pipe.enable_model_cpu_offload() ``` For more information on how to use Stable Diffusion XL with `diffusers`, please have a look at [the Stable Diffusion XL Docs](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl). ### Optimum [Optimum](https://github.com/huggingface/optimum) provides a Stable Diffusion pipeline compatible with both [OpenVINO](https://docs.openvino.ai/latest/index.html) and [ONNX Runtime](https://onnxruntime.ai/). #### OpenVINO To install Optimum with the dependencies required for OpenVINO: ```bash pip install optimum[openvino] ``` To load an OpenVINO model and run inference with OpenVINO Runtime, you need to replace `StableDiffusionXLPipeline` with Optimum `OVStableDiffusionXLPipeline`. In case you want to load a PyTorch model and convert it to the OpenVINO format on-the-fly, you can set `export=True`. 
```diff - from diffusers import StableDiffusionXLPipeline + from optimum.intel import OVStableDiffusionXLPipeline model_id = "stabilityai/stable-diffusion-xl-base-1.0" - pipeline = StableDiffusionXLPipeline.from_pretrained(model_id) + pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id) prompt = "A majestic lion jumping from a big stone at night" image = pipeline(prompt).images[0] ``` You can find more examples (such as static reshaping and model compilation) in optimum [documentation](https://huggingface.co/docs/optimum/main/en/intel/inference#stable-diffusion-xl). #### ONNX To install Optimum with the dependencies required for ONNX Runtime inference: ```bash pip install optimum[onnxruntime] ``` To load an ONNX model and run inference with ONNX Runtime, you need to replace `StableDiffusionXLPipeline` with Optimum `ORTStableDiffusionXLPipeline`. In case you want to load a PyTorch model and convert it to the ONNX format on-the-fly, you can set `export=True`. ```diff - from diffusers import StableDiffusionXLPipeline + from optimum.onnxruntime import ORTStableDiffusionXLPipeline model_id = "stabilityai/stable-diffusion-xl-base-1.0" - pipeline = StableDiffusionXLPipeline.from_pretrained(model_id) + pipeline = ORTStableDiffusionXLPipeline.from_pretrained(model_id) prompt = "A majestic lion jumping from a big stone at night" image = pipeline(prompt).images[0] ``` You can find more examples in optimum [documentation](https://huggingface.co/docs/optimum/main/en/onnxruntime/usage_guides/models#stable-diffusion-xl). ## Uses ### Direct Use The model is intended for research purposes only. Possible research areas and tasks include: - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. Excluded uses are described below. ### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. ## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model struggles with more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The autoencoding part of the model is lossy. ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
CyberHarem/viola_pokemon
CyberHarem
2023-08-17T14:03:05Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/viola_pokemon", "license:mit", "region:us" ]
text-to-image
2023-08-17T13:56:27Z
--- license: mit datasets: - CyberHarem/viola_pokemon pipeline_tag: text-to-image tags: - art --- # Lora of viola_pokemon This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora. For example, if you want to use the model from step 1500, you need to download `1500/viola_pokemon.pt` as the embedding and `1500/viola_pokemon.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters. **The trigger word is `viola_pokemon`.** These are available steps: | Steps | pattern_1 | pattern_2 | pattern_3 | bikini | free | nude | Download | |--------:|:-----------------------------------------------|:----------------------------------------------------|:----------------------------------------------------|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:-----------------------------------| | 1500 | ![pattern_1-1500](1500/previews/pattern_1.png) | [<NSFW, click to see>](1500/previews/pattern_2.png) | [<NSFW, click to see>](1500/previews/pattern_3.png) | ![bikini-1500](1500/previews/bikini.png) | ![free-1500](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/viola_pokemon.zip) | | 1400 | ![pattern_1-1400](1400/previews/pattern_1.png) | [<NSFW, click to see>](1400/previews/pattern_2.png) | [<NSFW, click to see>](1400/previews/pattern_3.png) | ![bikini-1400](1400/previews/bikini.png) | ![free-1400](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/viola_pokemon.zip) | | 1300 | ![pattern_1-1300](1300/previews/pattern_1.png) | [<NSFW, click to see>](1300/previews/pattern_2.png) | [<NSFW, click to see>](1300/previews/pattern_3.png) | ![bikini-1300](1300/previews/bikini.png) | ![free-1300](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/viola_pokemon.zip) | | 1200 | ![pattern_1-1200](1200/previews/pattern_1.png) | [<NSFW, click to see>](1200/previews/pattern_2.png) | [<NSFW, click to see>](1200/previews/pattern_3.png) | ![bikini-1200](1200/previews/bikini.png) | ![free-1200](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/viola_pokemon.zip) | | 1100 | ![pattern_1-1100](1100/previews/pattern_1.png) | [<NSFW, click to see>](1100/previews/pattern_2.png) | [<NSFW, click to see>](1100/previews/pattern_3.png) | ![bikini-1100](1100/previews/bikini.png) | ![free-1100](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/viola_pokemon.zip) | | 1000 | ![pattern_1-1000](1000/previews/pattern_1.png) | [<NSFW, click to see>](1000/previews/pattern_2.png) | [<NSFW, click to see>](1000/previews/pattern_3.png) | ![bikini-1000](1000/previews/bikini.png) | ![free-1000](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/viola_pokemon.zip) | | 900 | ![pattern_1-900](900/previews/pattern_1.png) | [<NSFW, click to see>](900/previews/pattern_2.png) | [<NSFW, click to see>](900/previews/pattern_3.png) | ![bikini-900](900/previews/bikini.png) | ![free-900](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | 
[Download](900/viola_pokemon.zip) | | 800 | ![pattern_1-800](800/previews/pattern_1.png) | [<NSFW, click to see>](800/previews/pattern_2.png) | [<NSFW, click to see>](800/previews/pattern_3.png) | ![bikini-800](800/previews/bikini.png) | ![free-800](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/viola_pokemon.zip) | | 700 | ![pattern_1-700](700/previews/pattern_1.png) | [<NSFW, click to see>](700/previews/pattern_2.png) | [<NSFW, click to see>](700/previews/pattern_3.png) | ![bikini-700](700/previews/bikini.png) | ![free-700](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/viola_pokemon.zip) | | 600 | ![pattern_1-600](600/previews/pattern_1.png) | [<NSFW, click to see>](600/previews/pattern_2.png) | [<NSFW, click to see>](600/previews/pattern_3.png) | ![bikini-600](600/previews/bikini.png) | ![free-600](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/viola_pokemon.zip) | | 500 | ![pattern_1-500](500/previews/pattern_1.png) | [<NSFW, click to see>](500/previews/pattern_2.png) | [<NSFW, click to see>](500/previews/pattern_3.png) | ![bikini-500](500/previews/bikini.png) | ![free-500](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/viola_pokemon.zip) | | 400 | ![pattern_1-400](400/previews/pattern_1.png) | [<NSFW, click to see>](400/previews/pattern_2.png) | [<NSFW, click to see>](400/previews/pattern_3.png) | ![bikini-400](400/previews/bikini.png) | ![free-400](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/viola_pokemon.zip) | | 300 | ![pattern_1-300](300/previews/pattern_1.png) | [<NSFW, click to see>](300/previews/pattern_2.png) | [<NSFW, click to see>](300/previews/pattern_3.png) | ![bikini-300](300/previews/bikini.png) | ![free-300](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/viola_pokemon.zip) | | 200 | ![pattern_1-200](200/previews/pattern_1.png) | [<NSFW, click to see>](200/previews/pattern_2.png) | [<NSFW, click to see>](200/previews/pattern_3.png) | ![bikini-200](200/previews/bikini.png) | ![free-200](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/viola_pokemon.zip) | | 100 | ![pattern_1-100](100/previews/pattern_1.png) | [<NSFW, click to see>](100/previews/pattern_2.png) | [<NSFW, click to see>](100/previews/pattern_3.png) | ![bikini-100](100/previews/bikini.png) | ![free-100](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/viola_pokemon.zip) |
SUPERSOKOL/marian-finetuned-kde4-en-to-uk
SUPERSOKOL
2023-08-17T13:59:11Z
126
2
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-08-17T11:28:22Z
--- license: apache-2.0 tags: - translation - generated_from_trainer datasets: - kde4 metrics: - bleu model-index: - name: marian-finetuned-kde4-en-to-uk results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: kde4 type: kde4 config: en-uk split: train args: en-uk metrics: - name: Bleu type: bleu value: 50.09005982889118 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-uk This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-uk](https://huggingface.co/Helsinki-NLP/opus-mt-en-uk) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.7624 - Bleu: 50.0901 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 1.13.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
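A minimal usage sketch, not part of the generated card:

```python
from transformers import pipeline

translator = pipeline("translation", model="SUPERSOKOL/marian-finetuned-kde4-en-to-uk")
print(translator("Unable to open the configuration file."))
```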
viswavi/datafinder-scibert-nl-queries
viswavi
2023-08-17T13:54:02Z
116
1
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "fill-mask", "arxiv:2305.16636", "license:mit", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-07T19:40:25Z
--- license: mit pipeline_tag: fill-mask --- This is a version of the SciBERT encoder trained for the purpose of retrieving datasets by textual description given a natural language query. If useful, please cite ``` @inproceedings{viswanathan23acl, title = {DataFinder: Scientific Dataset Recommendation from Natural Language Descriptions}, author = {Vijay Viswanathan and Luyu Gao and Tongshuang Wu and Pengfei Liu and Graham Neubig}, booktitle = {Annual Conference of the Association for Computational Linguistics (ACL)}, address = {Toronto, Canada}, month = {July}, url = {https://arxiv.org/abs/2305.16636}, year = {2023} } ```
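The card does not show how to turn the encoder into a retriever. The sketch below is one hedged way to do it: the CLS-token pooling and cosine scoring are assumptions for illustration, since the card does not specify how query and dataset vectors were built in DataFinder.

```python
# Hedged retrieval sketch: pooling and scoring choices are assumptions, not documented above.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("viswavi/datafinder-scibert-nl-queries")
model = AutoModel.from_pretrained("viswavi/datafinder-scibert-nl-queries")

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch)
    return out.last_hidden_state[:, 0]  # CLS-token embedding (assumed pooling)

query = embed(["datasets for coreference resolution in biomedical abstracts"])
candidates = embed([
    "A corpus of annotated coreference chains in PubMed abstracts.",
    "A benchmark of English question-answer pairs drawn from Wikipedia.",
])
scores = torch.nn.functional.cosine_similarity(query, candidates)
print(scores)  # higher score = better-matching dataset description
```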
CyberHarem/furisode_girl_pokemon
CyberHarem
2023-08-17T13:40:46Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/furisode_girl_pokemon", "license:mit", "region:us" ]
text-to-image
2023-08-17T13:35:04Z
--- license: mit datasets: - CyberHarem/furisode_girl_pokemon pipeline_tag: text-to-image tags: - art --- # Lora of furisode_girl_pokemon This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora. For example, if you want to use the model from step 1500, you need to download `1500/furisode_girl_pokemon.pt` as the embedding and `1500/furisode_girl_pokemon.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters. **The trigger word is `furisode_girl_pokemon`.** These are available steps: | Steps | pattern_1 | bikini | free | nude | Download | |--------:|:----------------------------------------------------|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:-------------------------------------------| | 1500 | [<NSFW, click to see>](1500/previews/pattern_1.png) | ![bikini-1500](1500/previews/bikini.png) | ![free-1500](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/furisode_girl_pokemon.zip) | | 1400 | [<NSFW, click to see>](1400/previews/pattern_1.png) | ![bikini-1400](1400/previews/bikini.png) | ![free-1400](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/furisode_girl_pokemon.zip) | | 1300 | [<NSFW, click to see>](1300/previews/pattern_1.png) | ![bikini-1300](1300/previews/bikini.png) | ![free-1300](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/furisode_girl_pokemon.zip) | | 1200 | [<NSFW, click to see>](1200/previews/pattern_1.png) | ![bikini-1200](1200/previews/bikini.png) | ![free-1200](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/furisode_girl_pokemon.zip) | | 1100 | [<NSFW, click to see>](1100/previews/pattern_1.png) | ![bikini-1100](1100/previews/bikini.png) | ![free-1100](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/furisode_girl_pokemon.zip) | | 1000 | [<NSFW, click to see>](1000/previews/pattern_1.png) | ![bikini-1000](1000/previews/bikini.png) | ![free-1000](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/furisode_girl_pokemon.zip) | | 900 | [<NSFW, click to see>](900/previews/pattern_1.png) | ![bikini-900](900/previews/bikini.png) | ![free-900](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/furisode_girl_pokemon.zip) | | 800 | [<NSFW, click to see>](800/previews/pattern_1.png) | ![bikini-800](800/previews/bikini.png) | ![free-800](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/furisode_girl_pokemon.zip) | | 700 | [<NSFW, click to see>](700/previews/pattern_1.png) | ![bikini-700](700/previews/bikini.png) | ![free-700](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/furisode_girl_pokemon.zip) | | 600 | [<NSFW, click to see>](600/previews/pattern_1.png) | ![bikini-600](600/previews/bikini.png) | ![free-600](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/furisode_girl_pokemon.zip) | | 500 | [<NSFW, click to 
see>](500/previews/pattern_1.png) | ![bikini-500](500/previews/bikini.png) | ![free-500](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/furisode_girl_pokemon.zip) | | 400 | [<NSFW, click to see>](400/previews/pattern_1.png) | ![bikini-400](400/previews/bikini.png) | ![free-400](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/furisode_girl_pokemon.zip) | | 300 | [<NSFW, click to see>](300/previews/pattern_1.png) | ![bikini-300](300/previews/bikini.png) | ![free-300](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/furisode_girl_pokemon.zip) | | 200 | [<NSFW, click to see>](200/previews/pattern_1.png) | ![bikini-200](200/previews/bikini.png) | ![free-200](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/furisode_girl_pokemon.zip) | | 100 | [<NSFW, click to see>](100/previews/pattern_1.png) | ![bikini-100](100/previews/bikini.png) | ![free-100](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/furisode_girl_pokemon.zip) |
paarth-sachan/taxi_gymnasium
paarth-sachan
2023-08-17T13:38:05Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-08-17T13:38:03Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: taxi_gymnasium results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.69 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="paarth-sachan/taxi_gymnasium", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
paarth-sachan/q-FrozenLake-v1-4x4-noSlippery
paarth-sachan
2023-08-17T13:34:00Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-08-17T13:33:58Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="paarth-sachan/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
linoyts/lora-trained-xl-colab-3d-icon-0.0001-1500-1
linoyts
2023-08-17T13:33:53Z
6
1
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-08-17T08:46:49Z
--- license: openrail++ base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: sdxl3dicon digital art tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - LinoyTsaban/lora-trained-xl-colab-3d-icon-0.0001-1500-1 These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on sdxl3dicon digital art using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. LoRA for the text encoder was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
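A minimal usage sketch, not part of the generated card; the fp16-fix VAE is loaded because the card says it was used during training.

```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("linoyts/lora-trained-xl-colab-3d-icon-0.0001-1500-1")

image = pipe("a rocket ship, sdxl3dicon digital art").images[0]
image.save("icon.png")
```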