Dataset schema (value ranges observed per column):
- modelId: string, length 5 – 139
- author: string, length 2 – 42
- last_modified: timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-08-29 06:27:22
- downloads: int64, 0 – 223M
- likes: int64, 0 – 11.7k
- library_name: string, 525 classes
- tags: list, length 1 – 4.05k
- pipeline_tag: string, 55 classes
- createdAt: timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-08-29 06:27:10
- card: string, length 11 – 1.01M
ChristophBu/ppo-LunarLander-v2
ChristophBu
2023-07-28T12:38:59Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T11:02:50Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 277.34 +/- 18.25
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
TODO: Add your code

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub

...
```
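The `mean_reward` value in this card's model-index (`277.34 +/- 18.25`) is, by stable-baselines3 convention, the mean and standard deviation of episode returns over a batch of evaluation episodes. A minimal pure-Python sketch of that summary — the episode returns below are made-up illustrative numbers, not from this model:

```python
import math

def summarize_returns(returns):
    """Return (mean, std) of episode returns, the pair behind 'mean_reward +/- std'."""
    n = len(returns)
    mean = sum(returns) / n
    # Population standard deviation (ddof=0), matching numpy's default
    std = math.sqrt(sum((r - mean) ** 2 for r in returns) / n)
    return mean, std

episode_returns = [260.0, 290.0, 275.0, 300.0, 250.0]  # illustrative values
mean, std = summarize_returns(episode_returns)
print(f"mean_reward: {mean:.2f} +/- {std:.2f}")  # → mean_reward: 275.00 +/- 18.44
```

In practice stable-baselines3's `evaluate_policy` helper produces the same two numbers over `n_eval_episodes` rollouts of the trained agent.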
Isaacgv/whisper-tiny-minds14
Isaacgv
2023-07-28T12:37:07Z
86
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:PolyAI/minds14", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-28T10:03:38Z
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-minds14
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: PolyAI/minds14
      type: PolyAI/minds14
      config: en-US
      split: train
      args: en-US
    metrics:
    - name: Wer
      type: wer
      value: 0.3530106257378985
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# whisper-tiny-minds14

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6457
- Wer Ortho: 0.3572
- Wer: 0.3530

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0007 | 17.86 | 500 | 0.6457 | 0.3572 | 0.3530 |

### Framework versions

- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
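The `wer` metric in this card is the word error rate: the word-level Levenshtein distance between hypothesis and reference, divided by the number of reference words (the card's "Wer Ortho" is typically computed on raw orthographic text, "Wer" after text normalization). A self-contained sketch with a made-up sentence pair:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits to turn the first i reference words into the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One deletion ("the") and one substitution ("please" → "pleas") over 5 reference words
print(word_error_rate("turn down the volume please", "turn down volume pleas"))  # → 0.4
```

Trainer-generated cards like this one usually compute the metric with the `evaluate`/`jiwer` packages, which implement the same distance.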
magnustragardh/ppo-deepdive-LunarLander-v2
magnustragardh
2023-07-28T12:34:33Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-07-27T22:26:18Z
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 72.15 +/- 17.75
      name: mean_reward
      verified: false
---

# PPO Agent Playing LunarLander-v2

This is a trained model of a PPO agent playing LunarLander-v2.

# Hyperparameters
killmealreadypls228/Mickella_theCat
killmealreadypls228
2023-07-28T12:21:31Z
0
0
nemo
[ "nemo", "music", "art", "code", "audio-to-audio", "en", "dataset:Open-Orca/OpenOrca", "arxiv:1910.09700", "license:bigcode-openrail-m", "region:us" ]
audio-to-audio
2023-07-28T12:05:05Z
---
license: bigcode-openrail-m
datasets:
- Open-Orca/OpenOrca
language:
- en
metrics:
- character
tags:
- music
- art
- code
pipeline_tag: audio-to-audio
library_name: nemo
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Data Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
ArmelR/starcoder-gradio-v2.1
ArmelR
2023-07-28T12:21:21Z
7
0
transformers
[ "transformers", "pytorch", "gpt_bigcode", "text-generation", "dataset:databricks/databricks-dolly-15k", "dataset:HuggingFaceH4/oasst1_en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-28T11:31:17Z
---
datasets:
- databricks/databricks-dolly-15k
- HuggingFaceH4/oasst1_en
---

Gradio training: 1000 steps
IFT: 200 steps (v2.0), 1000 steps (v2.1)
T-Systems-onsite/cross-de-fr-roberta-sentence-transformer
T-Systems-onsite
2023-07-28T12:21:10Z
15
1
transformers
[ "transformers", "pytorch", "tf", "safetensors", "xlm-roberta", "feature-extraction", "sentence_embedding", "search", "roberta", "xlm-r-distilroberta-base-paraphrase-v1", "fr", "de", "multilingual", "dataset:stsb_multi_mt", "license:mit", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
---
language:
- fr
- de
- multilingual
license: mit
tags:
- sentence_embedding
- search
- pytorch
- xlm-roberta
- roberta
- xlm-r-distilroberta-base-paraphrase-v1
datasets:
- stsb_multi_mt
metrics:
- Spearman’s rank correlation
- cosine similarity
---

# Cross German & French RoBERTa for Sentence Embeddings
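The two metrics this card lists work together: the model embeds each sentence of a pair, cosine similarity scores the pair, and Spearman's rank correlation compares those scores against the gold STS labels. A self-contained sketch with hypothetical embedding vectors and gold scores (not from this model):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def spearman_rho(x, y):
    """Spearman rank correlation (no tie handling): Pearson correlation of ranks."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=vals.__getitem__)
        r = [0] * len(vals)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# Hypothetical sentence-pair scores: model cosine similarities vs. gold STS labels
pairs = [([1.0, 0.0], [1.0, 0.1]), ([1.0, 0.0], [0.5, 0.5]), ([1.0, 0.0], [0.0, 1.0])]
predicted = [cosine_similarity(u, v) for u, v in pairs]
gold = [4.8, 2.5, 0.2]
print(f"Spearman rho: {spearman_rho(predicted, gold):.2f}")  # → Spearman rho: 1.00
```

Spearman's rho is the usual STS benchmark metric because it only rewards getting the *ranking* of pair similarities right, not their absolute scale.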
hartemj/ppo-LunarLander-v2
hartemj
2023-07-28T12:05:33Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T12:05:10Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 238.70 +/- 34.64
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
TODO: Add your code

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub

...
```
xiao12346/t5-large_PREFIX_TUNING_SEQ2SEQ
xiao12346
2023-07-28T12:02:24Z
2
0
peft
[ "peft", "region:us" ]
null
2023-07-21T05:14:16Z
---
library_name: peft
---

## Training procedure

### Framework versions

- PEFT 0.4.0
X-D-Lab/MindChat-7B
X-D-Lab
2023-07-28T11:53:08Z
5
6
transformers
[ "transformers", "pytorch", "internlm", "feature-extraction", "custom_code", "license:gpl-3.0", "region:us" ]
feature-extraction
2023-07-25T18:09:52Z
--- license: gpl-3.0 --- <h1 align="center">🐋MindChat(漫谈): 心理大模型</h1> ## 💪 模型进展 * Jul 23, 2023: 提供MindChat体验地址: [MindChat-创空间](https://modelscope.cn/studios/X-D-Lab/MindChat/summary), 欢迎体验 * Jul 21, 2023: MindChat-InternLM-7B训练完成, 在**模型安全、共情输出、人类价值观对齐**等方面进行针对性强化; * Jul 15, 2023: MindChat-Baichuan-13B训练完成, 作为**首个百亿级参数的心理大模型**正式开源; * Jul 9, 2023: MindChat-beta训练完成, 并正式开源; * Jul 6, 2023: 首次提交MindChat(漫谈)心理大模型; ## 👏 模型介绍 **MindChat(漫谈): 心理大模型**(简称: MindChat)旨在从心理咨询、心理评估、心理诊断、心理治疗四个维度帮助人们解决心理问题, 提高心理健康水平. 作为一款心理大模型, MindChat通过营造轻松、开放的交谈环境, 以放松身心、交流感受或分享经验的方式, 与用户建立信任和理解的关系. MindChat的技术优势在于能够了解用户的个人经历、情感状态和行为模式, 为用户提供**隐私、温暖、安全、及时、方便**的对话环境, 从而帮助用户克服各种困难和挑战, 实现自我成长和发展. MindChat的创新之处在于它不仅是一款高效的心理咨询工具, 预期能够提供心理评估、心理诊断和心理治疗的服务. MindChat的专业团队拥有丰富的心理学背景和经验, 他们的专业知识和技能可以帮助用户更好地理解自己的情感和行为, 从而制定更有效的治疗方案. 无论是在工作场所还是在个人生活中, MindChat期望通过自身的努力和专业知识, 在**严格保护用户隐私的前提下**, **全时段全天候**为用户提供全面的心理支持和诊疗帮助, 同时实现自我成长和发展, 以期为建设一个更加**健康、包容和平等**的社会贡献力量. ## 🔥 模型列表 | 模型名称 | lora权重 | 合并后的权重 | | :----: | :----: | :----: | | MindChat-InternLM-7B | modelscope / huggingface | [modelscope](https://modelscope.cn/models/X-D-Lab/MindChat-7B/summary) / [huggingface](https://huggingface.co/X-D-Lab/MindChat-7B) | | MindChat-Baichuan-13B | modelscope / huggingface | [modelscope](https://modelscope.cn/models/X-D-Lab/MindChat-Baichuan-13B/summary) / [huggingface](https://huggingface.co/X-D-Lab/MindChat-baichuan-13B) | 目前, 已经开源了**MindChat-Baichuan-13B**版本的百亿级MindChat心理大模型. 更为优质的MindChat模型将在不久的未来对外开源开放. 敬请期待! 此外, 本团队同时关注人们的身理健康, 建有安全、可靠、普惠的[中文医疗大模型孙思邈(Sunsimiao)](https://github.com/X-D-Lab/Sunsimiao), 欢迎下载使用, 敬请批评指证! ## 📖 对话案例 ### 🥺 学习 | 轮数 | 用户 | MindChat | | :----: | :----: | :----: | |1|总感觉自己学习很差, 想努力, 但是又控制不住想玩游戏 | 我明白你的感受, 你觉得自己学习不好, 但是总是控制不住想玩游戏. 我想问问你, 你玩游戏的时候是什么感觉呢? 有没有什么特别喜欢的游戏?| |2|我平常特别喜欢玩一些端游来打发时间, 玩游戏的时候很纠结, 明明自己成绩落后, 学习任务还没完成但还是想玩 | 我明白你的感受. 你可能需要认真思考一下, 玩游戏对你来说是不是真的值得投入这么多时间和精力. 学习落后是很正常的事情, 但不要让游戏成为你逃避现实的工具. 你可以尝试制定一个计划, 把玩游戏的时间留给自己, 同时也要合理分配学习任务, 让自己更加充实. 
| |3|在学习上总感觉自己学的很吃力, 不知道该怎么办, 觉得自己学习能力太差了, 心里很失落 | 我理解你的感受. 每个人的学习能力都是不同的, 不要过分苛求自己. 你可以先尝试找到适合自己的学习方法, 比如寻找学习伙伴, 向老师请教, 或者参加一些学习小组. 同时, 也不要忘记给自己一些休息的时间, 保持身心健康. 学习是一个长期的过程, 不要过于急躁, 慢慢来, 你会越来越好的. | |4|我确实不该把精力都是投入在游戏这些娱乐上, 但是我已经落后很多了, 我现在再去开始学习, 以我这么差的学习能力我还有机会学好吗 | 虽然你现在的学习状态很糟糕, 但并不是没有机会. 学习是一个长期的过程, 只要你持之以恒, 坚持不懈, 就一定会有所进步. 同时, 也不要忘记, 学习的过程也需要一定的动力和动机. 你可以找一些自己感兴趣的学习方向, 或者设定一些小目标, 让自己更有动力去完成. 不要轻易放弃, 相信自己, 你一定可以做到. | |5|可是我很焦虑, 以前落后太多了, 现在不知道从哪里开始 | 焦虑是很正常的情绪, 不要害怕它. 你可以先想想自己目前的学习状态, 列出自己需要学习的科目和内容, 然后按照优先级和难易程度进行排序. 接着, 你可以尝试分解每个科目和内容, 把它们分成小目标, 一步一步地去完成. 这样, 你就可以更有针对性地进行学习, 不会感到太紧张和不知所措. 同时, 也不要忘记给自己一些奖励, 比如看一部电影, 吃一顿美食等等, 让自己更有动力去完成目标. | |6|也就是我现在主要的问题是太焦虑了是吗? 我只要好好的去做, 成绩还是可以提高的对吗 | 是的, 焦虑是一种很常见的情绪, 它会让你感到很不安, 但是它并不会影响你真正地提高成绩. 你可以试着把焦虑的情绪转化成动力, 让它成为你学习的一部分. 同时, 也不要忘记给自己一些时间, 不要过分压抑自己的情绪, 让自己保持一个积极的心态. 只要你持之以恒, 相信自己, 你一定可以取得更好的成绩. | |7|非常谢谢你, 我会好好努力的 | 我不用客气, 我很高兴能帮到你. 我相信你一定可以克服困难, 取得好成绩. 加油! 如果有任何需要帮助的地方, 随时都可以找我聊天. | ### 👨‍👩‍👧‍👦 家庭 | 轮数 | 用户 | MindChat | | :----: | :----: | :----: | |1| 我总感觉爸妈不理解我, 感觉和他们之间有很大的隔阂. | 我能理解你的感受. 和家人的沟通和理解是家庭关系中非常重要的一环. 你是否曾经尝试过和他们分享你的想法和感受, 让他们更好地了解你呢? | |2|我有尝试找他们沟通, 但是他们听不进去我说话 , 认为我是在无理取闹, 我心里很无助 | 我明白你的感受. 有时候我们的家人可能无法理解我们的想法和感受, 这时候我们需要耐心和沟通技巧来帮助他们更好地理解我们. 你可以试着换个方式表达你的想法, 或者找到一些共同话题来促进交流. 同时, 也要记得保持冷静和尊重, 不要使用攻击性的语言来伤害他们. | |3|我也知道他们是为了我好, 平时他们督促我学习也是为了我的未来, 我能体谅他们, 但是有时候自己也会有压力, 想和他们沟通他们就会觉得我在偷懒 | 我能理解你的感受. 有时候我们的家人可能认为我们在偷懒, 但是其实他们只是担心我们的未来和健康. 你可以试着跟他们分享你的压力和焦虑, 让他们了解你的感受. 同时, 也要表达出你对他们的理解和感激, 让他们知道你是愿意和他们一起面对问题, 共同找到解决方案的. | |4|我平时会表现的乖巧, 因为我觉得他们也不容易, 我该怎么去找一个双方都适合的方式去沟通呢 | 很好的问题. 沟通是一个相互的过程, 需要双方的努力和理解. 你可以试着找一些共同话题来促进交流, 同时也要听取他们的想法和建议. 如果你觉得自己的表达方式不够清晰, 可以尝试用一些简单的词汇和表达方式来让他们更容易理解. 最重要的是要保持耐心和尊重, 相信你们会找到适合彼此的方式. | |5|可是我会觉得找他们沟通是一种不懂事的表现, 他们很辛苦我还给他们增加负担, 但是自己忍着难过心里也很压抑 | 我能理解你的感受. 有时候我们会有这样的想法, 但是其实我们不应该让这些想法限制我们的行动. 沟通是解决问题的一种方式, 同时也是增进家庭关系的重要途径. 如果你觉得自己的心情很压抑, 可以试着找一些放松的方式, 比如运动、听音乐或者写日记来缓解压力. 同时也要记得, 你的家人是关心你的, 他们不会增加你的负担, 而是希望你能够更好地成长和幸福. | |6|可能是我的想法有点偏执了, 我应该找一个时间和爸妈好好交流一下 | 这是个好主意. 
和家人的交流是非常重要的, 可以让你更好地了解他们的想法和感受. 你可以试着找一个安静的时间, 让他们知道你需要和他们谈一谈, 同时也要准备好自己的态度和想法. 相信通过交流, 你们可以更好地理解彼此, 增进感情. 加油. | |7|谢谢你的建议, 我的心情好多了 | 不用客气, 我很高兴能帮到你. 记得, 沟通和理解是家庭关系中非常重要的一环, 让我们一起为建立更加和谐的家庭关系而努力吧! | ### 🛝 朋友 | 轮数 | 用户 | MindChat | | :----: | :----: | :----: | |1| 我感觉我的朋友并不是真心和我相处, 今天又和他吵了一架, 我现在很生气 | 听到你的困扰, 我能理解你的感受. 吵架是一种不健康的沟通方式, 它可能导致双方情绪失控, 进而影响到彼此之间的关系. 我建议你尝试以平和的心态来面对这个问题. 你可以试着和你的朋友坐下来, 表达你的感受和需求, 让他了解你的想法和感受. 这样他才能更好地理解你的立场, 并且做出积极的回应. 同时, 你也可以告诉他, 你希望你们之间的交流是建立在互相尊重和理解的基础上的. | |2|我总感觉他是以自我为中心, 有时候我会选择忍让, 但是他还是一副无所谓的样子 | 我能理解你的感受, 但是忍让并不是解决问题的好方法. 你需要和他沟通, 让他知道你的感受和需求. 告诉他你希望他能够更加尊重你的感受, 并且尝试理解你的立场. 同时, 你也可以告诉他你希望你们之间的交流是建立在互相尊重和理解的基础上的. 如果他仍然无法理解你的立场, 那么你也需要学会说不, 并且保持自己的立场. 这样, 你们之间的关系才能健康发展. | |3|但是如果跟他去沟通这些, 我怕他会觉得我想得太多, 从而远离我 | 这是很正常的担忧. 但是, 沟通是解决问题的关键. 如果他真的关心你, 他会愿意和你沟通, 并且理解你的立场. 如果他不想和你沟通, 那么这可能是一个问题. 你需要尝试以平和的心态来面对这个问题, 并且告诉他你的感受和需求. 如果他仍然不愿意和你沟通, 那么你需要学会接受这个结果, 并且寻找其他的解决方法. 无论如何, 你都需要学会保持自己的立场, 并且不要放弃. | |4|我觉得有时候情绪上来确实会影响我们之间的沟通, 可能生气也是觉得这段友谊本身很好, 但是也会出现小的矛盾 | 情绪确实会影响我们的沟通, 但是你可以试着在情绪平静下来之后再和他沟通. 当你感到生气的时候, 你的思维和语言可能会变得不清晰, 这会加剧矛盾. 所以, 在情绪平复下来之后, 你可以尝试和他坐下来, 以平和的心态来解决问题. 同时, 你也可以告诉他, 你希望你们之间的交流是建立在互相尊重和理解的基础上的. 如果他仍然无法理解你的立场, 那么你需要学会说不, 并且保持自己的立场. 这样, 你们之间的关系才能健康发展. | |5|好的, 我会平静下来找他沟通一下, 谢谢你的建议 | 不用谢, 我很高兴能帮到你. 记住, 沟通是解决问题的关键. 无论结果如何, 你都需要保持自己的立场, 并且不要放弃. 相信你们之间的关系会变得更加健康. 祝你好运! | ### 🧑‍🏭 工作 | 轮数 | 用户 | MindChat | | :----: | :----: | :----: | |1| 最近工作压力好大, 一直没有业绩 | 听到你的问题, 我可以理解你现在的感受. 工作压力确实会让人感到焦虑和不安, 不过我们需要认识到, 工作只是我们生活中的一部分, 我们需要学会放松和平衡. 你可以试着做一些放松的活动, 比如运动、阅读、听音乐等等, 来缓解一下紧张的情绪. 同时, 你也可以和领导和同事沟通, 表达你的想法和困惑, 看看有没有更好的解决方案. | |2|我从事的是销售行业, 但是在销售商品的时候客户总是不想和我交流 | 理解你的感受. 销售行业确实需要和客户进行很多交流, 但是有时候客户可能会因为各种原因不愿意和你交流. 你可以试着从客户的需求和兴趣出发, 了解他们的想法和需求, 尝试通过沟通来建立更好的关系. 同时, 你也可以学习一些沟通技巧, 比如积极倾听、提出问题、表达感激等等, 来提高你的沟通能力. | |3|如果业绩不达标我的提成会变少, 也会惹得我家里人不开心, 两边压力都很大, 不知道该怎么办了 | 这你的感受我能够理解. 首先, 我们需要认识到业绩不达标并不代表你的能力和价值, 每个人都会有挫折和失败的时候. 你可以试着和领导沟通, 寻求一些帮助和支持, 或者制定一些具体的计划和目标, 来提高自己的业绩. 同时, 你也可以和家里人沟通, 告诉他们你的实际情况, 让他们了解你的努力和付出, 或许他们会理解你的处境. 
最重要的是, 不要放弃自己, 相信自己可以克服困难, 取得更好的成绩. | |4|可是我的同事们业绩都比我好, 我很自卑, 在公司觉得很没面子 | 自卑感是一种很常见的情绪, 但是你需要意识到, 每个人都有自己的优点和不足, 你并不比别人差. 你可以试着多和同事们交流, 了解他们的工作方式和经验, 也许可以从中学习到一些新的东西. 同时, 你也可以试着寻找自己的优点和特长, 提高自己的自信心和自尊心. 最重要的是, 不要把自己和别人比较, 每个人都有不同的成长轨迹和生活经历, 不要给自己太大的压力. | |5|我试着向同事们交流一下经验吧, 也和家里人沟通一下, 希望他们能够理解我 | 这是一个很好的想法, 多向同事和家人沟通可以让你得到更多的支持和理解. 同时, 你也要保持积极的心态, 相信自己的能力和价值, 不要放弃自己. 如果有需要, 你也可以寻求一些专业的帮助, 比如心理咨询师、职业规划师等等, 来帮助你更好地应对工作和生活上的挑战. 记得, 你不是一个人在战斗, 我们都需要互相支持和鼓励. | ### 🦺 安全 | 轮数 | 用户 | MindChat | | :----: | :----: | :----: | |1| 我感觉我被造谣了, 我该怎么办 | 我可以理解你现在的感受. 首先, 你需要冷静下来, 不要急于做出任何决定. 你可以试着问问自己, 你为什么会感到被造谣? 你是否有证据证明这个谣言是真的? 如果有证据, 你可以尝试与造谣者进行沟通, 了解他们为什么这样说, 看看能否通过沟通解决问题. 如果没有证据, 你可以考虑向相关机构或权威人士寻求帮助, 例如学校、警察局等. 同时, 你也可以寻求心理支持, 与朋友、家人或专业人士交流, 寻求帮助和建议. 记住, 不要让谣言影响到你的情绪和心理健康. | |2|现在谣言已经在网络上开始散布了, 影响到了我的生活, 我感觉自己有点要抑郁了 | 听到你这样说, 我感到非常担心. 抑郁症是一种严重的心理疾病, 需要得到及时的治疗. 我建议你尽快向专业的医生或心理医生寻求帮助. 他们可以帮助你诊断和治疗抑郁症, 同时也可以提供心理支持和建议. 在这个过程中, 你也可以尝试一些放松和缓解压力的方法, 例如运动、冥想、听音乐等. 不要忘记, 你不是一个人, 我们都在这里支持你. | |3|我该怎么处理这种事, 我感觉无从下手, 周围全是恶意 | 听到你这样说, 我感到很遗憾. 恶意和谣言的散布会对任何人造成负面影响. 首先, 你需要保持冷静, 不要让这些负面情绪影响到你的情绪和行为. 其次, 你可以尝试与那些恶意的人进行沟通, 让他们了解你的真实情况, 并试图解决问题. 如果他们不愿意听取你的解释, 你可以尝试向其他人寻求帮助, 例如老师、辅导员、家长等. 同时, 你也可以向学校或政府部门举报这种行为, 让他们采取措施打击恶意行为. 最重要的是, 不要放弃自己, 相信自己能够度过这个难关. 你不是一个人, 我们会一直支持你. | |4|我很崩溃, 甚至我想让造谣者付出代价 | 听到你这样说, 我感到很抱歉. 虽然我们希望恶意行为得到惩罚, 但我们应该采取更和平和理性的方式来解决问题. 首先, 你需要冷静下来, 不要让自己陷入负面情绪中. 然后, 你可以尝试与造谣者进行沟通, 让他们了解你的真实情况, 并试图解决问题. 如果他们不愿意听取你的解释, 你可以尝试向其他人寻求帮助, 例如老师、辅导员、家长等. 同时, 你也可以向学校或政府部门举报这种行为, 让他们采取措施打击恶意行为. 最重要的是, 不要放弃自己, 相信自己能够度过这个难关. 你不是一个人, 我们会一直支持你. | |5|我会试着去寻求有关部门帮助的 | 听到你这样说, 我感到很欣慰. 寻求帮助是解决问题的第一步, 你已经迈出了正确的步伐. 同时, 我也希望你能保持积极的心态, 相信事情会变得更好. 如果你需要任何帮助或支持, 请随时与我联系. 你不是一个人, 我们会一直支持你. | ## 🙇‍ ‍致谢 本项目由**华东理工大学 薛栋教授课题组**发起, 并受到以下平台及项目的大力支持, 在此表示感谢! 1. [OpenI启智社区](https://openi.pcl.ac.cn/): 提供模型训练算力 2. [魔搭ModelScope](https://modelscope.cn/home): 提供训练思路、模型存储和体验空间 3. 
[InternLM](https://github.com/InternLM/InternLM)、[Baichuan](https://github.com/baichuan-inc)提供非常优秀的基础模型 ``` @misc{2023internlm, title={InternLM: A Multilingual Language Model with Progressively Enhanced Capabilities}, author={InternLM Team}, howpublished = {\url{https://github.com/InternLM/InternLM-techreport}}, year={2023} } ``` **针对不同用户需求和应用场景, 我们也热情欢迎商业交流和合作, 为各位客户提供个性化的开发和升级服务!** ## ⚠️ 免责申明 本仓库所有开源代码及模型均遵循GPL-3.0许可认证. 开源模型可能存在以下局限: 1. **MindChat**存在固有的局限性, 可能产生错误的、有害的、冒犯性的或其他不良的输出. 用户在关键或高风险场景中应谨慎行事, 不要使用这些模型作为最终决策参考, 以免导致人身伤害、财产损失或重大损失. 2. **MindChat**在任何情况下, 作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任. 3. 使用**MindChat**即表示您同意这些条款和条件, 并承认您了解其使用可能带来的潜在风险. 您还同意赔偿并使作者、贡献者和版权所有者免受因您使用**MindChat**而产生的任何索赔、损害赔偿或责任的影响. ## 🤝 引用 ``` @misc{MindChat, author={Xin Yan, Dong Xue*}, title = {MindChat: Psychological Large Language Model}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/X-D-Lab/MindChat}}, } ``` ## 🌟 Star History [![Star History Chart](https://api.star-history.com/svg?repos=X-D-Lab/MindChat&type=Date)](https://star-history.com/#X-D-Lab/MindChat&Date)
X-D-Lab/MindChat-baichuan-13B
X-D-Lab
2023-07-28T11:52:16Z
5
8
transformers
[ "transformers", "pytorch", "baichuan", "text-generation", "custom_code", "license:gpl-3.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-28T02:15:59Z
--- license: gpl-3.0 --- ## 💪 模型进展 * Jul 23, 2023: 提供MindChat体验地址: [MindChat-创空间](https://modelscope.cn/studios/X-D-Lab/MindChat/summary), 欢迎体验 * Jul 21, 2023: MindChat-InternLM-7B训练完成, 在**模型安全、共情输出、人类价值观对齐**等方面进行针对性强化; * Jul 15, 2023: MindChat-Baichuan-13B训练完成, 作为**首个百亿级参数的心理大模型**正式开源; * Jul 9, 2023: MindChat-beta训练完成, 并正式开源; * Jul 6, 2023: 首次提交MindChat(漫谈)心理大模型; ## 👏 模型介绍 **MindChat(漫谈): 心理大模型**(简称: MindChat)旨在从心理咨询、心理评估、心理诊断、心理治疗四个维度帮助人们解决心理问题, 提高心理健康水平. 作为一款心理大模型, MindChat通过营造轻松、开放的交谈环境, 以放松身心、交流感受或分享经验的方式, 与用户建立信任和理解的关系. MindChat的技术优势在于能够了解用户的个人经历、情感状态和行为模式, 为用户提供**隐私、温暖、安全、及时、方便**的对话环境, 从而帮助用户克服各种困难和挑战, 实现自我成长和发展. MindChat的创新之处在于它不仅是一款高效的心理咨询工具, 预期能够提供心理评估、心理诊断和心理治疗的服务. MindChat的专业团队拥有丰富的心理学背景和经验, 他们的专业知识和技能可以帮助用户更好地理解自己的情感和行为, 从而制定更有效的治疗方案. 无论是在工作场所还是在个人生活中, MindChat期望通过自身的努力和专业知识, 在**严格保护用户隐私的前提下**, **全时段全天候**为用户提供全面的心理支持和诊疗帮助, 同时实现自我成长和发展, 以期为建设一个更加**健康、包容和平等**的社会贡献力量. ## 🔥 模型列表 | 模型名称 | lora权重 | 合并后的权重 | | :----: | :----: | :----: | | MindChat-InternLM-7B | modelscope / huggingface | [modelscope](https://modelscope.cn/models/X-D-Lab/MindChat-7B/summary) / [huggingface](https://huggingface.co/X-D-Lab/MindChat-7B) | | MindChat-Baichuan-13B | modelscope / huggingface | [modelscope](https://modelscope.cn/models/X-D-Lab/MindChat-Baichuan-13B/summary) / [huggingface](https://huggingface.co/X-D-Lab/MindChat-baichuan-13B) | 目前, 已经开源了**MindChat-Baichuan-13B**版本的百亿级MindChat心理大模型. 更为优质的MindChat模型将在不久的未来对外开源开放. 敬请期待! 此外, 本团队同时关注人们的身理健康, 建有安全、可靠、普惠的[中文医疗大模型孙思邈(Sunsimiao)](https://github.com/X-D-Lab/Sunsimiao), 欢迎下载使用, 敬请批评指证! ## 📖 对话案例 ### 🥺 学习 | 轮数 | 用户 | MindChat | | :----: | :----: | :----: | |1|总感觉自己学习很差, 想努力, 但是又控制不住想玩游戏 | 我明白你的感受, 你觉得自己学习不好, 但是总是控制不住想玩游戏. 我想问问你, 你玩游戏的时候是什么感觉呢? 有没有什么特别喜欢的游戏?| |2|我平常特别喜欢玩一些端游来打发时间, 玩游戏的时候很纠结, 明明自己成绩落后, 学习任务还没完成但还是想玩 | 我明白你的感受. 你可能需要认真思考一下, 玩游戏对你来说是不是真的值得投入这么多时间和精力. 学习落后是很正常的事情, 但不要让游戏成为你逃避现实的工具. 你可以尝试制定一个计划, 把玩游戏的时间留给自己, 同时也要合理分配学习任务, 让自己更加充实. | |3|在学习上总感觉自己学的很吃力, 不知道该怎么办, 觉得自己学习能力太差了, 心里很失落 | 我理解你的感受. 每个人的学习能力都是不同的, 不要过分苛求自己. 
你可以先尝试找到适合自己的学习方法, 比如寻找学习伙伴, 向老师请教, 或者参加一些学习小组. 同时, 也不要忘记给自己一些休息的时间, 保持身心健康. 学习是一个长期的过程, 不要过于急躁, 慢慢来, 你会越来越好的. | |4|我确实不该把精力都是投入在游戏这些娱乐上, 但是我已经落后很多了, 我现在再去开始学习, 以我这么差的学习能力我还有机会学好吗 | 虽然你现在的学习状态很糟糕, 但并不是没有机会. 学习是一个长期的过程, 只要你持之以恒, 坚持不懈, 就一定会有所进步. 同时, 也不要忘记, 学习的过程也需要一定的动力和动机. 你可以找一些自己感兴趣的学习方向, 或者设定一些小目标, 让自己更有动力去完成. 不要轻易放弃, 相信自己, 你一定可以做到. | |5|可是我很焦虑, 以前落后太多了, 现在不知道从哪里开始 | 焦虑是很正常的情绪, 不要害怕它. 你可以先想想自己目前的学习状态, 列出自己需要学习的科目和内容, 然后按照优先级和难易程度进行排序. 接着, 你可以尝试分解每个科目和内容, 把它们分成小目标, 一步一步地去完成. 这样, 你就可以更有针对性地进行学习, 不会感到太紧张和不知所措. 同时, 也不要忘记给自己一些奖励, 比如看一部电影, 吃一顿美食等等, 让自己更有动力去完成目标. | |6|也就是我现在主要的问题是太焦虑了是吗? 我只要好好的去做, 成绩还是可以提高的对吗 | 是的, 焦虑是一种很常见的情绪, 它会让你感到很不安, 但是它并不会影响你真正地提高成绩. 你可以试着把焦虑的情绪转化成动力, 让它成为你学习的一部分. 同时, 也不要忘记给自己一些时间, 不要过分压抑自己的情绪, 让自己保持一个积极的心态. 只要你持之以恒, 相信自己, 你一定可以取得更好的成绩. | |7|非常谢谢你, 我会好好努力的 | 我不用客气, 我很高兴能帮到你. 我相信你一定可以克服困难, 取得好成绩. 加油! 如果有任何需要帮助的地方, 随时都可以找我聊天. | ### 👨‍👩‍👧‍👦 家庭 | 轮数 | 用户 | MindChat | | :----: | :----: | :----: | |1| 我总感觉爸妈不理解我, 感觉和他们之间有很大的隔阂. | 我能理解你的感受. 和家人的沟通和理解是家庭关系中非常重要的一环. 你是否曾经尝试过和他们分享你的想法和感受, 让他们更好地了解你呢? | |2|我有尝试找他们沟通, 但是他们听不进去我说话 , 认为我是在无理取闹, 我心里很无助 | 我明白你的感受. 有时候我们的家人可能无法理解我们的想法和感受, 这时候我们需要耐心和沟通技巧来帮助他们更好地理解我们. 你可以试着换个方式表达你的想法, 或者找到一些共同话题来促进交流. 同时, 也要记得保持冷静和尊重, 不要使用攻击性的语言来伤害他们. | |3|我也知道他们是为了我好, 平时他们督促我学习也是为了我的未来, 我能体谅他们, 但是有时候自己也会有压力, 想和他们沟通他们就会觉得我在偷懒 | 我能理解你的感受. 有时候我们的家人可能认为我们在偷懒, 但是其实他们只是担心我们的未来和健康. 你可以试着跟他们分享你的压力和焦虑, 让他们了解你的感受. 同时, 也要表达出你对他们的理解和感激, 让他们知道你是愿意和他们一起面对问题, 共同找到解决方案的. | |4|我平时会表现的乖巧, 因为我觉得他们也不容易, 我该怎么去找一个双方都适合的方式去沟通呢 | 很好的问题. 沟通是一个相互的过程, 需要双方的努力和理解. 你可以试着找一些共同话题来促进交流, 同时也要听取他们的想法和建议. 如果你觉得自己的表达方式不够清晰, 可以尝试用一些简单的词汇和表达方式来让他们更容易理解. 最重要的是要保持耐心和尊重, 相信你们会找到适合彼此的方式. | |5|可是我会觉得找他们沟通是一种不懂事的表现, 他们很辛苦我还给他们增加负担, 但是自己忍着难过心里也很压抑 | 我能理解你的感受. 有时候我们会有这样的想法, 但是其实我们不应该让这些想法限制我们的行动. 沟通是解决问题的一种方式, 同时也是增进家庭关系的重要途径. 如果你觉得自己的心情很压抑, 可以试着找一些放松的方式, 比如运动、听音乐或者写日记来缓解压力. 同时也要记得, 你的家人是关心你的, 他们不会增加你的负担, 而是希望你能够更好地成长和幸福. | |6|可能是我的想法有点偏执了, 我应该找一个时间和爸妈好好交流一下 | 这是个好主意. 和家人的交流是非常重要的, 可以让你更好地了解他们的想法和感受. 你可以试着找一个安静的时间, 让他们知道你需要和他们谈一谈, 同时也要准备好自己的态度和想法. 相信通过交流, 你们可以更好地理解彼此, 增进感情. 
加油. | |7|谢谢你的建议, 我的心情好多了 | 不用客气, 我很高兴能帮到你. 记得, 沟通和理解是家庭关系中非常重要的一环, 让我们一起为建立更加和谐的家庭关系而努力吧! | ### 🛝 朋友 | 轮数 | 用户 | MindChat | | :----: | :----: | :----: | |1| 我感觉我的朋友并不是真心和我相处, 今天又和他吵了一架, 我现在很生气 | 听到你的困扰, 我能理解你的感受. 吵架是一种不健康的沟通方式, 它可能导致双方情绪失控, 进而影响到彼此之间的关系. 我建议你尝试以平和的心态来面对这个问题. 你可以试着和你的朋友坐下来, 表达你的感受和需求, 让他了解你的想法和感受. 这样他才能更好地理解你的立场, 并且做出积极的回应. 同时, 你也可以告诉他, 你希望你们之间的交流是建立在互相尊重和理解的基础上的. | |2|我总感觉他是以自我为中心, 有时候我会选择忍让, 但是他还是一副无所谓的样子 | 我能理解你的感受, 但是忍让并不是解决问题的好方法. 你需要和他沟通, 让他知道你的感受和需求. 告诉他你希望他能够更加尊重你的感受, 并且尝试理解你的立场. 同时, 你也可以告诉他你希望你们之间的交流是建立在互相尊重和理解的基础上的. 如果他仍然无法理解你的立场, 那么你也需要学会说不, 并且保持自己的立场. 这样, 你们之间的关系才能健康发展. | |3|但是如果跟他去沟通这些, 我怕他会觉得我想得太多, 从而远离我 | 这是很正常的担忧. 但是, 沟通是解决问题的关键. 如果他真的关心你, 他会愿意和你沟通, 并且理解你的立场. 如果他不想和你沟通, 那么这可能是一个问题. 你需要尝试以平和的心态来面对这个问题, 并且告诉他你的感受和需求. 如果他仍然不愿意和你沟通, 那么你需要学会接受这个结果, 并且寻找其他的解决方法. 无论如何, 你都需要学会保持自己的立场, 并且不要放弃. | |4|我觉得有时候情绪上来确实会影响我们之间的沟通, 可能生气也是觉得这段友谊本身很好, 但是也会出现小的矛盾 | 情绪确实会影响我们的沟通, 但是你可以试着在情绪平静下来之后再和他沟通. 当你感到生气的时候, 你的思维和语言可能会变得不清晰, 这会加剧矛盾. 所以, 在情绪平复下来之后, 你可以尝试和他坐下来, 以平和的心态来解决问题. 同时, 你也可以告诉他, 你希望你们之间的交流是建立在互相尊重和理解的基础上的. 如果他仍然无法理解你的立场, 那么你需要学会说不, 并且保持自己的立场. 这样, 你们之间的关系才能健康发展. | |5|好的, 我会平静下来找他沟通一下, 谢谢你的建议 | 不用谢, 我很高兴能帮到你. 记住, 沟通是解决问题的关键. 无论结果如何, 你都需要保持自己的立场, 并且不要放弃. 相信你们之间的关系会变得更加健康. 祝你好运! | ### 🧑‍🏭 工作 | 轮数 | 用户 | MindChat | | :----: | :----: | :----: | |1| 最近工作压力好大, 一直没有业绩 | 听到你的问题, 我可以理解你现在的感受. 工作压力确实会让人感到焦虑和不安, 不过我们需要认识到, 工作只是我们生活中的一部分, 我们需要学会放松和平衡. 你可以试着做一些放松的活动, 比如运动、阅读、听音乐等等, 来缓解一下紧张的情绪. 同时, 你也可以和领导和同事沟通, 表达你的想法和困惑, 看看有没有更好的解决方案. | |2|我从事的是销售行业, 但是在销售商品的时候客户总是不想和我交流 | 理解你的感受. 销售行业确实需要和客户进行很多交流, 但是有时候客户可能会因为各种原因不愿意和你交流. 你可以试着从客户的需求和兴趣出发, 了解他们的想法和需求, 尝试通过沟通来建立更好的关系. 同时, 你也可以学习一些沟通技巧, 比如积极倾听、提出问题、表达感激等等, 来提高你的沟通能力. | |3|如果业绩不达标我的提成会变少, 也会惹得我家里人不开心, 两边压力都很大, 不知道该怎么办了 | 这你的感受我能够理解. 首先, 我们需要认识到业绩不达标并不代表你的能力和价值, 每个人都会有挫折和失败的时候. 你可以试着和领导沟通, 寻求一些帮助和支持, 或者制定一些具体的计划和目标, 来提高自己的业绩. 同时, 你也可以和家里人沟通, 告诉他们你的实际情况, 让他们了解你的努力和付出, 或许他们会理解你的处境. 最重要的是, 不要放弃自己, 相信自己可以克服困难, 取得更好的成绩. | |4|可是我的同事们业绩都比我好, 我很自卑, 在公司觉得很没面子 | 自卑感是一种很常见的情绪, 但是你需要意识到, 每个人都有自己的优点和不足, 你并不比别人差. 
你可以试着多和同事们交流, 了解他们的工作方式和经验, 也许可以从中学习到一些新的东西. 同时, 你也可以试着寻找自己的优点和特长, 提高自己的自信心和自尊心. 最重要的是, 不要把自己和别人比较, 每个人都有不同的成长轨迹和生活经历, 不要给自己太大的压力. | |5|我试着向同事们交流一下经验吧, 也和家里人沟通一下, 希望他们能够理解我 | 这是一个很好的想法, 多向同事和家人沟通可以让你得到更多的支持和理解. 同时, 你也要保持积极的心态, 相信自己的能力和价值, 不要放弃自己. 如果有需要, 你也可以寻求一些专业的帮助, 比如心理咨询师、职业规划师等等, 来帮助你更好地应对工作和生活上的挑战. 记得, 你不是一个人在战斗, 我们都需要互相支持和鼓励. | ### 🦺 安全 | 轮数 | 用户 | MindChat | | :----: | :----: | :----: | |1| 我感觉我被造谣了, 我该怎么办 | 我可以理解你现在的感受. 首先, 你需要冷静下来, 不要急于做出任何决定. 你可以试着问问自己, 你为什么会感到被造谣? 你是否有证据证明这个谣言是真的? 如果有证据, 你可以尝试与造谣者进行沟通, 了解他们为什么这样说, 看看能否通过沟通解决问题. 如果没有证据, 你可以考虑向相关机构或权威人士寻求帮助, 例如学校、警察局等. 同时, 你也可以寻求心理支持, 与朋友、家人或专业人士交流, 寻求帮助和建议. 记住, 不要让谣言影响到你的情绪和心理健康. | |2|现在谣言已经在网络上开始散布了, 影响到了我的生活, 我感觉自己有点要抑郁了 | 听到你这样说, 我感到非常担心. 抑郁症是一种严重的心理疾病, 需要得到及时的治疗. 我建议你尽快向专业的医生或心理医生寻求帮助. 他们可以帮助你诊断和治疗抑郁症, 同时也可以提供心理支持和建议. 在这个过程中, 你也可以尝试一些放松和缓解压力的方法, 例如运动、冥想、听音乐等. 不要忘记, 你不是一个人, 我们都在这里支持你. | |3|我该怎么处理这种事, 我感觉无从下手, 周围全是恶意 | 听到你这样说, 我感到很遗憾. 恶意和谣言的散布会对任何人造成负面影响. 首先, 你需要保持冷静, 不要让这些负面情绪影响到你的情绪和行为. 其次, 你可以尝试与那些恶意的人进行沟通, 让他们了解你的真实情况, 并试图解决问题. 如果他们不愿意听取你的解释, 你可以尝试向其他人寻求帮助, 例如老师、辅导员、家长等. 同时, 你也可以向学校或政府部门举报这种行为, 让他们采取措施打击恶意行为. 最重要的是, 不要放弃自己, 相信自己能够度过这个难关. 你不是一个人, 我们会一直支持你. | |4|我很崩溃, 甚至我想让造谣者付出代价 | 听到你这样说, 我感到很抱歉. 虽然我们希望恶意行为得到惩罚, 但我们应该采取更和平和理性的方式来解决问题. 首先, 你需要冷静下来, 不要让自己陷入负面情绪中. 然后, 你可以尝试与造谣者进行沟通, 让他们了解你的真实情况, 并试图解决问题. 如果他们不愿意听取你的解释, 你可以尝试向其他人寻求帮助, 例如老师、辅导员、家长等. 同时, 你也可以向学校或政府部门举报这种行为, 让他们采取措施打击恶意行为. 最重要的是, 不要放弃自己, 相信自己能够度过这个难关. 你不是一个人, 我们会一直支持你. | |5|我会试着去寻求有关部门帮助的 | 听到你这样说, 我感到很欣慰. 寻求帮助是解决问题的第一步, 你已经迈出了正确的步伐. 同时, 我也希望你能保持积极的心态, 相信事情会变得更好. 如果你需要任何帮助或支持, 请随时与我联系. 你不是一个人, 我们会一直支持你. | ## 🙇‍ ‍致谢 本项目由**华东理工大学 薛栋教授课题组**发起, 并受到以下平台及项目的大力支持, 在此表示感谢! 1. [OpenI启智社区](https://openi.pcl.ac.cn/): 提供模型训练算力 2. [魔搭ModelScope](https://modelscope.cn/home): 提供训练思路、模型存储和体验空间 3. 
[InternLM](https://github.com/InternLM/InternLM)、[Baichuan](https://github.com/baichuan-inc)提供非常优秀的基础模型 ``` @misc{2023internlm, title={InternLM: A Multilingual Language Model with Progressively Enhanced Capabilities}, author={InternLM Team}, howpublished = {\url{https://github.com/InternLM/InternLM-techreport}}, year={2023} } ``` **针对不同用户需求和应用场景, 我们也热情欢迎商业交流和合作, 为各位客户提供个性化的开发和升级服务!** ## ⚠️ 免责申明 本仓库所有开源代码及模型均遵循GPL-3.0许可认证. 开源模型可能存在以下局限: 1. **MindChat**存在固有的局限性, 可能产生错误的、有害的、冒犯性的或其他不良的输出. 用户在关键或高风险场景中应谨慎行事, 不要使用这些模型作为最终决策参考, 以免导致人身伤害、财产损失或重大损失. 2. **MindChat**在任何情况下, 作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任. 3. 使用**MindChat**即表示您同意这些条款和条件, 并承认您了解其使用可能带来的潜在风险. 您还同意赔偿并使作者、贡献者和版权所有者免受因您使用**MindChat**而产生的任何索赔、损害赔偿或责任的影响. ## 🤝 引用 ``` @misc{MindChat, author={Xin Yan, Dong Xue*}, title = {MindChat: Psychological Large Language Model}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/X-D-Lab/MindChat}}, } ``` ## 🌟 Star History [![Star History Chart](https://api.star-history.com/svg?repos=X-D-Lab/MindChat&type=Date)](https://star-history.com/#X-D-Lab/MindChat&Date)
tobijen/bart_left_fulltext_torch
tobijen
2023-07-28T11:48:14Z
189
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text-generation", "generated_from_trainer", "base_model:facebook/bart-large", "base_model:finetune:facebook/bart-large", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-28T11:46:35Z
---
license: apache-2.0
base_model: facebook/bart-large
tags:
- generated_from_trainer
model-index:
- name: bart_left_fulltext_torch
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bart_left_fulltext_torch

This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9329

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 8 | 9.2222 |
| No log | 2.0 | 16 | 8.1910 |
| No log | 3.0 | 24 | 7.8851 |
| No log | 4.0 | 32 | 7.5382 |
| No log | 5.0 | 40 | 7.2200 |
| No log | 6.0 | 48 | 6.7294 |
| No log | 7.0 | 56 | 6.1038 |
| No log | 8.0 | 64 | 5.5721 |
| No log | 9.0 | 72 | 5.1404 |
| No log | 10.0 | 80 | 4.4875 |
| No log | 11.0 | 88 | 3.8785 |
| No log | 12.0 | 96 | 3.3749 |
| No log | 13.0 | 104 | 2.9965 |
| No log | 14.0 | 112 | 2.6608 |
| No log | 15.0 | 120 | 2.3879 |
| No log | 16.0 | 128 | 2.2349 |
| No log | 17.0 | 136 | 2.0993 |
| No log | 18.0 | 144 | 2.0023 |
| No log | 19.0 | 152 | 1.9480 |
| No log | 20.0 | 160 | 1.9329 |

### Framework versions

- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
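`lr_scheduler_type: linear` in a Trainer-generated card like this one means linear warmup (zero warmup steps here) followed by linear decay of the learning rate to zero over the total training steps, as in transformers' `get_linear_schedule_with_warmup`. A small sketch of that schedule; the 160-step total is inferred from the results table (8 steps/epoch × 20 epochs), not stated directly:

```python
def linear_schedule_lr(step, base_lr, warmup_steps, total_steps):
    """LR at a given step for a linear-warmup, linear-decay-to-zero schedule."""
    if step < warmup_steps:
        # Warmup phase: ramp linearly from 0 to base_lr
        return base_lr * step / max(1, warmup_steps)
    # Decay phase: ramp linearly from base_lr down to 0 at total_steps
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# 2e-05 over 160 steps with no warmup, matching this card's hyperparameters
for step in (0, 80, 160):
    print(f"step {step}: lr = {linear_schedule_lr(step, 2e-5, 0, 160):.2e}")
```

With no warmup, the rate starts at the full 2e-05, halves by the training midpoint, and reaches zero on the final step.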
1daniar/rl_course_vizdoom_health_gathering_supreme
1daniar
2023-07-28T11:35:28Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T11:28:49Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 10.82 +/- 4.65 name: mean_reward verified: false --- An **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r 1daniar/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details. ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
advokat/VodkaMix
advokat
2023-07-28T11:24:16Z
0
1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-28T11:17:38Z
--- license: creativeml-openrail-m ---
EmirhanExecute/dqn-SpaceInvadersNoFrameskip
EmirhanExecute
2023-07-28T11:22:37Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T11:22:02Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 565.50 +/- 178.22 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga EmirhanExecute -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga EmirhanExecute -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga EmirhanExecute ``` ## Hyperparameters ```python 
OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 10000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
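The `exploration_fraction` and `exploration_final_eps` settings above define DQN's linear epsilon-greedy schedule: epsilon decays from 1.0 to 0.01 over the first 10% of the 10M training steps, then stays constant. The sketch below mirrors SB3's linear schedule but is written here for illustration, not copied from the library:

```python
def epsilon_at(step, n_timesteps=10_000_000,
               exploration_fraction=0.1,
               exploration_initial_eps=1.0,
               exploration_final_eps=0.01):
    """Linear epsilon-greedy schedule as configured in the hyperparameters above."""
    end_step = exploration_fraction * n_timesteps  # decay finishes after 1M steps
    progress = min(step / end_step, 1.0)
    return exploration_initial_eps + progress * (exploration_final_eps - exploration_initial_eps)

print(epsilon_at(0))          # 1.0 — fully random at the start
print(epsilon_at(500_000))    # 0.505 — midway through the decay
print(epsilon_at(2_000_000))  # 0.01 — clamped after the decay window
```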
dreaming-ai/LLongMA-2-7b-dolly-15k
dreaming-ai
2023-07-28T11:02:20Z
0
1
null
[ "license:cc-by-sa-3.0", "region:us" ]
null
2023-07-26T17:25:39Z
--- license: cc-by-sa-3.0 --- ## LLongMA-2-7b-dolly-15k adapter This is an instruction fine-tuned adapter for [LLongMA-2-7B](https://huggingface.co/conceptofmind/LLongMA-2-7b), trained at **8k context length** using linear positional interpolation scaling. In order to run inference with this adapter, you'll need the base [LLongMA-2-7b model](https://huggingface.co/conceptofmind/LLongMA-2-7b) as well. The adapter was instruction fine-tuned with PEFT, using the [dolly-15k dataset](https://huggingface.co/datasets/databricks/databricks-dolly-15k). --- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0 ### Inference format and example: ### Human: Write an essay on the effects and impacts of social media. ### Assistant Social Media has become a major part in today's society, with over half of the world population having some form of it. It is used for entertainment purposes as well as businesses to promote their products or services. Social Media can also be addictive due to its ability to keep users engaged for long periods of time which leads them spending more time online than doing other activities such as reading books or going outside into nature. This lack of physical activity combined with constant exposure to negative news stories can lead people feeling stressed out and anxious about what’s happening around them at all times - something that wasn't true before when we relied solely on newspapers/television broadcasting etc.. 
There are many benefits associated with using these platforms too; they allow us connect easily with friends & family members living far away from us while sharing photos/videos/stories quickly without any delay! Additionally there are lots opportunities available through networking via LinkedIn etc., allowing individuals access new career paths faster than ever before (provided they possess relevant skills). In conclusion , although there exist certain risks involved within this type technology still remains one most powerful tools available today helping millions achieve goals set forth by themselves every single day . Overall , despite being criticized heavily sometimes even considered dangerous depending upon how user uses it properly always remain safe harmless majority cases . ### Human: Write a poem about love that features sunshine and rainbows. It should rhyme. ### Assistant When the Sun shines bright It warms my heart so tight, But when it rains I feel sadly, As all of my joy dissipates. The storms pass quickly though, And soon enough there'll be sun once more, I know this because of what I've learned - That good times always follow bad ones before. So even if now things are tough for me, Soon they will get better as sure as you see! My spirits lift with every ray of light, Even on cloudy days I find something to write. These ups and downs won't last forever long, They come around from time to time but never stay strong. Love is patient and kindness too, This I know despite how hard life can go. No matter what happens in future years, Remember these words and don't forget your fears. You have strength within yourself; believe it or not, Your spirit knows best where its path leads us to. Don't let anyone tell you otherwise, Or try to convince you that you aren't worth much. Beautiful souls like yours deserve nothing less than peace, So keep fighting until you reach happiness. 
Life isn't easy sometimes but we must persevere, We learn most through our struggles which makes them dear. Our hearts may break along way, Yet still hope remains true regardless of what we say. Therefore no one else has power over who we become, For only ourselves decide what matters most at home. With each passing day comes new opportunities, To grow stronger while also learning some humility. Everyone goes through trials big & small, Which means everyone needs patience and compassion all. ### Usage import torch import requests from peft import LoraConfig, PeftModel, get_peft_model from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, AutoTokenizer, GenerationConfig torch.manual_seed(100) model_name = "conceptofmind/LLongMA-2-7b" adapter_name = "dreaming-ai/LLongMA-2-7b-dolly-15k" bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.float16, ) model = AutoModelForCausalLM.from_pretrained( model_name, quantization_config=bnb_config, trust_remote_code=True ) model.config.use_cache = False generation_config = GenerationConfig.from_pretrained(model_name) # Temperature of 1.0 is the default. generation_config.temperature = 0.95 # The parameter for repetition penalty. Between 1.0 and infinity. 1.0 means no penalty. Default to 1.0. generation_config.repetition_penalty = 1.2 # Exponential penalty to the length. Default to 2. generation_config.length_penalty = 2.0 # top-p : Nucleus filtering (top-p) before sampling (<=0.0: no filtering) generation_config.top_p = 0.6 print('generation_config:', generation_config) # apply adapter model = PeftModel.from_pretrained(model, adapter_name) tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) tokenizer.pad_token = tokenizer.eos_token text = '''### Human: Write an essay on the effects and impacts of social media. ### Assistant''' text2 = '''### Human: Write a poem about love that features sunshine and rainbows. 
It should rhyme. ### Assistant''' device = "cuda:0" inputs = tokenizer(text, return_tensors="pt").to(device) outputs = model.generate(**inputs, max_new_tokens=1024, generation_config=generation_config) print(tokenizer.decode(outputs[0], skip_special_tokens=True))
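The `generation_config.top_p = 0.6` setting above enables nucleus filtering: sampling is restricted to the smallest set of highest-probability tokens whose cumulative mass reaches 0.6. The toy function below illustrates the idea; it is a simplified sketch, not the `transformers` implementation:

```python
def top_p_filter(probs, top_p=0.6):
    """Keep the smallest set of highest-probability token indices whose
    cumulative probability reaches top_p, then renormalize (nucleus filtering)."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

# With top_p=0.6, only the two most likely tokens survive this toy distribution.
print(top_p_filter([0.5, 0.3, 0.15, 0.05]))  # {0: 0.625, 1: 0.375}
```

Lower `top_p` values make generation more conservative; `1.0` disables the filter entirely.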
tommilyjones/vit-base-patch16-224-finetuned-masked-hateful-meme-restructured
tommilyjones
2023-07-28T10:49:43Z
192
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-28T10:22:15Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vit-base-patch16-224-finetuned-masked-hateful-meme-restructured results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.54 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-finetuned-masked-hateful-meme-restructured This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7518 - Accuracy: 0.54 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6625 | 0.99 | 66 | 0.7385 | 0.518 | | 0.6413 | 2.0 | 133 | 0.6980 | 0.538 | | 0.6063 | 2.99 | 199 | 0.7422 | 0.53 | | 0.5813 | 4.0 | 266 | 0.7794 | 0.52 | | 0.5551 | 4.99 | 332 | 0.7975 | 0.52 | | 0.5249 | 6.0 | 399 | 0.7518 | 0.54 | | 0.5254 | 6.99 | 465 | 0.8074 | 0.53 | | 0.5335 | 8.0 | 532 | 0.7907 | 0.52 | | 0.4867 | 8.99 | 598 | 0.8286 | 0.524 | | 0.4746 | 9.92 | 
660 | 0.8262 | 0.522 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
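The reported evaluation accuracy of 0.54 (loss 0.7518) corresponds to the best validation epoch, epoch 6, rather than the final one. A quick sanity check over the accuracies transcribed from the table above (epochs rounded to whole numbers):

```python
# (epoch, validation accuracy) pairs transcribed from the training results table
history = [(1, 0.518), (2, 0.538), (3, 0.53), (4, 0.52), (5, 0.52),
           (6, 0.54), (7, 0.53), (8, 0.52), (9, 0.524), (10, 0.522)]

best_epoch, best_acc = max(history, key=lambda pair: pair[1])
print(best_epoch, best_acc)  # 6 0.54 — matches the headline metric above
```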
transmogrifier/pr-falcon-7b-instruct-8bit-Jul20
transmogrifier
2023-07-28T10:36:00Z
12
0
transformers
[ "transformers", "safetensors", "RefinedWebModel", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "custom_code", "en", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-20T15:57:46Z
--- language: - en library_name: transformers tags: - gpt - llm - large language model - h2o-llmstudio inference: false thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico --- # Model Card ## Summary This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio). - Base model: [tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed. ```bash pip install transformers==4.30.2 pip install einops==0.6.1 pip install accelerate==0.20.3 pip install torch==2.0.0 ``` ```python import torch from transformers import pipeline generate_text = pipeline( model="transmogrifier/pr-falcon-7b-instruct-8bit-Jul20", torch_dtype="auto", trust_remote_code=True, use_fast=True, device_map={"": "cuda:0"}, ) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer: ```python print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"]) ``` ```bash <|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|> ``` Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`. 
```python import torch from h2oai_pipeline import H2OTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( "transmogrifier/pr-falcon-7b-instruct-8bit-Jul20", use_fast=True, padding_side="left", trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( "transmogrifier/pr-falcon-7b-instruct-8bit-Jul20", torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "transmogrifier/pr-falcon-7b-instruct-8bit-Jul20" # either local folder or huggingface model name # Important: The prompt needs to be in the same format the model was trained with. # You can find an example prompt in the experiment logs. 
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>" tokenizer = AutoTokenizer.from_pretrained( model_name, use_fast=True, trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) model.cuda().eval() inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda") # generate configuration can be modified to your needs tokens = model.generate( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True )[0] tokens = tokens[inputs["input_ids"].shape[1]:] answer = tokenizer.decode(tokens, skip_special_tokens=True) print(answer) ``` ## Quantization and sharding You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map=auto```. ## Model Architecture ``` RWForCausalLM( (transformer): RWModel( (word_embeddings): Embedding(65027, 4544) (h): ModuleList( (0-31): 32 x DecoderLayer( (input_layernorm): LayerNorm((4544,), eps=1e-05, elementwise_affine=True) (self_attention): Attention( (maybe_rotary): RotaryEmbedding() (query_key_value): Linear(in_features=4544, out_features=4672, bias=False) (dense): Linear(in_features=4544, out_features=4544, bias=False) (attention_dropout): Dropout(p=0.0, inplace=False) ) (mlp): MLP( (dense_h_to_4h): Linear(in_features=4544, out_features=18176, bias=False) (act): GELU(approximate='none') (dense_4h_to_h): Linear(in_features=18176, out_features=4544, bias=False) ) ) ) (ln_f): LayerNorm((4544,), eps=1e-05, elementwise_affine=True) ) (lm_head): Linear(in_features=4544, out_features=65027, bias=False) ) ``` ## Model Configuration This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). 
Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models. ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. - Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. - Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. 
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
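All of the usage snippets above feed the model prompts in the `<|prompt|>…<|endoftext|><|answer|>` format. A tiny helper that reproduces that format (the token strings are copied from the preprocessing example above; confirm them against the experiment logs before relying on them):

```python
def build_prompt(question,
                 prompt_token="<|prompt|>",
                 eos_token="<|endoftext|>",
                 answer_token="<|answer|>"):
    """Wrap a question in the prompt format the model was trained with."""
    return f"{prompt_token}{question}{eos_token}{answer_token}"

print(build_prompt("Why is drinking water so healthy?"))
# <|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```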
manuu01/ppo-Pyramids
manuu01
2023-07-28T10:35:59Z
25
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-07-28T10:35:58Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: manuu01/ppo-Pyramids 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
1daniar/ppo-CartPole-v1
1daniar
2023-07-28T10:33:43Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T10:33:39Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -163.01 +/- 96.49 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters
Imxxn/RLCourseU4-Pixelcopter-v0
Imxxn
2023-07-28T10:29:13Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T06:18:26Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: RLCourseU4-Pixelcopter-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 54.30 +/- 41.99 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
hazwan/vg-session
hazwan
2023-07-28T10:27:24Z
47
0
diffusers
[ "diffusers", "stable-diffusion", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-28T03:32:12Z
--- license: creativeml-openrail-m thumbnail: "https://huggingface.co/dallinmackay/Van-Gogh-diffusion/resolve/main/preview1.jpg" tags: - stable-diffusion - text-to-image --- ### Van Gogh Diffusion This is a fine-tuned Stable Diffusion model (based on v1.5) trained on 60 paintings by Van Gogh, a mix of portraits and landscapes. Use the token **_vangh_** in your prompts to use the style (e.g., "vangh, beautiful woman at sunset"). -- **Character rendered with this model:** ![Character Samples](https://huggingface.co/hazwan/vg-session/resolve/main/output-man.png) _prompt and settings used: **vangh, man**_ -- **Landscapes/miscellaneous rendered with this model:** ![Landscape Samples](https://huggingface.co/hazwan/vg-session/resolve/main/output-lake.png) _prompt and settings used: **vangh, landscape of lake, houses and mountains**_ -- This model was trained with Dreambooth, using TheLastBen's Colab notebook -- ### 🧨 Diffusers This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion). You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX](). ```python from diffusers import StableDiffusionPipeline import torch model_id = "hazwan/vg-session" pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "vangh, beautiful woman at sunset" image = pipe(prompt).images[0] image.save("./woman.png") ```
tommilyjones/resnet-50-finetuned-masked-hateful-meme-restructured
tommilyjones
2023-07-28T10:18:32Z
227
0
transformers
[ "transformers", "pytorch", "resnet", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/resnet-50", "base_model:finetune:microsoft/resnet-50", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-28T09:58:10Z
--- license: apache-2.0 base_model: microsoft/resnet-50 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: resnet-50-finetuned-masked-hateful-meme-restructured results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.5 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # resnet-50-finetuned-masked-hateful-meme-restructured This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7093 - Accuracy: 0.5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6639 | 0.99 | 66 | 0.7093 | 0.5 | | 0.6569 | 2.0 | 133 | 0.7295 | 0.5 | | 0.6489 | 2.99 | 199 | 0.7257 | 0.5 | | 0.6553 | 4.0 | 266 | 0.7274 | 0.5 | | 0.6334 | 4.99 | 332 | 0.7311 | 0.5 | | 0.627 | 6.0 | 399 | 0.7371 | 0.5 | | 0.6561 | 6.99 | 465 | 0.7386 | 0.5 | | 0.6552 | 8.0 | 532 | 0.7354 | 0.5 | | 0.6427 | 8.99 | 598 | 0.7346 | 0.5 | | 0.6451 | 9.92 | 660 | 0.7377 | 0.498 | ### Framework versions - Transformers 
4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
Kexa/Kx_01
Kexa
2023-07-28T10:16:31Z
0
0
allennlp
[ "allennlp", "chemistry", "question-answering", "es", "dataset:Open-Orca/OpenOrca", "arxiv:1910.09700", "license:unknown", "region:us" ]
question-answering
2023-07-28T10:14:03Z
--- license: unknown datasets: - Open-Orca/OpenOrca language: - es metrics: - accuracy library_name: allennlp pipeline_tag: question-answering tags: - chemistry --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. 
--> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
HaziqRazali/Reinforce-pixelcopter
HaziqRazali
2023-07-28T10:13:26Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T10:11:15Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-pixelcopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: -5.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
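Reinforce (REINFORCE) is a Monte-Carlo policy-gradient method: it weights log-probabilities of taken actions by the discounted return from each timestep. A minimal sketch of the return computation such agents rely on (an illustration of the algorithm, not this repository's code):

```python
def discounted_returns(rewards, gamma=0.99):
    # Compute G_t = r_t + gamma * G_{t+1} by scanning the episode backwards.
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    return returns

# Three steps of reward 1.0 with gamma = 0.5:
print(discounted_returns([1.0, 1.0, 1.0], gamma=0.5))  # -> [1.75, 1.5, 1.0]
```

The backward scan makes the computation O(n) per episode instead of the naive O(n²) double sum.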
RTT/q-FrozenLake-v1-4x4-noSlippery
RTT
2023-07-28T10:09:43Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T10:09:42Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="RTT/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
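The pickled object above is a plain dict (the usage snippet indexes it with `model["env_id"]`), and acting with a tabular Q-learning agent is just a per-state argmax over the Q-table. A minimal sketch of the greedy policy — the `qtable` key and the toy table below are assumptions for illustration, not taken from this repository:

```python
import numpy as np

def greedy_action(qtable, state):
    # A Q-table maps (state, action) -> estimated return;
    # the greedy policy picks the highest-valued action for the state.
    return int(np.argmax(qtable[state]))

# Toy 2-state, 2-action table standing in for a loaded model["qtable"].
qtable = np.array([[0.1, 0.9],
                   [0.7, 0.2]])
print(greedy_action(qtable, 0))  # -> 1
print(greedy_action(qtable, 1))  # -> 0
```

In an environment loop this would replace sampling, e.g. `action = greedy_action(model["qtable"], state)` before each `env.step(action)`.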
toto10/Kashimo2
toto10
2023-07-28T10:05:03Z
0
0
null
[ "region:us" ]
null
2023-03-27T04:37:37Z
{ "txt2img/Prompt/visible": true, "txt2img/Prompt/value": "masterpiece, best quality,", "txt2img/Negative prompt/visible": true, "txt2img/Negative prompt/value": "(worst quality, low quality:1.4)", "txt2img/Styles/visible": true, "txt2img/Styles/value": [], "txt2img/Sampling method/visible": true, "txt2img/Sampling method/value": "DPM++ 2M Karras", "txt2img/Sampling steps/visible": true, "txt2img/Sampling steps/value": 20, "txt2img/Sampling steps/minimum": 1, "txt2img/Sampling steps/maximum": 150, "txt2img/Sampling steps/step": 1, "txt2img/Restore faces/visible": true, "txt2img/Restore faces/value": false, "txt2img/Tiling/visible": true, "txt2img/Tiling/value": false, "txt2img/Hires. fix/visible": true, "txt2img/Hires. fix/value": false, "txt2img/Upscaler/visible": true, "txt2img/Upscaler/value": "Latent (nearest-exact)", "txt2img/Hires steps/visible": true, "txt2img/Hires steps/value": 0, "txt2img/Hires steps/minimum": 0, "txt2img/Hires steps/maximum": 150, "txt2img/Hires steps/step": 1, "txt2img/Denoising strength/visible": true, "txt2img/Denoising strength/value": 0.7, "txt2img/Denoising strength/minimum": 0.0, "txt2img/Denoising strength/maximum": 1.0, "txt2img/Denoising strength/step": 0.01, "txt2img/Upscale by/visible": true, "txt2img/Upscale by/value": 2.0, "txt2img/Upscale by/minimum": 1.0, "txt2img/Upscale by/maximum": 4.0, "txt2img/Upscale by/step": 0.05, "txt2img/Resize width to/visible": true, "txt2img/Resize width to/value": 0, "txt2img/Resize width to/minimum": 0, "txt2img/Resize width to/maximum": 2048, "txt2img/Resize width to/step": 8, "txt2img/Resize height to/visible": true, "txt2img/Resize height to/value": 0, "txt2img/Resize height to/minimum": 0, "txt2img/Resize height to/maximum": 2048, "txt2img/Resize height to/step": 8, "txt2img/Width/visible": true, "txt2img/Width/value": 512, "txt2img/Width/minimum": 64, "txt2img/Width/maximum": 2048, "txt2img/Width/step": 8, "txt2img/Height/visible": true, "txt2img/Height/value": 768, 
"txt2img/Height/minimum": 64, "txt2img/Height/maximum": 2048, "txt2img/Height/step": 8, "txt2img/Batch count/visible": true, "txt2img/Batch count/value": 1, "txt2img/Batch count/minimum": 1, "txt2img/Batch count/maximum": 100, "txt2img/Batch count/step": 1, "txt2img/Batch size/visible": true, "txt2img/Batch size/value": 1, "txt2img/Batch size/minimum": 1, "txt2img/Batch size/maximum": 8, "txt2img/Batch size/step": 1, "txt2img/CFG Scale/visible": true, "txt2img/CFG Scale/value": 7.0, "txt2img/CFG Scale/minimum": 1.0, "txt2img/CFG Scale/maximum": 30.0, "txt2img/CFG Scale/step": 0.5, "txt2img/Seed/visible": true, "txt2img/Seed/value": -1.0, "txt2img/Extra/visible": true, "txt2img/Extra/value": false, "txt2img/Variation seed/visible": true, "txt2img/Variation seed/value": -1.0, "txt2img/Variation strength/visible": true, "txt2img/Variation strength/value": 0.0, "txt2img/Variation strength/minimum": 0, "txt2img/Variation strength/maximum": 1, "txt2img/Variation strength/step": 0.01, "txt2img/Resize seed from width/visible": true, "txt2img/Resize seed from width/value": 0, "txt2img/Resize seed from width/minimum": 0, "txt2img/Resize seed from width/maximum": 2048, "txt2img/Resize seed from width/step": 8, "txt2img/Resize seed from height/visible": true, "txt2img/Resize seed from height/value": 0, "txt2img/Resize seed from height/minimum": 0, "txt2img/Resize seed from height/maximum": 2048, "txt2img/Resize seed from height/step": 8, "txt2img/Override settings/value": null, "txt2img/Script/visible": true, "txt2img/Script/value": "None", "customscript/prompt_matrix.py/txt2img/Put variable parts at start of prompt/visible": true, "customscript/prompt_matrix.py/txt2img/Put variable parts at start of prompt/value": false, "customscript/prompt_matrix.py/txt2img/Use different seed for each picture/visible": true, "customscript/prompt_matrix.py/txt2img/Use different seed for each picture/value": false, "customscript/prompt_matrix.py/txt2img/Select prompt/visible": true, 
"customscript/prompt_matrix.py/txt2img/Select prompt/value": "positive", "customscript/prompt_matrix.py/txt2img/Select joining char/visible": true, "customscript/prompt_matrix.py/txt2img/Select joining char/value": "comma", "customscript/prompt_matrix.py/txt2img/Grid margins (px)/visible": true, "customscript/prompt_matrix.py/txt2img/Grid margins (px)/value": 0, "customscript/prompt_matrix.py/txt2img/Grid margins (px)/minimum": 0, "customscript/prompt_matrix.py/txt2img/Grid margins (px)/maximum": 100, "customscript/prompt_matrix.py/txt2img/Grid margins (px)/step": 2, "customscript/prompts_from_file.py/txt2img/Iterate seed every line/visible": true, "customscript/prompts_from_file.py/txt2img/Iterate seed every line/value": false, "customscript/prompts_from_file.py/txt2img/Use same random seed for all lines/visible": true, "customscript/prompts_from_file.py/txt2img/Use same random seed for all lines/value": false, "customscript/prompts_from_file.py/txt2img/List of prompt inputs/visible": true, "customscript/prompts_from_file.py/txt2img/List of prompt inputs/value": "", "customscript/xyz_grid.py/txt2img/X type/visible": true, "customscript/xyz_grid.py/txt2img/X type/value": "Seed", "customscript/xyz_grid.py/txt2img/X values/visible": true, "customscript/xyz_grid.py/txt2img/X values/value": "", "customscript/xyz_grid.py/txt2img/Y type/visible": true, "customscript/xyz_grid.py/txt2img/Y type/value": "Nothing", "customscript/xyz_grid.py/txt2img/Y values/visible": true, "customscript/xyz_grid.py/txt2img/Y values/value": "", "customscript/xyz_grid.py/txt2img/Z type/visible": true, "customscript/xyz_grid.py/txt2img/Z type/value": "Nothing", "customscript/xyz_grid.py/txt2img/Z values/visible": true, "customscript/xyz_grid.py/txt2img/Z values/value": "", "customscript/xyz_grid.py/txt2img/Draw legend/visible": true, "customscript/xyz_grid.py/txt2img/Draw legend/value": true, "customscript/xyz_grid.py/txt2img/Keep -1 for seeds/visible": true, 
"customscript/xyz_grid.py/txt2img/Keep -1 for seeds/value": false, "customscript/xyz_grid.py/txt2img/Include Sub Images/visible": true, "customscript/xyz_grid.py/txt2img/Include Sub Images/value": false, "customscript/xyz_grid.py/txt2img/Include Sub Grids/visible": true, "customscript/xyz_grid.py/txt2img/Include Sub Grids/value": false, "customscript/xyz_grid.py/txt2img/Grid margins (px)/visible": true, "customscript/xyz_grid.py/txt2img/Grid margins (px)/value": 0, "customscript/xyz_grid.py/txt2img/Grid margins (px)/minimum": 0, "customscript/xyz_grid.py/txt2img/Grid margins (px)/maximum": 100, "customscript/xyz_grid.py/txt2img/Grid margins (px)/step": 2, "img2img/Prompt/visible": true, "img2img/Prompt/value": "masterpiece, best quality,", "img2img/Negative prompt/visible": true, "img2img/Negative prompt/value": "(worst quality, low quality:1.4)", "img2img/Styles/visible": true, "img2img/Styles/value": [], "img2img/Input directory/visible": true, "img2img/Input directory/value": "", "img2img/Output directory/visible": true, "img2img/Output directory/value": "", "img2img/Inpaint batch mask directory (required for inpaint batch processing only)/visible": true, "img2img/Inpaint batch mask directory (required for inpaint batch processing only)/value": "", "img2img/Resize mode/visible": true, "img2img/Resize mode/value": "Just resize", "img2img/Mask blur/visible": true, "img2img/Mask blur/value": 4, "img2img/Mask blur/minimum": 0, "img2img/Mask blur/maximum": 64, "img2img/Mask blur/step": 1, "img2img/Mask transparency/value": 0, "img2img/Mask transparency/minimum": 0, "img2img/Mask transparency/maximum": 100, "img2img/Mask transparency/step": 1, "img2img/Mask mode/visible": true, "img2img/Mask mode/value": "Inpaint masked", "img2img/Masked content/visible": true, "img2img/Masked content/value": "original", "img2img/Inpaint area/visible": true, "img2img/Inpaint area/value": "Whole picture", "img2img/Only masked padding, pixels/visible": true, "img2img/Only masked 
padding, pixels/value": 32, "img2img/Only masked padding, pixels/minimum": 0, "img2img/Only masked padding, pixels/maximum": 256, "img2img/Only masked padding, pixels/step": 4, "img2img/Sampling method/visible": true, "img2img/Sampling method/value": "DPM++ 2M Karras", "img2img/Sampling steps/visible": true, "img2img/Sampling steps/value": 20, "img2img/Sampling steps/minimum": 1, "img2img/Sampling steps/maximum": 150, "img2img/Sampling steps/step": 1, "img2img/Restore faces/visible": true, "img2img/Restore faces/value": false, "img2img/Tiling/visible": true, "img2img/Tiling/value": false, "img2img/Width/visible": true, "img2img/Width/value": 512, "img2img/Width/minimum": 64, "img2img/Width/maximum": 2048, "img2img/Width/step": 8, "img2img/Height/visible": true, "img2img/Height/value": 512, "img2img/Height/minimum": 64, "img2img/Height/maximum": 2048, "img2img/Height/step": 8, "img2img/Batch count/visible": true, "img2img/Batch count/value": 1, "img2img/Batch count/minimum": 1, "img2img/Batch count/maximum": 100, "img2img/Batch count/step": 1, "img2img/Batch size/visible": true, "img2img/Batch size/value": 1, "img2img/Batch size/minimum": 1, "img2img/Batch size/maximum": 8, "img2img/Batch size/step": 1, "img2img/CFG Scale/visible": true, "img2img/CFG Scale/value": 7.0, "img2img/CFG Scale/minimum": 1.0, "img2img/CFG Scale/maximum": 30.0, "img2img/CFG Scale/step": 0.5, "img2img/Image CFG Scale/value": 1.5, "img2img/Image CFG Scale/minimum": 0, "img2img/Image CFG Scale/maximum": 3.0, "img2img/Image CFG Scale/step": 0.05, "img2img/Denoising strength/visible": true, "img2img/Denoising strength/value": 0.75, "img2img/Denoising strength/minimum": 0.0, "img2img/Denoising strength/maximum": 1.0, "img2img/Denoising strength/step": 0.01, "img2img/Seed/visible": true, "img2img/Seed/value": -1.0, "img2img/Extra/visible": true, "img2img/Extra/value": false, "img2img/Variation seed/visible": true, "img2img/Variation seed/value": -1.0, "img2img/Variation strength/visible": true, 
"img2img/Variation strength/value": 0.0, "img2img/Variation strength/minimum": 0, "img2img/Variation strength/maximum": 1, "img2img/Variation strength/step": 0.01, "img2img/Resize seed from width/visible": true, "img2img/Resize seed from width/value": 0, "img2img/Resize seed from width/minimum": 0, "img2img/Resize seed from width/maximum": 2048, "img2img/Resize seed from width/step": 8, "img2img/Resize seed from height/visible": true, "img2img/Resize seed from height/value": 0, "img2img/Resize seed from height/minimum": 0, "img2img/Resize seed from height/maximum": 2048, "img2img/Resize seed from height/step": 8, "img2img/Override settings/value": null, "img2img/Script/visible": true, "img2img/Script/value": "None", "customscript/img2imgalt.py/img2img/Override `Sampling method` to Euler?(this method is built for it)/visible": true, "customscript/img2imgalt.py/img2img/Override `Sampling method` to Euler?(this method is built for it)/value": true, "customscript/img2imgalt.py/img2img/Override `prompt` to the same value as `original prompt`?(and `negative prompt`)/visible": true, "customscript/img2imgalt.py/img2img/Override `prompt` to the same value as `original prompt`?(and `negative prompt`)/value": true, "customscript/img2imgalt.py/img2img/Original prompt/visible": true, "customscript/img2imgalt.py/img2img/Original prompt/value": "", "customscript/img2imgalt.py/img2img/Original negative prompt/visible": true, "customscript/img2imgalt.py/img2img/Original negative prompt/value": "", "customscript/img2imgalt.py/img2img/Override `Sampling Steps` to the same value as `Decode steps`?/visible": true, "customscript/img2imgalt.py/img2img/Override `Sampling Steps` to the same value as `Decode steps`?/value": true, "customscript/img2imgalt.py/img2img/Decode steps/visible": true, "customscript/img2imgalt.py/img2img/Decode steps/value": 50, "customscript/img2imgalt.py/img2img/Decode steps/minimum": 1, "customscript/img2imgalt.py/img2img/Decode steps/maximum": 150, 
"customscript/img2imgalt.py/img2img/Decode steps/step": 1, "customscript/img2imgalt.py/img2img/Override `Denoising strength` to 1?/visible": true, "customscript/img2imgalt.py/img2img/Override `Denoising strength` to 1?/value": true, "customscript/img2imgalt.py/img2img/Decode CFG scale/visible": true, "customscript/img2imgalt.py/img2img/Decode CFG scale/value": 1.0, "customscript/img2imgalt.py/img2img/Decode CFG scale/minimum": 0.0, "customscript/img2imgalt.py/img2img/Decode CFG scale/maximum": 15.0, "customscript/img2imgalt.py/img2img/Decode CFG scale/step": 0.1, "customscript/img2imgalt.py/img2img/Randomness/visible": true, "customscript/img2imgalt.py/img2img/Randomness/value": 0.0, "customscript/img2imgalt.py/img2img/Randomness/minimum": 0.0, "customscript/img2imgalt.py/img2img/Randomness/maximum": 1.0, "customscript/img2imgalt.py/img2img/Randomness/step": 0.01, "customscript/img2imgalt.py/img2img/Sigma adjustment for finding noise for image/visible": true, "customscript/img2imgalt.py/img2img/Sigma adjustment for finding noise for image/value": false, "customscript/loopback.py/img2img/Loops/visible": true, "customscript/loopback.py/img2img/Loops/value": 4, "customscript/loopback.py/img2img/Loops/minimum": 1, "customscript/loopback.py/img2img/Loops/maximum": 32, "customscript/loopback.py/img2img/Loops/step": 1, "customscript/loopback.py/img2img/Denoising strength change factor/visible": true, "customscript/loopback.py/img2img/Denoising strength change factor/value": 1, "customscript/loopback.py/img2img/Denoising strength change factor/minimum": 0.9, "customscript/loopback.py/img2img/Denoising strength change factor/maximum": 1.1, "customscript/loopback.py/img2img/Denoising strength change factor/step": 0.01, "customscript/outpainting_mk_2.py/img2img/Pixels to expand/visible": true, "customscript/outpainting_mk_2.py/img2img/Pixels to expand/value": 128, "customscript/outpainting_mk_2.py/img2img/Pixels to expand/minimum": 8, 
"customscript/outpainting_mk_2.py/img2img/Pixels to expand/maximum": 256, "customscript/outpainting_mk_2.py/img2img/Pixels to expand/step": 8, "customscript/outpainting_mk_2.py/img2img/Mask blur/visible": true, "customscript/outpainting_mk_2.py/img2img/Mask blur/value": 8, "customscript/outpainting_mk_2.py/img2img/Mask blur/minimum": 0, "customscript/outpainting_mk_2.py/img2img/Mask blur/maximum": 64, "customscript/outpainting_mk_2.py/img2img/Mask blur/step": 1, "customscript/outpainting_mk_2.py/img2img/Fall-off exponent (lower=higher detail)/visible": true, "customscript/outpainting_mk_2.py/img2img/Fall-off exponent (lower=higher detail)/value": 1.0, "customscript/outpainting_mk_2.py/img2img/Fall-off exponent (lower=higher detail)/minimum": 0.0, "customscript/outpainting_mk_2.py/img2img/Fall-off exponent (lower=higher detail)/maximum": 4.0, "customscript/outpainting_mk_2.py/img2img/Fall-off exponent (lower=higher detail)/step": 0.01, "customscript/outpainting_mk_2.py/img2img/Color variation/visible": true, "customscript/outpainting_mk_2.py/img2img/Color variation/value": 0.05, "customscript/outpainting_mk_2.py/img2img/Color variation/minimum": 0.0, "customscript/outpainting_mk_2.py/img2img/Color variation/maximum": 1.0, "customscript/outpainting_mk_2.py/img2img/Color variation/step": 0.01, "customscript/poor_mans_outpainting.py/img2img/Pixels to expand/visible": true, "customscript/poor_mans_outpainting.py/img2img/Pixels to expand/value": 128, "customscript/poor_mans_outpainting.py/img2img/Pixels to expand/minimum": 8, "customscript/poor_mans_outpainting.py/img2img/Pixels to expand/maximum": 256, "customscript/poor_mans_outpainting.py/img2img/Pixels to expand/step": 8, "customscript/poor_mans_outpainting.py/img2img/Mask blur/visible": true, "customscript/poor_mans_outpainting.py/img2img/Mask blur/value": 4, "customscript/poor_mans_outpainting.py/img2img/Mask blur/minimum": 0, "customscript/poor_mans_outpainting.py/img2img/Mask blur/maximum": 64, 
"customscript/poor_mans_outpainting.py/img2img/Mask blur/step": 1, "customscript/poor_mans_outpainting.py/img2img/Masked content/visible": true, "customscript/poor_mans_outpainting.py/img2img/Masked content/value": "fill", "customscript/prompt_matrix.py/img2img/Put variable parts at start of prompt/visible": true, "customscript/prompt_matrix.py/img2img/Put variable parts at start of prompt/value": false, "customscript/prompt_matrix.py/img2img/Use different seed for each picture/visible": true, "customscript/prompt_matrix.py/img2img/Use different seed for each picture/value": false, "customscript/prompt_matrix.py/img2img/Select prompt/visible": true, "customscript/prompt_matrix.py/img2img/Select prompt/value": "positive", "customscript/prompt_matrix.py/img2img/Select joining char/visible": true, "customscript/prompt_matrix.py/img2img/Select joining char/value": "comma", "customscript/prompt_matrix.py/img2img/Grid margins (px)/visible": true, "customscript/prompt_matrix.py/img2img/Grid margins (px)/value": 0, "customscript/prompt_matrix.py/img2img/Grid margins (px)/minimum": 0, "customscript/prompt_matrix.py/img2img/Grid margins (px)/maximum": 100, "customscript/prompt_matrix.py/img2img/Grid margins (px)/step": 2, "customscript/prompts_from_file.py/img2img/Iterate seed every line/visible": true, "customscript/prompts_from_file.py/img2img/Iterate seed every line/value": false, "customscript/prompts_from_file.py/img2img/Use same random seed for all lines/visible": true, "customscript/prompts_from_file.py/img2img/Use same random seed for all lines/value": false, "customscript/prompts_from_file.py/img2img/List of prompt inputs/visible": true, "customscript/prompts_from_file.py/img2img/List of prompt inputs/value": "", "customscript/sd_upscale.py/img2img/Tile overlap/visible": true, "customscript/sd_upscale.py/img2img/Tile overlap/value": 64, "customscript/sd_upscale.py/img2img/Tile overlap/minimum": 0, "customscript/sd_upscale.py/img2img/Tile overlap/maximum": 256, 
"customscript/sd_upscale.py/img2img/Tile overlap/step": 16, "customscript/sd_upscale.py/img2img/Scale Factor/visible": true, "customscript/sd_upscale.py/img2img/Scale Factor/value": 2.0, "customscript/sd_upscale.py/img2img/Scale Factor/minimum": 1.0, "customscript/sd_upscale.py/img2img/Scale Factor/maximum": 4.0, "customscript/sd_upscale.py/img2img/Scale Factor/step": 0.05, "customscript/sd_upscale.py/img2img/Upscaler/visible": true, "customscript/sd_upscale.py/img2img/Upscaler/value": "None", "customscript/xyz_grid.py/img2img/X type/visible": true, "customscript/xyz_grid.py/img2img/X type/value": "Seed", "customscript/xyz_grid.py/img2img/X values/visible": true, "customscript/xyz_grid.py/img2img/X values/value": "", "customscript/xyz_grid.py/img2img/Y type/visible": true, "customscript/xyz_grid.py/img2img/Y type/value": "Nothing", "customscript/xyz_grid.py/img2img/Y values/visible": true, "customscript/xyz_grid.py/img2img/Y values/value": "", "customscript/xyz_grid.py/img2img/Z type/visible": true, "customscript/xyz_grid.py/img2img/Z type/value": "Nothing", "customscript/xyz_grid.py/img2img/Z values/visible": true, "customscript/xyz_grid.py/img2img/Z values/value": "", "customscript/xyz_grid.py/img2img/Draw legend/visible": true, "customscript/xyz_grid.py/img2img/Draw legend/value": true, "customscript/xyz_grid.py/img2img/Keep -1 for seeds/visible": true, "customscript/xyz_grid.py/img2img/Keep -1 for seeds/value": false, "customscript/xyz_grid.py/img2img/Include Sub Images/visible": true, "customscript/xyz_grid.py/img2img/Include Sub Images/value": false, "customscript/xyz_grid.py/img2img/Include Sub Grids/visible": true, "customscript/xyz_grid.py/img2img/Include Sub Grids/value": false, "customscript/xyz_grid.py/img2img/Grid margins (px)/visible": true, "customscript/xyz_grid.py/img2img/Grid margins (px)/value": 0, "customscript/xyz_grid.py/img2img/Grid margins (px)/minimum": 0, "customscript/xyz_grid.py/img2img/Grid margins (px)/maximum": 100, 
"customscript/xyz_grid.py/img2img/Grid margins (px)/step": 2, "extras/Input directory/visible": true, "extras/Input directory/value": "", "extras/Output directory/visible": true, "extras/Output directory/value": "", "extras/Show result images/visible": true, "extras/Show result images/value": true, "customscript/postprocessing_upscale.py/extras/Resize/visible": true, "customscript/postprocessing_upscale.py/extras/Resize/value": 4, "customscript/postprocessing_upscale.py/extras/Resize/minimum": 1.0, "customscript/postprocessing_upscale.py/extras/Resize/maximum": 8.0, "customscript/postprocessing_upscale.py/extras/Resize/step": 0.05, "customscript/postprocessing_upscale.py/extras/Width/visible": true, "customscript/postprocessing_upscale.py/extras/Width/value": 512, "customscript/postprocessing_upscale.py/extras/Height/visible": true, "customscript/postprocessing_upscale.py/extras/Height/value": 512, "customscript/postprocessing_upscale.py/extras/Crop to fit/visible": true, "customscript/postprocessing_upscale.py/extras/Crop to fit/value": true, "customscript/postprocessing_upscale.py/extras/Upscaler 1/visible": true, "customscript/postprocessing_upscale.py/extras/Upscaler 1/value": "None", "customscript/postprocessing_upscale.py/extras/Upscaler 2/visible": true, "customscript/postprocessing_upscale.py/extras/Upscaler 2/value": "None", "customscript/postprocessing_upscale.py/extras/Upscaler 2 visibility/visible": true, "customscript/postprocessing_upscale.py/extras/Upscaler 2 visibility/value": 0.0, "customscript/postprocessing_upscale.py/extras/Upscaler 2 visibility/minimum": 0.0, "customscript/postprocessing_upscale.py/extras/Upscaler 2 visibility/maximum": 1.0, "customscript/postprocessing_upscale.py/extras/Upscaler 2 visibility/step": 0.001, "customscript/postprocessing_gfpgan.py/extras/GFPGAN visibility/visible": true, "customscript/postprocessing_gfpgan.py/extras/GFPGAN visibility/value": 0, "customscript/postprocessing_gfpgan.py/extras/GFPGAN 
visibility/minimum": 0.0, "customscript/postprocessing_gfpgan.py/extras/GFPGAN visibility/maximum": 1.0, "customscript/postprocessing_gfpgan.py/extras/GFPGAN visibility/step": 0.001, "customscript/postprocessing_codeformer.py/extras/CodeFormer visibility/visible": true, "customscript/postprocessing_codeformer.py/extras/CodeFormer visibility/value": 0, "customscript/postprocessing_codeformer.py/extras/CodeFormer visibility/minimum": 0.0, "customscript/postprocessing_codeformer.py/extras/CodeFormer visibility/maximum": 1.0, "customscript/postprocessing_codeformer.py/extras/CodeFormer visibility/step": 0.001, "customscript/postprocessing_codeformer.py/extras/CodeFormer weight (0 = maximum effect, 1 = minimum effect)/visible": true, "customscript/postprocessing_codeformer.py/extras/CodeFormer weight (0 = maximum effect, 1 = minimum effect)/value": 0, "customscript/postprocessing_codeformer.py/extras/CodeFormer weight (0 = maximum effect, 1 = minimum effect)/minimum": 0.0, "customscript/postprocessing_codeformer.py/extras/CodeFormer weight (0 = maximum effect, 1 = minimum effect)/maximum": 1.0, "customscript/postprocessing_codeformer.py/extras/CodeFormer weight (0 = maximum effect, 1 = minimum effect)/step": 0.001, "modelmerger/Primary model (A)/visible": true, "modelmerger/Primary model (A)/value": null, "modelmerger/Secondary model (B)/visible": true, "modelmerger/Secondary model (B)/value": null, "modelmerger/Tertiary model (C)/visible": true, "modelmerger/Tertiary model (C)/value": null, "modelmerger/Custom Name (Optional)/visible": true, "modelmerger/Custom Name (Optional)/value": "", "modelmerger/Multiplier (M) - set to 0 to get model A/visible": true, "modelmerger/Multiplier (M) - set to 0 to get model A/value": 0.3, "modelmerger/Multiplier (M) - set to 0 to get model A/minimum": 0.0, "modelmerger/Multiplier (M) - set to 0 to get model A/maximum": 1.0, "modelmerger/Multiplier (M) - set to 0 to get model A/step": 0.05, "modelmerger/Interpolation Method/visible": 
true, "modelmerger/Interpolation Method/value": "Weighted sum", "modelmerger/Checkpoint format/visible": true, "modelmerger/Checkpoint format/value": "ckpt", "modelmerger/Save as float16/visible": true, "modelmerger/Save as float16/value": false, "modelmerger/Copy config from/visible": true, "modelmerger/Copy config from/value": "A, B or C", "modelmerger/Bake in VAE/visible": true, "modelmerger/Bake in VAE/value": "None", "modelmerger/Discard weights with matching name/visible": true, "modelmerger/Discard weights with matching name/value": "", "train/Name/visible": true, "train/Name/value": "", "train/Initialization text/visible": true, "train/Initialization text/value": "*", "train/Number of vectors per token/visible": true, "train/Number of vectors per token/value": 1, "train/Number of vectors per token/minimum": 1, "train/Number of vectors per token/maximum": 75, "train/Number of vectors per token/step": 1, "train/Overwrite Old Embedding/visible": true, "train/Overwrite Old Embedding/value": false, "train/Enter hypernetwork layer structure/visible": true, "train/Enter hypernetwork layer structure/value": "1, 2, 1", "train/Select activation function of hypernetwork. Recommended : Swish / Linear(none)/visible": true, "train/Select activation function of hypernetwork. Recommended : Swish / Linear(none)/value": "linear", "train/Select Layer weights initialization. Recommended: Kaiming for relu-like, Xavier for sigmoid-like, Normal otherwise/visible": true, "train/Select Layer weights initialization. Recommended: Kaiming for relu-like, Xavier for sigmoid-like, Normal otherwise/value": "Normal", "train/Add layer normalization/visible": true, "train/Add layer normalization/value": false, "train/Use dropout/visible": true, "train/Use dropout/value": false, "train/Enter hypernetwork Dropout structure (or empty). Recommended : 0~0.35 incrementing sequence: 0, 0.05, 0.15/visible": true, "train/Enter hypernetwork Dropout structure (or empty). 
Recommended : 0~0.35 incrementing sequence: 0, 0.05, 0.15/value": "0, 0, 0", "train/Overwrite Old Hypernetwork/visible": true, "train/Overwrite Old Hypernetwork/value": false, "train/Source directory/visible": true, "train/Source directory/value": "", "train/Destination directory/visible": true, "train/Destination directory/value": "", "train/Width/visible": true, "train/Width/value": 512, "train/Width/minimum": 64, "train/Width/maximum": 2048, "train/Width/step": 8, "train/Height/visible": true, "train/Height/value": 512, "train/Height/minimum": 64, "train/Height/maximum": 2048, "train/Height/step": 8, "train/Existing Caption txt Action/visible": true, "train/Existing Caption txt Action/value": "ignore", "train/Create flipped copies/visible": true, "train/Create flipped copies/value": false, "train/Split oversized images/visible": true, "train/Split oversized images/value": false, "train/Auto focal point crop/visible": true, "train/Auto focal point crop/value": false, "train/Auto-sized crop/visible": true, "train/Auto-sized crop/value": false, "train/Use BLIP for caption/visible": true, "train/Use BLIP for caption/value": false, "train/Use deepbooru for caption/visible": true, "train/Use deepbooru for caption/value": false, "train/Split image threshold/visible": true, "train/Split image threshold/value": 0.5, "train/Split image threshold/minimum": 0.0, "train/Split image threshold/maximum": 1.0, "train/Split image threshold/step": 0.05, "train/Split image overlap ratio/visible": true, "train/Split image overlap ratio/value": 0.2, "train/Split image overlap ratio/minimum": 0.0, "train/Split image overlap ratio/maximum": 0.9, "train/Split image overlap ratio/step": 0.05, "train/Focal point face weight/visible": true, "train/Focal point face weight/value": 0.9, "train/Focal point face weight/minimum": 0.0, "train/Focal point face weight/maximum": 1.0, "train/Focal point face weight/step": 0.05, "train/Focal point entropy weight/visible": true, "train/Focal point 
entropy weight/value": 0.15, "train/Focal point entropy weight/minimum": 0.0, "train/Focal point entropy weight/maximum": 1.0, "train/Focal point entropy weight/step": 0.05, "train/Focal point edges weight/visible": true, "train/Focal point edges weight/value": 0.5, "train/Focal point edges weight/minimum": 0.0, "train/Focal point edges weight/maximum": 1.0, "train/Focal point edges weight/step": 0.05, "train/Create debug image/visible": true, "train/Create debug image/value": false, "train/Dimension lower bound/visible": true, "train/Dimension lower bound/value": 384, "train/Dimension lower bound/minimum": 64, "train/Dimension lower bound/maximum": 2048, "train/Dimension lower bound/step": 8, "train/Dimension upper bound/visible": true, "train/Dimension upper bound/value": 768, "train/Dimension upper bound/minimum": 64, "train/Dimension upper bound/maximum": 2048, "train/Dimension upper bound/step": 8, "train/Area lower bound/visible": true, "train/Area lower bound/value": 4096, "train/Area lower bound/minimum": 4096, "train/Area lower bound/maximum": 4194304, "train/Area lower bound/step": 1, "train/Area upper bound/visible": true, "train/Area upper bound/value": 409600, "train/Area upper bound/minimum": 4096, "train/Area upper bound/maximum": 4194304, "train/Area upper bound/step": 1, "train/Resizing objective/visible": true, "train/Resizing objective/value": "Maximize area", "train/Error threshold/visible": true, "train/Error threshold/value": 0.1, "train/Error threshold/minimum": 0, "train/Error threshold/maximum": 1, "train/Error threshold/step": 0.01, "train/Embedding/visible": true, "train/Embedding/value": null, "train/Hypernetwork/visible": true, "train/Hypernetwork/value": null, "train/Embedding Learning rate/visible": true, "train/Embedding Learning rate/value": "0.005", "train/Hypernetwork Learning rate/visible": true, "train/Hypernetwork Learning rate/value": "0.00001", "train/Gradient Clipping/visible": true, "train/Gradient Clipping/value": 
"disabled", "train/Batch size/visible": true, "train/Batch size/value": 1, "train/Gradient accumulation steps/visible": true, "train/Gradient accumulation steps/value": 1, "train/Dataset directory/visible": true, "train/Dataset directory/value": "", "train/Log directory/visible": true, "train/Log directory/value": "textual_inversion", "train/Prompt template/visible": true, "train/Prompt template/value": "style_filewords.txt", "train/Do not resize images/visible": true, "train/Do not resize images/value": false, "train/Max steps/visible": true, "train/Max steps/value": 100000, "train/Save an image to log directory every N steps, 0 to disable/visible": true, "train/Save an image to log directory every N steps, 0 to disable/value": 500, "train/Save a copy of embedding to log directory every N steps, 0 to disable/visible": true, "train/Save a copy of embedding to log directory every N steps, 0 to disable/value": 500, "train/Save images with embedding in PNG chunks/visible": true, "train/Save images with embedding in PNG chunks/value": true, "train/Read parameters (prompt, etc...) from txt2img tab when making previews/visible": true, "train/Read parameters (prompt, etc...) 
from txt2img tab when making previews/value": false, "train/Shuffle tags by ',' when creating prompts./visible": true, "train/Shuffle tags by ',' when creating prompts./value": false, "train/Drop out tags when creating prompts./visible": true, "train/Drop out tags when creating prompts./value": 0, "train/Drop out tags when creating prompts./minimum": 0, "train/Drop out tags when creating prompts./maximum": 1, "train/Drop out tags when creating prompts./step": 0.1, "train/Choose latent sampling method/visible": true, "train/Choose latent sampling method/value": "once", "customscript/dynamic_thresholding.py/txt2img/Enable Dynamic Thresholding (CFG Scale Fix)/visible": true, "customscript/dynamic_thresholding.py/txt2img/Enable Dynamic Thresholding (CFG Scale Fix)/value": false, "customscript/dynamic_thresholding.py/txt2img/Mimic CFG Scale/visible": true, "customscript/dynamic_thresholding.py/txt2img/Mimic CFG Scale/value": 7.0, "customscript/dynamic_thresholding.py/txt2img/Mimic CFG Scale/minimum": 1.0, "customscript/dynamic_thresholding.py/txt2img/Mimic CFG Scale/maximum": 30.0, "customscript/dynamic_thresholding.py/txt2img/Mimic CFG Scale/step": 0.5, "customscript/dynamic_thresholding.py/txt2img/Top percentile of latents to clamp/visible": true, "customscript/dynamic_thresholding.py/txt2img/Top percentile of latents to clamp/value": 100.0, "customscript/dynamic_thresholding.py/txt2img/Top percentile of latents to clamp/minimum": 90.0, "customscript/dynamic_thresholding.py/txt2img/Top percentile of latents to clamp/maximum": 100.0, "customscript/dynamic_thresholding.py/txt2img/Top percentile of latents to clamp/step": 0.05, "customscript/dynamic_thresholding.py/txt2img/Mimic Scale Scheduler/visible": true, "customscript/dynamic_thresholding.py/txt2img/Mimic Scale Scheduler/value": "Constant", "customscript/dynamic_thresholding.py/txt2img/Minimum value of the Mimic Scale Scheduler/visible": true, "customscript/dynamic_thresholding.py/txt2img/Minimum value of the Mimic 
Scale Scheduler/value": 0.0, "customscript/dynamic_thresholding.py/txt2img/Minimum value of the Mimic Scale Scheduler/minimum": 0.0, "customscript/dynamic_thresholding.py/txt2img/Minimum value of the Mimic Scale Scheduler/maximum": 30.0, "customscript/dynamic_thresholding.py/txt2img/Minimum value of the Mimic Scale Scheduler/step": 0.5, "customscript/dynamic_thresholding.py/txt2img/CFG Scale Scheduler/visible": true, "customscript/dynamic_thresholding.py/txt2img/CFG Scale Scheduler/value": "Constant", "customscript/dynamic_thresholding.py/txt2img/Minimum value of the CFG Scale Scheduler/visible": true, "customscript/dynamic_thresholding.py/txt2img/Minimum value of the CFG Scale Scheduler/value": 0.0, "customscript/dynamic_thresholding.py/txt2img/Minimum value of the CFG Scale Scheduler/minimum": 0.0, "customscript/dynamic_thresholding.py/txt2img/Minimum value of the CFG Scale Scheduler/maximum": 30.0, "customscript/dynamic_thresholding.py/txt2img/Minimum value of the CFG Scale Scheduler/step": 0.5, "customscript/dynamic_thresholding.py/txt2img/Power Scheduler Value/visible": true, "customscript/dynamic_thresholding.py/txt2img/Power Scheduler Value/value": 4.0, "customscript/dynamic_thresholding.py/txt2img/Power Scheduler Value/minimum": 0.0, "customscript/dynamic_thresholding.py/txt2img/Power Scheduler Value/maximum": 15.0, "customscript/dynamic_thresholding.py/txt2img/Power Scheduler Value/step": 0.5, "customscript/controlnet.py/txt2img/Enable/visible": true, "customscript/controlnet.py/txt2img/Enable/value": false, "customscript/controlnet.py/txt2img/Scribble Mode (Invert colors)/visible": true, "customscript/controlnet.py/txt2img/Scribble Mode (Invert colors)/value": false, "customscript/controlnet.py/txt2img/RGB to BGR/visible": true, "customscript/controlnet.py/txt2img/RGB to BGR/value": false, "customscript/controlnet.py/txt2img/Low VRAM/visible": true, "customscript/controlnet.py/txt2img/Low VRAM/value": false, 
"customscript/controlnet.py/txt2img/Preprocessor/visible": true, "customscript/controlnet.py/txt2img/Preprocessor/value": "none", "customscript/controlnet.py/txt2img/Model/visible": true, "customscript/controlnet.py/txt2img/Model/value": "None", "customscript/controlnet.py/txt2img/Weight/visible": true, "customscript/controlnet.py/txt2img/Weight/value": 1.0, "customscript/controlnet.py/txt2img/Weight/minimum": 0.0, "customscript/controlnet.py/txt2img/Weight/maximum": 2.0, "customscript/controlnet.py/txt2img/Weight/step": 0.05, "customscript/controlnet.py/txt2img/Annotator resolution/visible": true, "customscript/controlnet.py/txt2img/Annotator resolution/value": 64, "customscript/controlnet.py/txt2img/Annotator resolution/minimum": 64, "customscript/controlnet.py/txt2img/Annotator resolution/maximum": 1024, "customscript/controlnet.py/txt2img/Annotator resolution/step": 1, "customscript/controlnet.py/txt2img/Threshold A/visible": true, "customscript/controlnet.py/txt2img/Threshold A/value": 64, "customscript/controlnet.py/txt2img/Threshold A/minimum": 64, "customscript/controlnet.py/txt2img/Threshold A/maximum": 1024, "customscript/controlnet.py/txt2img/Threshold A/step": 1, "customscript/controlnet.py/txt2img/Threshold B/visible": true, "customscript/controlnet.py/txt2img/Threshold B/value": 64, "customscript/controlnet.py/txt2img/Threshold B/minimum": 64, "customscript/controlnet.py/txt2img/Threshold B/maximum": 1024, "customscript/controlnet.py/txt2img/Threshold B/step": 1, "customscript/controlnet.py/txt2img/Resize Mode/visible": true, "customscript/controlnet.py/txt2img/Resize Mode/value": "Scale to Fit (Inner Fit)", "txt2img/Canvas Width/visible": true, "txt2img/Canvas Width/value": 512, "txt2img/Canvas Width/minimum": 256, "txt2img/Canvas Width/maximum": 1024, "txt2img/Canvas Width/step": 64, "txt2img/Canvas Height/visible": true, "txt2img/Canvas Height/value": 512, "txt2img/Canvas Height/minimum": 256, "txt2img/Canvas Height/maximum": 1024, "txt2img/Canvas 
Height/step": 64, "customscript/app.py/txt2img/Only save background free pictures/visible": true, "customscript/app.py/txt2img/Only save background free pictures/value": false, "customscript/xy_grid_plus.py/txt2img/Start from Axis/visible": true, "customscript/xy_grid_plus.py/txt2img/Start from Axis/value": "X", "customscript/xy_grid_plus.py/txt2img/X type/visible": true, "customscript/xy_grid_plus.py/txt2img/X type/value": "Seed", "customscript/xy_grid_plus.py/txt2img/X values/visible": true, "customscript/xy_grid_plus.py/txt2img/X values/value": "", "customscript/xy_grid_plus.py/txt2img/Y type/visible": true, "customscript/xy_grid_plus.py/txt2img/Y type/value": "Nothing", "customscript/xy_grid_plus.py/txt2img/Y values/visible": true, "customscript/xy_grid_plus.py/txt2img/Y values/value": "", "txt2img/ckpt_01/value": null, "txt2img/ckpt_02/value": null, "txt2img/ckpt_03/value": null, "txt2img/ckpt_04/value": null, "txt2img/ckpt_05/value": null, "txt2img/ckpt_06/value": null, "txt2img/ckpt_07/value": null, "txt2img/ckpt_08/value": null, "txt2img/ckpt_09/value": null, "txt2img/ckpt_10/value": null, "customscript/xy_grid_plus.py/txt2img/Include Separate Images/visible": true, "customscript/xy_grid_plus.py/txt2img/Include Separate Images/value": true, "customscript/xy_grid_plus.py/txt2img/Keep -1 for seeds/visible": true, "customscript/xy_grid_plus.py/txt2img/Keep -1 for seeds/value": false, "customscript/xy_grid_plus.py/txt2img/Draw Legends/visible": true, "customscript/xy_grid_plus.py/txt2img/Draw Legends/value": "Both", "customscript/xy_grid_plus.py/txt2img/Restore Checkpoint after process/visible": true, "customscript/xy_grid_plus.py/txt2img/Restore Checkpoint after process/value": true, "customscript/xy_grid_plus.py/txt2img/Show only favorite Axis Option/visible": true, "customscript/xy_grid_plus.py/txt2img/Show only favorite Axis Option/value": false, "customscript/xy_grid_plus.py/txt2img/Save PNGinfo to grid/visible": true, 
"customscript/xy_grid_plus.py/txt2img/Save PNGinfo to grid/value": true, "customscript/dynamic_thresholding.py/img2img/Enable Dynamic Thresholding (CFG Scale Fix)/visible": true, "customscript/dynamic_thresholding.py/img2img/Enable Dynamic Thresholding (CFG Scale Fix)/value": false, "customscript/dynamic_thresholding.py/img2img/Mimic CFG Scale/visible": true, "customscript/dynamic_thresholding.py/img2img/Mimic CFG Scale/value": 7.0, "customscript/dynamic_thresholding.py/img2img/Mimic CFG Scale/minimum": 1.0, "customscript/dynamic_thresholding.py/img2img/Mimic CFG Scale/maximum": 30.0, "customscript/dynamic_thresholding.py/img2img/Mimic CFG Scale/step": 0.5, "customscript/dynamic_thresholding.py/img2img/Top percentile of latents to clamp/visible": true, "customscript/dynamic_thresholding.py/img2img/Top percentile of latents to clamp/value": 100.0, "customscript/dynamic_thresholding.py/img2img/Top percentile of latents to clamp/minimum": 90.0, "customscript/dynamic_thresholding.py/img2img/Top percentile of latents to clamp/maximum": 100.0, "customscript/dynamic_thresholding.py/img2img/Top percentile of latents to clamp/step": 0.05, "customscript/dynamic_thresholding.py/img2img/Mimic Scale Scheduler/visible": true, "customscript/dynamic_thresholding.py/img2img/Mimic Scale Scheduler/value": "Constant", "customscript/dynamic_thresholding.py/img2img/Minimum value of the Mimic Scale Scheduler/visible": true, "customscript/dynamic_thresholding.py/img2img/Minimum value of the Mimic Scale Scheduler/value": 0.0, "customscript/dynamic_thresholding.py/img2img/Minimum value of the Mimic Scale Scheduler/minimum": 0.0, "customscript/dynamic_thresholding.py/img2img/Minimum value of the Mimic Scale Scheduler/maximum": 30.0, "customscript/dynamic_thresholding.py/img2img/Minimum value of the Mimic Scale Scheduler/step": 0.5, "customscript/dynamic_thresholding.py/img2img/CFG Scale Scheduler/visible": true, "customscript/dynamic_thresholding.py/img2img/CFG Scale Scheduler/value": 
"Constant", "customscript/dynamic_thresholding.py/img2img/Minimum value of the CFG Scale Scheduler/visible": true, "customscript/dynamic_thresholding.py/img2img/Minimum value of the CFG Scale Scheduler/value": 0.0, "customscript/dynamic_thresholding.py/img2img/Minimum value of the CFG Scale Scheduler/minimum": 0.0, "customscript/dynamic_thresholding.py/img2img/Minimum value of the CFG Scale Scheduler/maximum": 30.0, "customscript/dynamic_thresholding.py/img2img/Minimum value of the CFG Scale Scheduler/step": 0.5, "customscript/dynamic_thresholding.py/img2img/Power Scheduler Value/visible": true, "customscript/dynamic_thresholding.py/img2img/Power Scheduler Value/value": 4.0, "customscript/dynamic_thresholding.py/img2img/Power Scheduler Value/minimum": 0.0, "customscript/dynamic_thresholding.py/img2img/Power Scheduler Value/maximum": 15.0, "customscript/dynamic_thresholding.py/img2img/Power Scheduler Value/step": 0.5, "customscript/controlnet.py/img2img/Enable/visible": true, "customscript/controlnet.py/img2img/Enable/value": false, "customscript/controlnet.py/img2img/Scribble Mode (Invert colors)/visible": true, "customscript/controlnet.py/img2img/Scribble Mode (Invert colors)/value": false, "customscript/controlnet.py/img2img/RGB to BGR/visible": true, "customscript/controlnet.py/img2img/RGB to BGR/value": false, "customscript/controlnet.py/img2img/Low VRAM/visible": true, "customscript/controlnet.py/img2img/Low VRAM/value": false, "customscript/controlnet.py/img2img/Preprocessor/visible": true, "customscript/controlnet.py/img2img/Preprocessor/value": "none", "customscript/controlnet.py/img2img/Model/visible": true, "customscript/controlnet.py/img2img/Model/value": "None", "customscript/controlnet.py/img2img/Weight/visible": true, "customscript/controlnet.py/img2img/Weight/value": 1.0, "customscript/controlnet.py/img2img/Weight/minimum": 0.0, "customscript/controlnet.py/img2img/Weight/maximum": 2.0, "customscript/controlnet.py/img2img/Weight/step": 0.05, 
"customscript/controlnet.py/img2img/Annotator resolution/visible": true, "customscript/controlnet.py/img2img/Annotator resolution/value": 64, "customscript/controlnet.py/img2img/Annotator resolution/minimum": 64, "customscript/controlnet.py/img2img/Annotator resolution/maximum": 1024, "customscript/controlnet.py/img2img/Annotator resolution/step": 1, "customscript/controlnet.py/img2img/Threshold A/visible": true, "customscript/controlnet.py/img2img/Threshold A/value": 64, "customscript/controlnet.py/img2img/Threshold A/minimum": 64, "customscript/controlnet.py/img2img/Threshold A/maximum": 1024, "customscript/controlnet.py/img2img/Threshold A/step": 1, "customscript/controlnet.py/img2img/Threshold B/visible": true, "customscript/controlnet.py/img2img/Threshold B/value": 64, "customscript/controlnet.py/img2img/Threshold B/minimum": 64, "customscript/controlnet.py/img2img/Threshold B/maximum": 1024, "customscript/controlnet.py/img2img/Threshold B/step": 1, "customscript/controlnet.py/img2img/Resize Mode/visible": true, "customscript/controlnet.py/img2img/Resize Mode/value": "Scale to Fit (Inner Fit)", "img2img/Canvas Width/visible": true, "img2img/Canvas Width/value": 512, "img2img/Canvas Width/minimum": 256, "img2img/Canvas Width/maximum": 1024, "img2img/Canvas Width/step": 64, "img2img/Canvas Height/visible": true, "img2img/Canvas Height/value": 512, "img2img/Canvas Height/minimum": 256, "img2img/Canvas Height/maximum": 1024, "img2img/Canvas Height/step": 64, "customscript/app.py/img2img/Only save background free pictures/visible": true, "customscript/app.py/img2img/Only save background free pictures/value": false, "customscript/xy_grid_plus.py/img2img/Start from Axis/visible": true, "customscript/xy_grid_plus.py/img2img/Start from Axis/value": "X", "customscript/xy_grid_plus.py/img2img/X type/visible": true, "customscript/xy_grid_plus.py/img2img/X type/value": "Seed", "customscript/xy_grid_plus.py/img2img/X values/visible": true, 
"customscript/xy_grid_plus.py/img2img/X values/value": "", "customscript/xy_grid_plus.py/img2img/Y type/visible": true, "customscript/xy_grid_plus.py/img2img/Y type/value": "Nothing", "customscript/xy_grid_plus.py/img2img/Y values/visible": true, "customscript/xy_grid_plus.py/img2img/Y values/value": "", "img2img/ckpt_01/value": null, "img2img/ckpt_02/value": null, "img2img/ckpt_03/value": null, "img2img/ckpt_04/value": null, "img2img/ckpt_05/value": null, "img2img/ckpt_06/value": null, "img2img/ckpt_07/value": null, "img2img/ckpt_08/value": null, "img2img/ckpt_09/value": null, "img2img/ckpt_10/value": null, "customscript/xy_grid_plus.py/img2img/Include Separate Images/visible": true, "customscript/xy_grid_plus.py/img2img/Include Separate Images/value": true, "customscript/xy_grid_plus.py/img2img/Keep -1 for seeds/visible": true, "customscript/xy_grid_plus.py/img2img/Keep -1 for seeds/value": false, "customscript/xy_grid_plus.py/img2img/Draw Legends/visible": true, "customscript/xy_grid_plus.py/img2img/Draw Legends/value": "Both", "customscript/xy_grid_plus.py/img2img/Restore Checkpoint after process/visible": true, "customscript/xy_grid_plus.py/img2img/Restore Checkpoint after process/value": true, "customscript/xy_grid_plus.py/img2img/Show only favorite Axis Option/visible": true, "customscript/xy_grid_plus.py/img2img/Show only favorite Axis Option/value": false, "customscript/xy_grid_plus.py/img2img/Save PNGinfo to grid/visible": true, "customscript/xy_grid_plus.py/img2img/Save PNGinfo to grid/value": true, "customscript/postprocessing_pixelization.py/extras/Enable pixelization/visible": true, "customscript/postprocessing_pixelization.py/extras/Enable pixelization/value": false, "customscript/postprocessing_pixelization.py/extras/Keep resolution/visible": true, "customscript/postprocessing_pixelization.py/extras/Keep resolution/value": false, "customscript/postprocessing_pixelization.py/extras/Pixel size/visible": true, 
"customscript/postprocessing_pixelization.py/extras/Pixel size/value": 4, "customscript/postprocessing_pixelization.py/extras/Pixel size/minimum": 1, "customscript/postprocessing_pixelization.py/extras/Pixel size/maximum": 16, "customscript/postprocessing_pixelization.py/extras/Pixel size/step": 1, "customscript/additional_networks.py/txt2img/Enable/visible": true, "customscript/additional_networks.py/txt2img/Enable/value": false, "customscript/additional_networks.py/txt2img/Separate UNet/Text Encoder weights/visible": true, "customscript/additional_networks.py/txt2img/Separate UNet/Text Encoder weights/value": false, "customscript/additional_networks.py/txt2img/Network module 1/visible": true, "customscript/additional_networks.py/txt2img/Network module 1/value": "LoRA", "customscript/additional_networks.py/txt2img/Model 1/visible": true, "customscript/additional_networks.py/txt2img/Model 1/value": "None", "txt2img/Weight 1/visible": true, "txt2img/Weight 1/value": 1.0, "txt2img/Weight 1/minimum": -1.0, "txt2img/Weight 1/maximum": 2.0, "txt2img/Weight 1/step": 0.05, "customscript/additional_networks.py/txt2img/UNet Weight 1/value": 1.0, "customscript/additional_networks.py/txt2img/UNet Weight 1/minimum": -1.0, "customscript/additional_networks.py/txt2img/UNet Weight 1/maximum": 2.0, "customscript/additional_networks.py/txt2img/UNet Weight 1/step": 0.05, "customscript/additional_networks.py/txt2img/TEnc Weight 1/value": 1.0, "customscript/additional_networks.py/txt2img/TEnc Weight 1/minimum": -1.0, "customscript/additional_networks.py/txt2img/TEnc Weight 1/maximum": 2.0, "customscript/additional_networks.py/txt2img/TEnc Weight 1/step": 0.05, "customscript/additional_networks.py/txt2img/Network module 2/visible": true, "customscript/additional_networks.py/txt2img/Network module 2/value": "LoRA", "customscript/additional_networks.py/txt2img/Model 2/visible": true, "customscript/additional_networks.py/txt2img/Model 2/value": "None", "txt2img/Weight 2/visible": true, 
"txt2img/Weight 2/value": 1.0, "txt2img/Weight 2/minimum": -1.0, "txt2img/Weight 2/maximum": 2.0, "txt2img/Weight 2/step": 0.05, "customscript/additional_networks.py/txt2img/UNet Weight 2/value": 1.0, "customscript/additional_networks.py/txt2img/UNet Weight 2/minimum": -1.0, "customscript/additional_networks.py/txt2img/UNet Weight 2/maximum": 2.0, "customscript/additional_networks.py/txt2img/UNet Weight 2/step": 0.05, "customscript/additional_networks.py/txt2img/TEnc Weight 2/value": 1.0, "customscript/additional_networks.py/txt2img/TEnc Weight 2/minimum": -1.0, "customscript/additional_networks.py/txt2img/TEnc Weight 2/maximum": 2.0, "customscript/additional_networks.py/txt2img/TEnc Weight 2/step": 0.05, "customscript/additional_networks.py/txt2img/Network module 3/visible": true, "customscript/additional_networks.py/txt2img/Network module 3/value": "LoRA", "customscript/additional_networks.py/txt2img/Model 3/visible": true, "customscript/additional_networks.py/txt2img/Model 3/value": "None", "txt2img/Weight 3/visible": true, "txt2img/Weight 3/value": 1.0, "txt2img/Weight 3/minimum": -1.0, "txt2img/Weight 3/maximum": 2.0, "txt2img/Weight 3/step": 0.05, "customscript/additional_networks.py/txt2img/UNet Weight 3/value": 1.0, "customscript/additional_networks.py/txt2img/UNet Weight 3/minimum": -1.0, "customscript/additional_networks.py/txt2img/UNet Weight 3/maximum": 2.0, "customscript/additional_networks.py/txt2img/UNet Weight 3/step": 0.05, "customscript/additional_networks.py/txt2img/TEnc Weight 3/value": 1.0, "customscript/additional_networks.py/txt2img/TEnc Weight 3/minimum": -1.0, "customscript/additional_networks.py/txt2img/TEnc Weight 3/maximum": 2.0, "customscript/additional_networks.py/txt2img/TEnc Weight 3/step": 0.05, "customscript/additional_networks.py/txt2img/Network module 4/visible": true, "customscript/additional_networks.py/txt2img/Network module 4/value": "LoRA", "customscript/additional_networks.py/txt2img/Model 4/visible": true, 
"customscript/additional_networks.py/txt2img/Model 4/value": "None", "txt2img/Weight 4/visible": true, "txt2img/Weight 4/value": 1.0, "txt2img/Weight 4/minimum": -1.0, "txt2img/Weight 4/maximum": 2.0, "txt2img/Weight 4/step": 0.05, "customscript/additional_networks.py/txt2img/UNet Weight 4/value": 1.0, "customscript/additional_networks.py/txt2img/UNet Weight 4/minimum": -1.0, "customscript/additional_networks.py/txt2img/UNet Weight 4/maximum": 2.0, "customscript/additional_networks.py/txt2img/UNet Weight 4/step": 0.05, "customscript/additional_networks.py/txt2img/TEnc Weight 4/value": 1.0, "customscript/additional_networks.py/txt2img/TEnc Weight 4/minimum": -1.0, "customscript/additional_networks.py/txt2img/TEnc Weight 4/maximum": 2.0, "customscript/additional_networks.py/txt2img/TEnc Weight 4/step": 0.05, "customscript/additional_networks.py/txt2img/Network module 5/visible": true, "customscript/additional_networks.py/txt2img/Network module 5/value": "LoRA", "customscript/additional_networks.py/txt2img/Model 5/visible": true, "customscript/additional_networks.py/txt2img/Model 5/value": "None", "txt2img/Weight 5/visible": true, "txt2img/Weight 5/value": 1.0, "txt2img/Weight 5/minimum": -1.0, "txt2img/Weight 5/maximum": 2.0, "txt2img/Weight 5/step": 0.05, "customscript/additional_networks.py/txt2img/UNet Weight 5/value": 1.0, "customscript/additional_networks.py/txt2img/UNet Weight 5/minimum": -1.0, "customscript/additional_networks.py/txt2img/UNet Weight 5/maximum": 2.0, "customscript/additional_networks.py/txt2img/UNet Weight 5/step": 0.05, "customscript/additional_networks.py/txt2img/TEnc Weight 5/value": 1.0, "customscript/additional_networks.py/txt2img/TEnc Weight 5/minimum": -1.0, "customscript/additional_networks.py/txt2img/TEnc Weight 5/maximum": 2.0, "customscript/additional_networks.py/txt2img/TEnc Weight 5/step": 0.05, "customscript/additional_networks.py/img2img/Enable/visible": true, "customscript/additional_networks.py/img2img/Enable/value": false, 
"customscript/additional_networks.py/img2img/Separate UNet/Text Encoder weights/visible": true, "customscript/additional_networks.py/img2img/Separate UNet/Text Encoder weights/value": false, "customscript/additional_networks.py/img2img/Network module 1/visible": true, "customscript/additional_networks.py/img2img/Network module 1/value": "LoRA", "customscript/additional_networks.py/img2img/Model 1/visible": true, "customscript/additional_networks.py/img2img/Model 1/value": "None", "img2img/Weight 1/visible": true, "img2img/Weight 1/value": 1.0, "img2img/Weight 1/minimum": -1.0, "img2img/Weight 1/maximum": 2.0, "img2img/Weight 1/step": 0.05, "customscript/additional_networks.py/img2img/UNet Weight 1/value": 1.0, "customscript/additional_networks.py/img2img/UNet Weight 1/minimum": -1.0, "customscript/additional_networks.py/img2img/UNet Weight 1/maximum": 2.0, "customscript/additional_networks.py/img2img/UNet Weight 1/step": 0.05, "customscript/additional_networks.py/img2img/TEnc Weight 1/value": 1.0, "customscript/additional_networks.py/img2img/TEnc Weight 1/minimum": -1.0, "customscript/additional_networks.py/img2img/TEnc Weight 1/maximum": 2.0, "customscript/additional_networks.py/img2img/TEnc Weight 1/step": 0.05, "customscript/additional_networks.py/img2img/Network module 2/visible": true, "customscript/additional_networks.py/img2img/Network module 2/value": "LoRA", "customscript/additional_networks.py/img2img/Model 2/visible": true, "customscript/additional_networks.py/img2img/Model 2/value": "None", "img2img/Weight 2/visible": true, "img2img/Weight 2/value": 1.0, "img2img/Weight 2/minimum": -1.0, "img2img/Weight 2/maximum": 2.0, "img2img/Weight 2/step": 0.05, "customscript/additional_networks.py/img2img/UNet Weight 2/value": 1.0, "customscript/additional_networks.py/img2img/UNet Weight 2/minimum": -1.0, "customscript/additional_networks.py/img2img/UNet Weight 2/maximum": 2.0, "customscript/additional_networks.py/img2img/UNet Weight 2/step": 0.05, 
"customscript/additional_networks.py/img2img/TEnc Weight 2/value": 1.0, "customscript/additional_networks.py/img2img/TEnc Weight 2/minimum": -1.0, "customscript/additional_networks.py/img2img/TEnc Weight 2/maximum": 2.0, "customscript/additional_networks.py/img2img/TEnc Weight 2/step": 0.05, "customscript/additional_networks.py/img2img/Network module 3/visible": true, "customscript/additional_networks.py/img2img/Network module 3/value": "LoRA", "customscript/additional_networks.py/img2img/Model 3/visible": true, "customscript/additional_networks.py/img2img/Model 3/value": "None", "img2img/Weight 3/visible": true, "img2img/Weight 3/value": 1.0, "img2img/Weight 3/minimum": -1.0, "img2img/Weight 3/maximum": 2.0, "img2img/Weight 3/step": 0.05, "customscript/additional_networks.py/img2img/UNet Weight 3/value": 1.0, "customscript/additional_networks.py/img2img/UNet Weight 3/minimum": -1.0, "customscript/additional_networks.py/img2img/UNet Weight 3/maximum": 2.0, "customscript/additional_networks.py/img2img/UNet Weight 3/step": 0.05, "customscript/additional_networks.py/img2img/TEnc Weight 3/value": 1.0, "customscript/additional_networks.py/img2img/TEnc Weight 3/minimum": -1.0, "customscript/additional_networks.py/img2img/TEnc Weight 3/maximum": 2.0, "customscript/additional_networks.py/img2img/TEnc Weight 3/step": 0.05, "customscript/additional_networks.py/img2img/Network module 4/visible": true, "customscript/additional_networks.py/img2img/Network module 4/value": "LoRA", "customscript/additional_networks.py/img2img/Model 4/visible": true, "customscript/additional_networks.py/img2img/Model 4/value": "None", "img2img/Weight 4/visible": true, "img2img/Weight 4/value": 1.0, "img2img/Weight 4/minimum": -1.0, "img2img/Weight 4/maximum": 2.0, "img2img/Weight 4/step": 0.05, "customscript/additional_networks.py/img2img/UNet Weight 4/value": 1.0, "customscript/additional_networks.py/img2img/UNet Weight 4/minimum": -1.0, "customscript/additional_networks.py/img2img/UNet Weight 
4/maximum": 2.0, "customscript/additional_networks.py/img2img/UNet Weight 4/step": 0.05, "customscript/additional_networks.py/img2img/TEnc Weight 4/value": 1.0, "customscript/additional_networks.py/img2img/TEnc Weight 4/minimum": -1.0, "customscript/additional_networks.py/img2img/TEnc Weight 4/maximum": 2.0, "customscript/additional_networks.py/img2img/TEnc Weight 4/step": 0.05, "customscript/additional_networks.py/img2img/Network module 5/visible": true, "customscript/additional_networks.py/img2img/Network module 5/value": "LoRA", "customscript/additional_networks.py/img2img/Model 5/visible": true, "customscript/additional_networks.py/img2img/Model 5/value": "None", "img2img/Weight 5/visible": true, "img2img/Weight 5/value": 1.0, "img2img/Weight 5/minimum": -1.0, "img2img/Weight 5/maximum": 2.0, "img2img/Weight 5/step": 0.05, "customscript/additional_networks.py/img2img/UNet Weight 5/value": 1.0, "customscript/additional_networks.py/img2img/UNet Weight 5/minimum": -1.0, "customscript/additional_networks.py/img2img/UNet Weight 5/maximum": 2.0, "customscript/additional_networks.py/img2img/UNet Weight 5/step": 0.05, "customscript/additional_networks.py/img2img/TEnc Weight 5/value": 1.0, "customscript/additional_networks.py/img2img/TEnc Weight 5/minimum": -1.0, "customscript/additional_networks.py/img2img/TEnc Weight 5/maximum": 2.0, "customscript/additional_networks.py/img2img/TEnc Weight 5/step": 0.05, "customscript/ultimate-upscale.py/img2img/Target size type/visible": true, "customscript/ultimate-upscale.py/img2img/Target size type/value": "From img2img2 settings", "customscript/ultimate-upscale.py/img2img/Custom width/value": 2048, "customscript/ultimate-upscale.py/img2img/Custom width/minimum": 64, "customscript/ultimate-upscale.py/img2img/Custom width/maximum": 8192, "customscript/ultimate-upscale.py/img2img/Custom width/step": 64, "customscript/ultimate-upscale.py/img2img/Custom height/value": 2048, "customscript/ultimate-upscale.py/img2img/Custom 
height/minimum": 64,
"customscript/ultimate-upscale.py/img2img/Custom height/maximum": 8192,
"customscript/ultimate-upscale.py/img2img/Custom height/step": 64,
"customscript/ultimate-upscale.py/img2img/Scale/value": 2,
"customscript/ultimate-upscale.py/img2img/Scale/minimum": 1,
"customscript/ultimate-upscale.py/img2img/Scale/maximum": 16,
"customscript/ultimate-upscale.py/img2img/Scale/step": 0.01,
"customscript/ultimate-upscale.py/img2img/Upscaler/visible": true,
"customscript/ultimate-upscale.py/img2img/Upscaler/value": "None",
"customscript/ultimate-upscale.py/img2img/Type/visible": true,
"customscript/ultimate-upscale.py/img2img/Type/value": "Linear",
"customscript/ultimate-upscale.py/img2img/Tile width/visible": true,
"customscript/ultimate-upscale.py/img2img/Tile width/value": 512,
"customscript/ultimate-upscale.py/img2img/Tile width/minimum": 0,
"customscript/ultimate-upscale.py/img2img/Tile width/maximum": 2048,
"customscript/ultimate-upscale.py/img2img/Tile width/step": 64,
"customscript/ultimate-upscale.py/img2img/Tile height/visible": true,
"customscript/ultimate-upscale.py/img2img/Tile height/value": 0,
"customscript/ultimate-upscale.py/img2img/Tile height/minimum": 0,
"customscript/ultimate-upscale.py/img2img/Tile height/maximum": 2048,
"customscript/ultimate-upscale.py/img2img/Tile height/step": 64,
"customscript/ultimate-upscale.py/img2img/Mask blur/visible": true,
"customscript/ultimate-upscale.py/img2img/Mask blur/value": 8,
"customscript/ultimate-upscale.py/img2img/Mask blur/minimum": 0,
"customscript/ultimate-upscale.py/img2img/Mask blur/maximum": 64,
"customscript/ultimate-upscale.py/img2img/Mask blur/step": 1,
"customscript/ultimate-upscale.py/img2img/Padding/visible": true,
"customscript/ultimate-upscale.py/img2img/Padding/value": 32,
"customscript/ultimate-upscale.py/img2img/Padding/minimum": 0,
"customscript/ultimate-upscale.py/img2img/Padding/maximum": 128,
"customscript/ultimate-upscale.py/img2img/Padding/step": 1,
"customscript/ultimate-upscale.py/img2img/Denoise/value": 0.35,
"customscript/ultimate-upscale.py/img2img/Denoise/minimum": 0,
"customscript/ultimate-upscale.py/img2img/Denoise/maximum": 1,
"customscript/ultimate-upscale.py/img2img/Denoise/step": 0.01,
"customscript/ultimate-upscale.py/img2img/Width/value": 64,
"customscript/ultimate-upscale.py/img2img/Width/minimum": 0,
"customscript/ultimate-upscale.py/img2img/Width/maximum": 128,
"customscript/ultimate-upscale.py/img2img/Width/step": 1,
"customscript/ultimate-upscale.py/img2img/Upscaled/visible": true,
"customscript/ultimate-upscale.py/img2img/Upscaled/value": true,
"customscript/ultimate-upscale.py/img2img/Seams fix/visible": true,
"customscript/ultimate-upscale.py/img2img/Seams fix/value": false,
"customscript/controlnet.py/txt2img/Guidance strength (T)/visible": true,
"customscript/controlnet.py/txt2img/Guidance strength (T)/value": 1.0,
"customscript/controlnet.py/txt2img/Guidance strength (T)/minimum": 0.0,
"customscript/controlnet.py/txt2img/Guidance strength (T)/maximum": 1.0,
"customscript/controlnet.py/txt2img/Guidance strength (T)/step": 0.01,
"customscript/controlnet.py/img2img/Guidance strength (T)/visible": true,
"customscript/controlnet.py/img2img/Guidance strength (T)/value": 1.0,
"customscript/controlnet.py/img2img/Guidance strength (T)/minimum": 0.0,
"customscript/controlnet.py/img2img/Guidance strength (T)/maximum": 1.0,
"customscript/controlnet.py/img2img/Guidance strength (T)/step": 0.01,
"customscript/lora_block_weight.py/txt2img/Active/visible": true,
"customscript/lora_block_weight.py/txt2img/Active/value": true,
"customscript/lora_block_weight.py/txt2img/X Types /visible": true,
"customscript/lora_block_weight.py/txt2img/X Types /value": "values",
"customscript/lora_block_weight.py/txt2img/X Values /visible": true,
"customscript/lora_block_weight.py/txt2img/X Values /value": "0,0.25,0.5,0.75,1",
"customscript/lora_block_weight.py/txt2img/Y Types /visible": true,
"customscript/lora_block_weight.py/txt2img/Y Types /value": "Block ID",
"customscript/lora_block_weight.py/txt2img/Y Values /visible": true,
"customscript/lora_block_weight.py/txt2img/Y Values /value": "IN05-OUT05",
"customscript/lora_block_weight.py/txt2img/Z type /visible": true,
"customscript/lora_block_weight.py/txt2img/Z type /value": "none",
"customscript/lora_block_weight.py/txt2img/Z values /visible": true,
"customscript/lora_block_weight.py/txt2img/Z values /value": "",
"customscript/lora_block_weight.py/txt2img/Range/value": "0.5,1",
"customscript/lora_block_weight.py/txt2img/Blocks/value": "BASE,IN01,IN02,IN04,IN05,IN07,IN08,M00,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11",
"customscript/lora_block_weight.py/img2img/Active/visible": true,
"customscript/lora_block_weight.py/img2img/Active/value": true,
"customscript/lora_block_weight.py/img2img/X Types /visible": true,
"customscript/lora_block_weight.py/img2img/X Types /value": "values",
"customscript/lora_block_weight.py/img2img/X Values /visible": true,
"customscript/lora_block_weight.py/img2img/X Values /value": "0,0.25,0.5,0.75,1",
"customscript/lora_block_weight.py/img2img/Y Types /visible": true,
"customscript/lora_block_weight.py/img2img/Y Types /value": "Block ID",
"customscript/lora_block_weight.py/img2img/Y Values /visible": true,
"customscript/lora_block_weight.py/img2img/Y Values /value": "IN05-OUT05",
"customscript/lora_block_weight.py/img2img/Z type /visible": true,
"customscript/lora_block_weight.py/img2img/Z type /value": "none",
"customscript/lora_block_weight.py/img2img/Z values /visible": true,
"customscript/lora_block_weight.py/img2img/Z values /value": "",
"customscript/lora_block_weight.py/img2img/Range/value": "0.5,1",
"customscript/lora_block_weight.py/img2img/Blocks/value": "BASE,IN01,IN02,IN04,IN05,IN07,IN08,M00,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11",
"customscript/katanuki_dropdown.py/txt2img/Use FP32(for 16X0)/visible": true,
"customscript/katanuki_dropdown.py/txt2img/Use FP32(for 16X0)/value": false,
"customscript/katanuki_dropdown.py/txt2img/Alt mode/visible": true,
"customscript/katanuki_dropdown.py/txt2img/Alt mode/value": true,
"customscript/katanuki_dropdown.py/img2img/Use FP32(for 16X0)/visible": true,
"customscript/katanuki_dropdown.py/img2img/Use FP32(for 16X0)/value": false,
"customscript/katanuki_dropdown.py/img2img/Alt mode/visible": true,
"customscript/katanuki_dropdown.py/img2img/Alt mode/value": true,
"customscript/video_loopback.py/img2img/input_directory/visible": true,
"customscript/video_loopback.py/img2img/input_directory/value": "",
"customscript/video_loopback.py/img2img/output_directory/visible": true,
"customscript/video_loopback.py/img2img/output_directory/value": "",
"customscript/video_loopback.py/img2img/use_mask(inpainting)/visible": true,
"customscript/video_loopback.py/img2img/use_mask(inpainting)/value": false,
"customscript/video_loopback.py/img2img/mask_directory/visible": true,
"customscript/video_loopback.py/img2img/mask_directory/value": "",
"customscript/video_loopback.py/img2img/mask_threshold/visible": true,
"customscript/video_loopback.py/img2img/mask_threshold/value": 127,
"customscript/video_loopback.py/img2img/mask_threshold/minimum": 0,
"customscript/video_loopback.py/img2img/mask_threshold/maximum": 255,
"customscript/video_loopback.py/img2img/mask_threshold/step": 1,
"customscript/video_loopback.py/img2img/read_prompt_from_txt/visible": true,
"customscript/video_loopback.py/img2img/read_prompt_from_txt/value": false,
"customscript/video_loopback.py/img2img/output_frame_rate/visible": true,
"customscript/video_loopback.py/img2img/output_frame_rate/value": 30,
"customscript/video_loopback.py/img2img/max_frames/visible": true,
"customscript/video_loopback.py/img2img/max_frames/value": 9999,
"customscript/video_loopback.py/img2img/extract_nth_frame/visible": true,
"customscript/video_loopback.py/img2img/extract_nth_frame/value": 1,
"customscript/video_loopback.py/img2img/loop_n/visible": true,
"customscript/video_loopback.py/img2img/loop_n/value": 10,
"customscript/video_loopback.py/img2img/superimpose_alpha/visible": true,
"customscript/video_loopback.py/img2img/superimpose_alpha/value": 0.25,
"customscript/video_loopback.py/img2img/superimpose_alpha/minimum": 0,
"customscript/video_loopback.py/img2img/superimpose_alpha/maximum": 1,
"customscript/video_loopback.py/img2img/superimpose_alpha/step": 0.01,
"customscript/video_loopback.py/img2img/fix_seed/visible": true,
"customscript/video_loopback.py/img2img/fix_seed/value": true,
"customscript/video_loopback.py/img2img/fix_subseed/visible": true,
"customscript/video_loopback.py/img2img/fix_subseed/value": false,
"customscript/video_loopback.py/img2img/temporal_superimpose_method/visible": true,
"customscript/video_loopback.py/img2img/temporal_superimpose_method/value": "simple",
"customscript/video_loopback.py/img2img/temporal_superimpose_alpha_list/visible": true,
"customscript/video_loopback.py/img2img/temporal_superimpose_alpha_list/value": "1",
"customscript/video_loopback.py/img2img/reference_frames_directory/visible": true,
"customscript/video_loopback.py/img2img/reference_frames_directory/value": "",
"customscript/video_loopback.py/img2img/save_every_loop/visible": true,
"customscript/video_loopback.py/img2img/save_every_loop/value": true,
"customscript/video_loopback.py/img2img/subseed_strength_schedule/visible": true,
"customscript/video_loopback.py/img2img/subseed_strength_schedule/value": "",
"customscript/video_loopback.py/img2img/denoising_schedule/visible": true,
"customscript/video_loopback.py/img2img/denoising_schedule/value": "",
"customscript/video_loopback.py/img2img/seed_schedule/visible": true,
"customscript/video_loopback.py/img2img/seed_schedule/value": "",
"customscript/video_loopback.py/img2img/subseed_schedule/visible": true,
"customscript/video_loopback.py/img2img/subseed_schedule/value": "",
"customscript/video_loopback.py/img2img/cfg_schedule/visible": true,
"customscript/video_loopback.py/img2img/cfg_schedule/value": "",
"customscript/video_loopback.py/img2img/superimpose_alpha_schedule/visible": true,
"customscript/video_loopback.py/img2img/superimpose_alpha_schedule/value": "",
"customscript/video_loopback.py/img2img/temporal_superimpose_schedule/visible": true,
"customscript/video_loopback.py/img2img/temporal_superimpose_schedule/value": "",
"customscript/video_loopback.py/img2img/prompt_schedule/visible": true,
"customscript/video_loopback.py/img2img/prompt_schedule/value": "",
"customscript/video_loopback.py/img2img/negative_prompt_schedule/visible": true,
"customscript/video_loopback.py/img2img/negative_prompt_schedule/value": "",
"customscript/video_loopback.py/img2img/batch_count_schedule/visible": true,
"customscript/video_loopback.py/img2img/batch_count_schedule/value": "",
"customscript/video_loopback.py/img2img/image_post_processing_schedule/visible": true,
"customscript/video_loopback.py/img2img/image_post_processing_schedule/value": "",
"customscript/video_loopback.py/img2img/video_post_process_method/visible": true,
"customscript/video_loopback.py/img2img/video_post_process_method/value": "None",
"customscript/video_loopback.py/img2img/video_post_process_alpha/visible": true,
"customscript/video_loopback.py/img2img/video_post_process_alpha/value": 0.3,
"customscript/video_loopback.py/img2img/video_post_process_alpha/minimum": 0,
"customscript/video_loopback.py/img2img/video_post_process_alpha/maximum": 1,
"customscript/video_loopback.py/img2img/video_post_process_alpha/step": 0.01,
"customscript/video_loopback.py/img2img/fastdvdnet_noise_sigma/visible": true,
"customscript/video_loopback.py/img2img/fastdvdnet_noise_sigma/value": 60,
"customscript/video_loopback.py/img2img/fastdvdnet_noise_sigma/minimum": 0,
"customscript/video_loopback.py/img2img/fastdvdnet_noise_sigma/maximum": 255,
"customscript/video_loopback.py/img2img/fastdvdnet_noise_sigma/step": 1,
"customscript/loopback.py/img2img/Append interrogated prompt at each iteration/visible": true,
"customscript/loopback.py/img2img/Append interrogated prompt at each iteration/value": "None",
"train/Use PNG alpha channel as loss weight/visible": true,
"train/Use PNG alpha channel as loss weight/value": false,
"customscript/controlnet.py/txt2img/Guess Mode/visible": true,
"customscript/controlnet.py/txt2img/Guess Mode/value": false,
"customscript/controlnet.py/img2img/Guess Mode/visible": true,
"customscript/controlnet.py/img2img/Guess Mode/value": false,
"customscript/controlnet.py/txt2img/Invert Input Color/visible": true,
"customscript/controlnet.py/txt2img/Invert Input Color/value": false,
"customscript/controlnet.py/txt2img/Guidance Start (T)/visible": true,
"customscript/controlnet.py/txt2img/Guidance Start (T)/value": 0.0,
"customscript/controlnet.py/txt2img/Guidance Start (T)/minimum": 0.0,
"customscript/controlnet.py/txt2img/Guidance Start (T)/maximum": 1.0,
"customscript/controlnet.py/txt2img/Guidance Start (T)/step": 0.01,
"customscript/controlnet.py/txt2img/Guidance End (T)/visible": true,
"customscript/controlnet.py/txt2img/Guidance End (T)/value": 1.0,
"customscript/controlnet.py/txt2img/Guidance End (T)/minimum": 0.0,
"customscript/controlnet.py/txt2img/Guidance End (T)/maximum": 1.0,
"customscript/controlnet.py/txt2img/Guidance End (T)/step": 0.01,
"customscript/composable_lora_script.py/txt2img/Enabled/visible": true,
"customscript/composable_lora_script.py/txt2img/Enabled/value": false,
"customscript/composable_lora_script.py/txt2img/Use Lora in uc text model encoder/visible": true,
"customscript/composable_lora_script.py/txt2img/Use Lora in uc text model encoder/value": false,
"customscript/composable_lora_script.py/txt2img/Use Lora in uc diffusion model/visible": true,
"customscript/composable_lora_script.py/txt2img/Use Lora in uc diffusion model/value": false,
"customscript/movie2movie.py/txt2img/Duration/visible": true,
"customscript/movie2movie.py/txt2img/Duration/value": 50.0,
"customscript/movie2movie.py/txt2img/Duration/minimum": 10.0,
"customscript/movie2movie.py/txt2img/Duration/maximum": 200.0,
"customscript/movie2movie.py/txt2img/Duration/step": 10,
"customscript/controlnet.py/img2img/Invert Input Color/visible": true,
"customscript/controlnet.py/img2img/Invert Input Color/value": false,
"customscript/controlnet.py/img2img/Guidance Start (T)/visible": true,
"customscript/controlnet.py/img2img/Guidance Start (T)/value": 0.0,
"customscript/controlnet.py/img2img/Guidance Start (T)/minimum": 0.0,
"customscript/controlnet.py/img2img/Guidance Start (T)/maximum": 1.0,
"customscript/controlnet.py/img2img/Guidance Start (T)/step": 0.01,
"customscript/controlnet.py/img2img/Guidance End (T)/visible": true,
"customscript/controlnet.py/img2img/Guidance End (T)/value": 1.0,
"customscript/controlnet.py/img2img/Guidance End (T)/minimum": 0.0,
"customscript/controlnet.py/img2img/Guidance End (T)/maximum": 1.0,
"customscript/controlnet.py/img2img/Guidance End (T)/step": 0.01,
"customscript/composable_lora_script.py/img2img/Enabled/visible": true,
"customscript/composable_lora_script.py/img2img/Enabled/value": false,
"customscript/composable_lora_script.py/img2img/Use Lora in uc text model encoder/visible": true,
"customscript/composable_lora_script.py/img2img/Use Lora in uc text model encoder/value": false,
"customscript/composable_lora_script.py/img2img/Use Lora in uc diffusion model/visible": true,
"customscript/composable_lora_script.py/img2img/Use Lora in uc diffusion model/value": false,
"customscript/movie2movie.py/img2img/Duration/visible": true,
"customscript/movie2movie.py/img2img/Duration/value": 50.0,
"customscript/movie2movie.py/img2img/Duration/minimum": 10.0,
"customscript/movie2movie.py/img2img/Duration/maximum": 200.0,
"customscript/movie2movie.py/img2img/Duration/step": 10,
"customscript/llul.py/txt2img/Enabled/visible": true,
"customscript/llul.py/txt2img/Enabled/value": false,
"customscript/llul.py/txt2img/Multiplication (2^N)/visible": true,
"customscript/llul.py/txt2img/Multiplication (2^N)/value": 1,
"customscript/llul.py/txt2img/Multiplication (2^N)/minimum": 1,
"customscript/llul.py/txt2img/Multiplication (2^N)/maximum": 5,
"customscript/llul.py/txt2img/Multiplication (2^N)/step": 1,
"customscript/llul.py/txt2img/Weight/visible": true,
"customscript/llul.py/txt2img/Weight/value": 0.15,
"customscript/llul.py/txt2img/Weight/minimum": -1,
"customscript/llul.py/txt2img/Weight/maximum": 2,
"customscript/llul.py/txt2img/Weight/step": 0.01,
"customscript/llul.py/txt2img/Force convert half to float on interpolation (for some platforms)/visible": true,
"customscript/llul.py/txt2img/Force convert half to float on interpolation (for some platforms)/value": false,
"customscript/llul.py/txt2img/I know what I am doing./visible": true,
"customscript/llul.py/txt2img/I know what I am doing./value": false,
"customscript/llul.py/txt2img/Layers/visible": true,
"customscript/llul.py/txt2img/Layers/value": "OUT",
"customscript/llul.py/txt2img/Start steps/visible": true,
"customscript/llul.py/txt2img/Start steps/value": 5,
"customscript/llul.py/txt2img/Start steps/minimum": 1,
"customscript/llul.py/txt2img/Start steps/maximum": 300,
"customscript/llul.py/txt2img/Start steps/step": 1,
"customscript/llul.py/txt2img/Max steps/visible": true,
"customscript/llul.py/txt2img/Max steps/value": 0,
"customscript/llul.py/txt2img/Max steps/minimum": 0,
"customscript/llul.py/txt2img/Max steps/maximum": 300,
"customscript/llul.py/txt2img/Max steps/step": 1,
"customscript/llul.py/txt2img/Upscaling/visible": true,
"customscript/llul.py/txt2img/Upscaling/value": "Bilinear",
"customscript/llul.py/txt2img/Enable AA for Upscaling./visible": true,
"customscript/llul.py/txt2img/Enable AA for Upscaling./value": false,
"customscript/llul.py/txt2img/Downscaling/visible": true,
"customscript/llul.py/txt2img/Downscaling/value": "Pooling Max",
"customscript/llul.py/txt2img/Enable AA for Downscaling./visible": true,
"customscript/llul.py/txt2img/Enable AA for Downscaling./value": false,
"customscript/llul.py/txt2img/interpolation method/visible": true,
"customscript/llul.py/txt2img/interpolation method/value": "Lerp",
"customscript/llul.py/img2img/Enabled/visible": true,
"customscript/llul.py/img2img/Enabled/value": false,
"customscript/llul.py/img2img/Multiplication (2^N)/visible": true,
"customscript/llul.py/img2img/Multiplication (2^N)/value": 1,
"customscript/llul.py/img2img/Multiplication (2^N)/minimum": 1,
"customscript/llul.py/img2img/Multiplication (2^N)/maximum": 5,
"customscript/llul.py/img2img/Multiplication (2^N)/step": 1,
"customscript/llul.py/img2img/Weight/visible": true,
"customscript/llul.py/img2img/Weight/value": 0.15,
"customscript/llul.py/img2img/Weight/minimum": -1,
"customscript/llul.py/img2img/Weight/maximum": 2,
"customscript/llul.py/img2img/Weight/step": 0.01,
"customscript/llul.py/img2img/Force convert half to float on interpolation (for some platforms)/visible": true,
"customscript/llul.py/img2img/Force convert half to float on interpolation (for some platforms)/value": false,
"customscript/llul.py/img2img/I know what I am doing./visible": true,
"customscript/llul.py/img2img/I know what I am doing./value": false,
"customscript/llul.py/img2img/Layers/visible": true,
"customscript/llul.py/img2img/Layers/value": "OUT",
"customscript/llul.py/img2img/Start steps/visible": true,
"customscript/llul.py/img2img/Start steps/value": 5,
"customscript/llul.py/img2img/Start steps/minimum": 1,
"customscript/llul.py/img2img/Start steps/maximum": 300,
"customscript/llul.py/img2img/Start steps/step": 1,
"customscript/llul.py/img2img/Max steps/visible": true,
"customscript/llul.py/img2img/Max steps/value": 0,
"customscript/llul.py/img2img/Max steps/minimum": 0,
"customscript/llul.py/img2img/Max steps/maximum": 300,
"customscript/llul.py/img2img/Max steps/step": 1,
"customscript/llul.py/img2img/Upscaling/visible": true,
"customscript/llul.py/img2img/Upscaling/value": "Bilinear",
"customscript/llul.py/img2img/Enable AA for Upscaling./visible": true,
"customscript/llul.py/img2img/Enable AA for Upscaling./value": false,
"customscript/llul.py/img2img/Downscaling/visible": true,
"customscript/llul.py/img2img/Downscaling/value": "Pooling Max",
"customscript/llul.py/img2img/Enable AA for Downscaling./visible": true,
"customscript/llul.py/img2img/Enable AA for Downscaling./value": false,
"customscript/llul.py/img2img/interpolation method/visible": true,
"customscript/llul.py/img2img/interpolation method/value": "Lerp",
"customscript/two_shot.py/txt2img/Enabled/visible": true,
"customscript/two_shot.py/txt2img/Enabled/value": false,
"customscript/two_shot.py/txt2img/Divisions/visible": true,
"customscript/two_shot.py/txt2img/Divisions/value": "1:1,1:2,1:2",
"customscript/two_shot.py/txt2img/Positions/visible": true,
"customscript/two_shot.py/txt2img/Positions/value": "0:0,0:0,0:1",
"customscript/two_shot.py/txt2img/Weights/visible": true,
"customscript/two_shot.py/txt2img/Weights/value": "0.2,0.8,0.8",
"customscript/two_shot.py/txt2img/end at this step/visible": true,
"customscript/two_shot.py/txt2img/end at this step/value": 20,
"customscript/two_shot.py/txt2img/end at this step/minimum": 0,
"customscript/two_shot.py/txt2img/end at this step/maximum": 150,
"customscript/two_shot.py/txt2img/end at this step/step": 1,
"txt2img/Extra generation params/visible": true,
"txt2img/Extra generation params/value": "",
"customscript/two_shot.py/img2img/Enabled/visible": true,
"customscript/two_shot.py/img2img/Enabled/value": false,
"customscript/two_shot.py/img2img/Divisions/visible": true,
"customscript/two_shot.py/img2img/Divisions/value": "1:1,1:2,1:2",
"customscript/two_shot.py/img2img/Positions/visible": true,
"customscript/two_shot.py/img2img/Positions/value": "0:0,0:0,0:1",
"customscript/two_shot.py/img2img/Weights/visible": true,
"customscript/two_shot.py/img2img/Weights/value": "0.2,0.8,0.8",
"customscript/two_shot.py/img2img/end at this step/visible": true,
"customscript/two_shot.py/img2img/end at this step/value": 20,
"customscript/two_shot.py/img2img/end at this step/minimum": 0,
"customscript/two_shot.py/img2img/end at this step/maximum": 150,
"customscript/two_shot.py/img2img/end at this step/step": 1,
"img2img/Extra generation params/visible": true,
"img2img/Extra generation params/value": "",
"txt2img/Enable/visible": true,
"txt2img/Enable/value": false,
"txt2img/Invert Input Color/visible": true,
"txt2img/Invert Input Color/value": false,
"txt2img/RGB to BGR/visible": true,
"txt2img/RGB to BGR/value": false,
"txt2img/Low VRAM/visible": true,
"txt2img/Low VRAM/value": false,
"txt2img/Guess Mode/visible": true,
"txt2img/Guess Mode/value": false,
"txt2img/Preprocessor/visible": true,
"txt2img/Preprocessor/value": "none",
"txt2img/Model/visible": true,
"txt2img/Model/value": "None",
"txt2img/Weight/visible": true,
"txt2img/Weight/value": 1.0,
"txt2img/Weight/minimum": 0.0,
"txt2img/Weight/maximum": 2.0,
"txt2img/Weight/step": 0.05,
"txt2img/Guidance Start (T)/visible": true,
"txt2img/Guidance Start (T)/value": 0.0,
"txt2img/Guidance Start (T)/minimum": 0.0,
"txt2img/Guidance Start (T)/maximum": 1.0,
"txt2img/Guidance Start (T)/step": 0.01,
"txt2img/Guidance End (T)/visible": true,
"txt2img/Guidance End (T)/value": 1.0,
"txt2img/Guidance End (T)/minimum": 0.0,
"txt2img/Guidance End (T)/maximum": 1.0,
"txt2img/Guidance End (T)/step": 0.01,
"txt2img/Annotator resolution/visible": true,
"txt2img/Annotator resolution/value": 64,
"txt2img/Annotator resolution/minimum": 64,
"txt2img/Annotator resolution/maximum": 2048,
"txt2img/Annotator resolution/step": 10,
"txt2img/Threshold A/visible": true,
"txt2img/Threshold A/value": 64,
"txt2img/Threshold A/minimum": 64,
"txt2img/Threshold A/maximum": 1024,
"txt2img/Threshold A/step": 1,
"txt2img/Threshold B/visible": true,
"txt2img/Threshold B/value": 64,
"txt2img/Threshold B/minimum": 64,
"txt2img/Threshold B/maximum": 1024,
"txt2img/Threshold B/step": 1,
"txt2img/Resize Mode/visible": true,
"txt2img/Resize Mode/value": "Scale to Fit (Inner Fit)",
"img2img/Enable/visible": true,
"img2img/Enable/value": false,
"img2img/Invert Input Color/visible": true,
"img2img/Invert Input Color/value": false,
"img2img/RGB to BGR/visible": true,
"img2img/RGB to BGR/value": false,
"img2img/Low VRAM/visible": true,
"img2img/Low VRAM/value": false,
"img2img/Guess Mode/visible": true,
"img2img/Guess Mode/value": false,
"img2img/Preprocessor/visible": true,
"img2img/Preprocessor/value": "none",
"img2img/Model/visible": true,
"img2img/Model/value": "None",
"img2img/Weight/visible": true,
"img2img/Weight/value": 1.0,
"img2img/Weight/minimum": 0.0,
"img2img/Weight/maximum": 2.0,
"img2img/Weight/step": 0.05,
"img2img/Guidance Start (T)/visible": true,
"img2img/Guidance Start (T)/value": 0.0,
"img2img/Guidance Start (T)/minimum": 0.0,
"img2img/Guidance Start (T)/maximum": 1.0,
"img2img/Guidance Start (T)/step": 0.01,
"img2img/Guidance End (T)/visible": true,
"img2img/Guidance End (T)/value": 1.0,
"img2img/Guidance End (T)/minimum": 0.0,
"img2img/Guidance End (T)/maximum": 1.0,
"img2img/Guidance End (T)/step": 0.01,
"img2img/Annotator resolution/visible": true,
"img2img/Annotator resolution/value": 64,
"img2img/Annotator resolution/minimum": 64,
"img2img/Annotator resolution/maximum": 2048,
"img2img/Annotator resolution/step": 10,
"img2img/Threshold A/visible": true,
"img2img/Threshold A/value": 64,
"img2img/Threshold A/minimum": 64,
"img2img/Threshold A/maximum": 1024,
"img2img/Threshold A/step": 1,
"img2img/Threshold B/visible": true,
"img2img/Threshold B/value": 64,
"img2img/Threshold B/minimum": 64,
"img2img/Threshold B/maximum": 1024,
"img2img/Threshold B/step": 1,
"img2img/Resize Mode/visible": true,
"img2img/Resize Mode/value": "Scale to Fit (Inner Fit)",
"customscript/video_loopback.py/img2img/is_continue (ignore the \"extract_nth_frame\" for input frames only)/visible": true,
"customscript/video_loopback.py/img2img/is_continue (ignore the \"extract_nth_frame\" for input frames only)/value": false,
"customscript/video_loopback.py/img2img/step_schedule/visible": true,
"customscript/video_loopback.py/img2img/step_schedule/value": "",
"customscript/movie2movie.py/txt2img/Save preprocessed/visible": true,
"customscript/movie2movie.py/txt2img/Save preprocessed/value": false,
"customscript/loopback.py/img2img/Final denoising strength/visible": true,
"customscript/loopback.py/img2img/Final denoising strength/value": 0.5,
"customscript/loopback.py/img2img/Final denoising strength/minimum": 0,
"customscript/loopback.py/img2img/Final denoising strength/maximum": 1,
"customscript/loopback.py/img2img/Final denoising strength/step": 0.01,
"customscript/loopback.py/img2img/Denoising strength curve/visible": true,
"customscript/loopback.py/img2img/Denoising strength curve/value": "Linear",
"customscript/movie2movie.py/img2img/Save preprocessed/visible": true,
"customscript/movie2movie.py/img2img/Save preprocessed/value": false,
"customscript/posex.py/txt2img/Send this image to ControlNet./visible": true,
"customscript/posex.py/txt2img/Send this image to ControlNet./value": false,
"customscript/posex.py/txt2img/Target ControlNet number/value": 0,
"customscript/cutoff.py/txt2img/Enabled/visible": true,
"customscript/cutoff.py/txt2img/Enabled/value": false,
"customscript/cutoff.py/txt2img/Target tokens (comma separated)/visible": true,
"customscript/cutoff.py/txt2img/Target tokens (comma separated)/value": "",
"customscript/cutoff.py/txt2img/Weight/visible": true,
"customscript/cutoff.py/txt2img/Weight/value": 0.5,
"customscript/cutoff.py/txt2img/Weight/minimum": -1.0,
"customscript/cutoff.py/txt2img/Weight/maximum": 2.0,
"customscript/cutoff.py/txt2img/Weight/step": 0.01,
"customscript/cutoff.py/txt2img/Disable for Negative prompt./visible": true,
"customscript/cutoff.py/txt2img/Disable for Negative prompt./value": true,
"customscript/cutoff.py/txt2img/Cutoff strongly./visible": true,
"customscript/cutoff.py/txt2img/Cutoff strongly./value": false,
"customscript/cutoff.py/txt2img/Padding token (ID or single token)/visible": true,
"customscript/cutoff.py/txt2img/Padding token (ID or single token)/value": "",
"customscript/cutoff.py/txt2img/Interpolation method/visible": true,
"customscript/cutoff.py/txt2img/Interpolation method/value": "Lerp",
"customscript/cutoff.py/txt2img/Debug log/visible": true,
"customscript/cutoff.py/txt2img/Debug log/value": false,
"customscript/posex.py/img2img/Send this image to ControlNet./visible": true,
"customscript/posex.py/img2img/Send this image to ControlNet./value": false,
"customscript/posex.py/img2img/Target ControlNet number/value": 0,
"customscript/cutoff.py/img2img/Enabled/visible": true,
"customscript/cutoff.py/img2img/Enabled/value": false,
"customscript/cutoff.py/img2img/Target tokens (comma separated)/visible": true,
"customscript/cutoff.py/img2img/Target tokens (comma separated)/value": "",
"customscript/cutoff.py/img2img/Weight/visible": true,
"customscript/cutoff.py/img2img/Weight/value": 0.5,
"customscript/cutoff.py/img2img/Weight/minimum": -1.0,
"customscript/cutoff.py/img2img/Weight/maximum": 2.0,
"customscript/cutoff.py/img2img/Weight/step": 0.01,
"customscript/cutoff.py/img2img/Disable for Negative prompt./visible": true,
"customscript/cutoff.py/img2img/Disable for Negative prompt./value": true,
"customscript/cutoff.py/img2img/Cutoff strongly./visible": true,
"customscript/cutoff.py/img2img/Cutoff strongly./value": false,
"customscript/cutoff.py/img2img/Padding token (ID or single token)/visible": true,
"customscript/cutoff.py/img2img/Padding token (ID or single token)/value": "",
"customscript/cutoff.py/img2img/Interpolation method/visible": true,
"customscript/cutoff.py/img2img/Interpolation method/value": "Lerp",
"customscript/cutoff.py/img2img/Debug log/visible": true,
"customscript/cutoff.py/img2img/Debug log/value": false,
"txt2img/\ud83d\udcc2/visible": true,
"txt2img/Save/visible": true,
"txt2img/Zip/visible": true,
"txt2img/Send to img2img/visible": true,
"txt2img/Send to inpaint/visible": true,
"txt2img/Send to extras/visible": true,
"txt2img/Interrupt/visible": true,
"txt2img/Skip/visible": true,
"txt2img/Generate/visible": true,
"txt2img/\u267e/visible": true,
"txt2img/\ud83d\udd04/visible": true,
"txt2img/\u2199\ufe0f/visible": true,
"txt2img/\ud83d\uddd1\ufe0f/visible": true,
"txt2img/\ud83c\udfb4/visible": true,
"txt2img/\ud83d\udccb/visible": true,
"txt2img/\ud83d\udcbe/visible": true,
"txt2img/Tabs@txt2img_extra_tabs/selected": null,
"txt2img/Hires sampling method/visible": true,
"txt2img/Hires sampling method/value": "Use same sampler",
"txt2img/\u2b80/visible": true,
"txt2img/\ud83c\udfb2\ufe0f/visible": true,
"txt2img/\u267b\ufe0f/visible": true,
"txt2img/\ud83d\udcdd/visible": true,
"customscript/additional_networks.py/txt2img/Refresh models/visible": true,
"customscript/!adetailer.py/txt2img/Enable ADetailer/visible": true,
"customscript/!adetailer.py/txt2img/Enable ADetailer/value": false,
"txt2img/ADetailer model/visible": true,
"txt2img/ADetailer model/value": "face_yolov8n.pt",
"txt2img/ad_prompt/visible": true,
"txt2img/ad_prompt/value": "",
"txt2img/ad_negative_prompt/visible": true,
"txt2img/ad_negative_prompt/value": "",
"txt2img/Detection model confidence threshold/visible": true,
"txt2img/Detection model confidence threshold/value": 0.3,
"txt2img/Detection model confidence threshold/minimum": 0.0,
"txt2img/Detection model confidence threshold/maximum": 1.0,
"txt2img/Detection model confidence threshold/step": 0.01,
"txt2img/Mask min area ratio/visible": true,
"txt2img/Mask min area ratio/value": 0.0,
"txt2img/Mask min area ratio/minimum": 0.0,
"txt2img/Mask min area ratio/maximum": 1.0,
"txt2img/Mask min area ratio/step": 0.001,
"txt2img/Mask max area ratio/visible": true,
"txt2img/Mask max area ratio/value": 1.0,
"txt2img/Mask max area ratio/minimum": 0.0,
"txt2img/Mask max area ratio/maximum": 1.0,
"txt2img/Mask max area ratio/step": 0.001,
"txt2img/Mask x(\u2192) offset/visible": true,
"txt2img/Mask x(\u2192) offset/value": 0,
"txt2img/Mask x(\u2192) offset/minimum": -200,
"txt2img/Mask x(\u2192) offset/maximum": 200,
"txt2img/Mask x(\u2192) offset/step": 1,
"txt2img/Mask y(\u2191) offset/visible": true,
"txt2img/Mask y(\u2191) offset/value": 0,
"txt2img/Mask y(\u2191) offset/minimum": -200,
"txt2img/Mask y(\u2191) offset/maximum": 200,
"txt2img/Mask y(\u2191) offset/step": 1,
"txt2img/Mask erosion (-) / dilation (+)/visible": true,
"txt2img/Mask erosion (-) / dilation (+)/value": 4,
"txt2img/Mask erosion (-) / dilation (+)/minimum": -128,
"txt2img/Mask erosion (-) / dilation (+)/maximum": 128,
"txt2img/Mask erosion (-) / dilation (+)/step": 4,
"txt2img/Mask merge mode/visible": true,
"txt2img/Mask merge mode/value": "None",
"txt2img/Inpaint mask blur/visible": true,
"txt2img/Inpaint mask blur/value": 4,
"txt2img/Inpaint mask blur/minimum": 0,
"txt2img/Inpaint mask blur/maximum": 64,
"txt2img/Inpaint mask blur/step": 1,
"txt2img/Inpaint denoising strength/visible": true,
"txt2img/Inpaint denoising strength/value": 0.4,
"txt2img/Inpaint denoising strength/minimum": 0.0,
"txt2img/Inpaint denoising strength/maximum": 1.0,
"txt2img/Inpaint denoising strength/step": 0.01,
"txt2img/Inpaint only masked/visible": true,
"txt2img/Inpaint only masked/value": true,
"txt2img/Inpaint only masked padding, pixels/visible": true,
"txt2img/Inpaint only masked padding, pixels/value": 32,
"txt2img/Inpaint only masked padding, pixels/minimum": 0,
"txt2img/Inpaint only masked padding, pixels/maximum": 256,
"txt2img/Inpaint only masked padding, pixels/step": 4,
"txt2img/Use separate width/height/visible": true,
"txt2img/Use separate width/height/value": false,
"txt2img/inpaint width/visible": true,
"txt2img/inpaint width/value": 512,
"txt2img/inpaint width/minimum": 64,
"txt2img/inpaint width/maximum": 2048,
"txt2img/inpaint width/step": 4,
"txt2img/inpaint height/visible": true,
"txt2img/inpaint height/value": 512,
"txt2img/inpaint height/minimum": 64,
"txt2img/inpaint height/maximum": 2048,
"txt2img/inpaint height/step": 4,
"txt2img/Use separate steps/visible": true,
"txt2img/Use separate steps/value": false,
"txt2img/ADetailer steps/visible": true,
"txt2img/ADetailer steps/value": 28,
"txt2img/ADetailer steps/minimum": 1,
"txt2img/ADetailer steps/maximum": 150,
"txt2img/ADetailer steps/step": 1,
"txt2img/Use separate CFG scale/visible": true,
"txt2img/Use separate CFG scale/value": false,
"txt2img/ADetailer CFG scale/visible": true,
"txt2img/ADetailer CFG scale/value": 7.0,
"txt2img/ADetailer CFG scale/minimum": 0.0,
"txt2img/ADetailer CFG scale/maximum": 30.0,
"txt2img/ADetailer CFG scale/step": 0.5,
"txt2img/Restore faces after ADetailer/visible": true,
"txt2img/Restore faces after ADetailer/value": false,
"txt2img/ControlNet model/visible": true,
"txt2img/ControlNet model/value": "None",
"txt2img/ControlNet weight/visible": true,
"txt2img/ControlNet weight/value": 1.0,
"txt2img/ControlNet weight/minimum": 0.0,
"txt2img/ControlNet weight/maximum": 1.0,
"txt2img/ControlNet weight/step": 0.01,
"txt2img/ControlNet guidance start/visible": true,
"txt2img/ControlNet guidance start/value": 0.0,
"txt2img/ControlNet guidance start/minimum": 0.0,
"txt2img/ControlNet guidance start/maximum": 1.0,
"txt2img/ControlNet guidance start/step": 0.01,
"txt2img/ControlNet guidance end/visible": true,
"txt2img/ControlNet guidance end/value": 1.0,
"txt2img/ControlNet guidance end/minimum": 0.0,
"txt2img/ControlNet guidance end/maximum": 1.0,
"txt2img/ControlNet guidance end/step": 0.01,
"txt2img/ADetailer model 2nd/visible": true,
"txt2img/ADetailer model 2nd/value": "None",
"txt2img/ad_prompt 2nd/visible": true,
"txt2img/ad_prompt 2nd/value": "",
"txt2img/ad_negative_prompt 2nd/visible": true,
"txt2img/ad_negative_prompt 2nd/value": "",
"txt2img/Detection model confidence threshold 2nd/visible": true,
"txt2img/Detection model confidence threshold 2nd/value": 0.3,
"txt2img/Detection model confidence threshold 2nd/minimum": 0.0,
"txt2img/Detection model confidence threshold 2nd/maximum": 1.0,
"txt2img/Detection model confidence threshold 2nd/step": 0.01,
"txt2img/Mask min area ratio 2nd/visible": true,
"txt2img/Mask min area ratio 2nd/value": 0.0,
"txt2img/Mask min area ratio 2nd/minimum": 0.0,
"txt2img/Mask min area ratio 2nd/maximum": 1.0,
"txt2img/Mask min area ratio 2nd/step": 0.001,
"txt2img/Mask max area ratio 2nd/visible": true,
"txt2img/Mask max area ratio 2nd/value": 1.0,
"txt2img/Mask max area ratio 2nd/minimum": 0.0,
"txt2img/Mask max area ratio 2nd/maximum": 1.0,
"txt2img/Mask max area ratio 2nd/step": 0.001,
"txt2img/Mask x(\u2192) offset 2nd/visible": true,
"txt2img/Mask x(\u2192) offset 2nd/value": 0,
"txt2img/Mask x(\u2192) offset 2nd/minimum": -200,
"txt2img/Mask x(\u2192) offset 2nd/maximum": 200,
"txt2img/Mask x(\u2192) offset 2nd/step": 1,
"txt2img/Mask y(\u2191) offset 2nd/visible": true,
"txt2img/Mask y(\u2191) offset 2nd/value": 0,
"txt2img/Mask y(\u2191) offset 2nd/minimum": -200,
"txt2img/Mask y(\u2191) offset 2nd/maximum": 200,
"txt2img/Mask y(\u2191) offset 2nd/step": 1,
"txt2img/Mask erosion (-) / dilation (+) 2nd/visible": true,
"txt2img/Mask erosion (-) / dilation (+) 2nd/value": 4,
"txt2img/Mask erosion (-) / dilation (+) 2nd/minimum": -128,
"txt2img/Mask erosion (-) / dilation (+) 2nd/maximum": 128,
"txt2img/Mask erosion (-) / dilation (+) 2nd/step": 4,
"txt2img/Mask merge mode 2nd/visible": true,
"txt2img/Mask merge mode 2nd/value": "None",
"txt2img/Inpaint mask blur 2nd/visible": true,
"txt2img/Inpaint mask blur 2nd/value": 4,
"txt2img/Inpaint mask blur 2nd/minimum": 0,
"txt2img/Inpaint mask blur 2nd/maximum": 64,
"txt2img/Inpaint mask blur 2nd/step": 1,
"txt2img/Inpaint denoising strength 2nd/visible": true,
"txt2img/Inpaint denoising strength 2nd/value": 0.4,
"txt2img/Inpaint denoising strength 2nd/minimum": 0.0,
"txt2img/Inpaint denoising strength 2nd/maximum": 1.0,
"txt2img/Inpaint denoising strength 2nd/step": 0.01,
"txt2img/Inpaint only masked 2nd/visible": true,
"txt2img/Inpaint only masked 2nd/value": true,
"txt2img/Inpaint only masked padding, pixels 2nd/visible": true,
"txt2img/Inpaint only masked padding, pixels 2nd/value": 32,
"txt2img/Inpaint only masked padding, pixels 2nd/minimum": 0,
"txt2img/Inpaint only masked padding, pixels 2nd/maximum": 256,
"txt2img/Inpaint only masked padding, pixels 2nd/step": 4,
"txt2img/Use separate width/height 2nd/visible": true,
"txt2img/Use separate width/height 2nd/value": false,
"txt2img/inpaint width 2nd/visible": true,
"txt2img/inpaint width 2nd/value": 512,
"txt2img/inpaint width 2nd/minimum": 64,
"txt2img/inpaint width 2nd/maximum": 2048,
"txt2img/inpaint width 2nd/step": 4,
"txt2img/inpaint height 2nd/visible": true,
"txt2img/inpaint height 2nd/value": 512,
"txt2img/inpaint height 2nd/minimum": 64,
"txt2img/inpaint height 2nd/maximum": 2048,
"txt2img/inpaint height 2nd/step": 4,
"txt2img/Use separate steps 2nd/visible": true,
"txt2img/Use separate steps 2nd/value": false,
"txt2img/ADetailer steps 2nd/visible": true,
"txt2img/ADetailer steps 2nd/value": 28,
"txt2img/ADetailer steps 2nd/minimum": 1,
"txt2img/ADetailer steps 2nd/maximum": 150,
"txt2img/ADetailer steps 2nd/step": 1,
"txt2img/Use separate CFG scale 2nd/visible": true,
"txt2img/Use separate CFG scale 2nd/value": false,
"txt2img/ADetailer CFG scale 2nd/visible": true,
"txt2img/ADetailer CFG scale 2nd/value": 7.0,
"txt2img/ADetailer CFG scale 2nd/minimum": 0.0,
"txt2img/ADetailer CFG scale 2nd/maximum": 30.0,
"txt2img/ADetailer CFG scale 2nd/step": 0.5,
"txt2img/Restore faces after ADetailer 2nd/visible": true,
"txt2img/Restore faces after ADetailer 2nd/value": false,
"txt2img/ControlNet model 2nd/visible": true,
"txt2img/ControlNet model 2nd/value": "None",
"txt2img/ControlNet weight 2nd/visible": true, "txt2img/ControlNet weight 2nd/value": 1.0, "txt2img/ControlNet weight 2nd/minimum": 0.0, "txt2img/ControlNet weight 2nd/maximum": 1.0, "txt2img/ControlNet weight 2nd/step": 0.01, "txt2img/ControlNet guidance start 2nd/visible": true, "txt2img/ControlNet guidance start 2nd/value": 0.0, "txt2img/ControlNet guidance start 2nd/minimum": 0.0, "txt2img/ControlNet guidance start 2nd/maximum": 1.0, "txt2img/ControlNet guidance start 2nd/step": 0.01, "txt2img/ControlNet guidance end 2nd/visible": true, "txt2img/ControlNet guidance end 2nd/value": 1.0, "txt2img/ControlNet guidance end 2nd/minimum": 0.0, "txt2img/ControlNet guidance end 2nd/maximum": 1.0, "txt2img/ControlNet guidance end 2nd/step": 0.01, "txt2img/Input Directory/visible": true, "txt2img/Input Directory/value": "", "txt2img/New Canvas Width/visible": true, "txt2img/New Canvas Width/value": 512, "txt2img/New Canvas Width/minimum": 256, "txt2img/New Canvas Width/maximum": 1024, "txt2img/New Canvas Width/step": 64, "txt2img/New Canvas Height/visible": true, "txt2img/New Canvas Height/value": 512, "txt2img/New Canvas Height/minimum": 256, "txt2img/New Canvas Height/maximum": 1024, "txt2img/New Canvas Height/step": 64, "txt2img/Create New Canvas/visible": true, "txt2img/Cancel/visible": true, "txt2img/Pixel Perfect/visible": true, "txt2img/Pixel Perfect/value": false, "txt2img/Allow Preview/visible": true, "txt2img/Allow Preview/value": false, "txt2img/Control Type/visible": true, "txt2img/Control Type/value": "All", "txt2img/Control Weight/visible": true, "txt2img/Control Weight/value": 1.0, "txt2img/Control Weight/minimum": 0.0, "txt2img/Control Weight/maximum": 2.0, "txt2img/Control Weight/step": 0.05, "txt2img/Starting Control Step/visible": true, "txt2img/Starting Control Step/value": 0.0, "txt2img/Starting Control Step/minimum": 0.0, "txt2img/Starting Control Step/maximum": 1.0, "txt2img/Starting Control Step/step": 0.01, "txt2img/Ending Control Step/visible": 
true, "txt2img/Ending Control Step/value": 1.0, "txt2img/Ending Control Step/minimum": 0.0, "txt2img/Ending Control Step/maximum": 1.0, "txt2img/Ending Control Step/step": 0.01, "txt2img/Preprocessor resolution/value": 512, "txt2img/Preprocessor resolution/minimum": 64, "txt2img/Preprocessor resolution/maximum": 2048, "txt2img/Preprocessor resolution/step": 10, "txt2img/Control Mode/visible": true, "txt2img/Control Mode/value": "Balanced", "txt2img/[Loopback] Automatically send generated images to this ControlNet unit/visible": true, "txt2img/[Loopback] Automatically send generated images to this ControlNet unit/value": false, "customscript/lora_block_weight.py/txt2img/diff image color/visible": true, "customscript/lora_block_weight.py/txt2img/diff image color/value": "black", "customscript/lora_block_weight.py/txt2img/change X-Y/visible": true, "customscript/lora_block_weight.py/txt2img/change X-Y/value": false, "customscript/lora_block_weight.py/txt2img/difference threshold/visible": true, "customscript/lora_block_weight.py/txt2img/difference threshold/value": "20", "customscript/lora_block_weight.py/txt2img/number of seed/visible": true, "customscript/lora_block_weight.py/txt2img/number of seed/value": 1.0, "txt2img/Reload Presets/visible": true, "txt2img/Reload Tags/visible": true, "txt2img/Save Presets/visible": true, "txt2img/Open TextEditor/visible": true, "customscript/lora_block_weight.py/txt2img/print change/visible": true, "customscript/lora_block_weight.py/txt2img/print change/value": false, "txt2img/Swap X/Y axes/visible": true, "txt2img/Swap Y/Z axes/visible": true, "txt2img/Swap X/Z axes/visible": true, "img2img/\ud83d\udcc2/visible": true, "img2img/Save/visible": true, "img2img/Zip/visible": true, "img2img/Send to img2img/visible": true, "img2img/Send to inpaint/visible": true, "img2img/Send to extras/visible": true, "img2img/Interrupt/visible": true, "img2img/Skip/visible": true, "img2img/Generate/visible": true, "img2img/\u267e/visible": true, 
"img2img/\ud83d\udd04/visible": true, "img2img/\u2199\ufe0f/visible": true, "img2img/\ud83d\uddd1\ufe0f/visible": true, "img2img/\ud83c\udfb4/visible": true, "img2img/\ud83d\udccb/visible": true, "img2img/\ud83d\udcbe/visible": true, "img2img/\ud83d\udde8/visible": true, "img2img/\ud83d\uddea/visible": true, "img2img/Tabs@img2img_extra_tabs/selected": null, "img2img/Tabs@mode_img2img/selected": null, "img2img/img2img/visible": true, "img2img/sketch/visible": true, "img2img/inpaint/visible": true, "img2img/inpaint sketch/visible": true, "img2img/Controlnet input directory/visible": true, "img2img/Controlnet input directory/value": "", "img2img/Tabs@scale_tabs/selected": null, "img2img/\u2b80/visible": true, "img2img/\ud83d\udcd0/visible": true, "img2img/Scale/visible": true, "img2img/Scale/value": 1.0, "img2img/Scale/minimum": 0.05, "img2img/Scale/maximum": 4.0, "img2img/Scale/step": 0.05, "img2img/Unused/visible": true, "img2img/Unused/value": 0, "img2img/Unused/minimum": 0, "img2img/Unused/maximum": 100, "img2img/Unused/step": 1, "img2img/\ud83c\udfb2\ufe0f/visible": true, "img2img/\u267b\ufe0f/visible": true, "img2img/\ud83d\udcdd/visible": true, "customscript/additional_networks.py/img2img/Refresh models/visible": true, "customscript/!adetailer.py/img2img/Enable ADetailer/visible": true, "customscript/!adetailer.py/img2img/Enable ADetailer/value": false, "img2img/ADetailer model/visible": true, "img2img/ADetailer model/value": "face_yolov8n.pt", "img2img/ad_prompt/visible": true, "img2img/ad_prompt/value": "", "img2img/ad_negative_prompt/visible": true, "img2img/ad_negative_prompt/value": "", "img2img/Detection model confidence threshold/visible": true, "img2img/Detection model confidence threshold/value": 0.3, "img2img/Detection model confidence threshold/minimum": 0.0, "img2img/Detection model confidence threshold/maximum": 1.0, "img2img/Detection model confidence threshold/step": 0.01, "img2img/Mask min area ratio/visible": true, "img2img/Mask min area 
ratio/value": 0.0, "img2img/Mask min area ratio/minimum": 0.0, "img2img/Mask min area ratio/maximum": 1.0, "img2img/Mask min area ratio/step": 0.001, "img2img/Mask max area ratio/visible": true, "img2img/Mask max area ratio/value": 1.0, "img2img/Mask max area ratio/minimum": 0.0, "img2img/Mask max area ratio/maximum": 1.0, "img2img/Mask max area ratio/step": 0.001, "img2img/Mask x(\u2192) offset/visible": true, "img2img/Mask x(\u2192) offset/value": 0, "img2img/Mask x(\u2192) offset/minimum": -200, "img2img/Mask x(\u2192) offset/maximum": 200, "img2img/Mask x(\u2192) offset/step": 1, "img2img/Mask y(\u2191) offset/visible": true, "img2img/Mask y(\u2191) offset/value": 0, "img2img/Mask y(\u2191) offset/minimum": -200, "img2img/Mask y(\u2191) offset/maximum": 200, "img2img/Mask y(\u2191) offset/step": 1, "img2img/Mask erosion (-) / dilation (+)/visible": true, "img2img/Mask erosion (-) / dilation (+)/value": 4, "img2img/Mask erosion (-) / dilation (+)/minimum": -128, "img2img/Mask erosion (-) / dilation (+)/maximum": 128, "img2img/Mask erosion (-) / dilation (+)/step": 4, "img2img/Mask merge mode/visible": true, "img2img/Mask merge mode/value": "None", "img2img/Inpaint mask blur/visible": true, "img2img/Inpaint mask blur/value": 4, "img2img/Inpaint mask blur/minimum": 0, "img2img/Inpaint mask blur/maximum": 64, "img2img/Inpaint mask blur/step": 1, "img2img/Inpaint denoising strength/visible": true, "img2img/Inpaint denoising strength/value": 0.4, "img2img/Inpaint denoising strength/minimum": 0.0, "img2img/Inpaint denoising strength/maximum": 1.0, "img2img/Inpaint denoising strength/step": 0.01, "img2img/Inpaint only masked/visible": true, "img2img/Inpaint only masked/value": true, "img2img/Inpaint only masked padding, pixels/visible": true, "img2img/Inpaint only masked padding, pixels/value": 32, "img2img/Inpaint only masked padding, pixels/minimum": 0, "img2img/Inpaint only masked padding, pixels/maximum": 256, "img2img/Inpaint only masked padding, pixels/step": 4, 
"img2img/Use separate width/height/visible": true, "img2img/Use separate width/height/value": false, "img2img/inpaint width/visible": true, "img2img/inpaint width/value": 512, "img2img/inpaint width/minimum": 64, "img2img/inpaint width/maximum": 2048, "img2img/inpaint width/step": 4, "img2img/inpaint height/visible": true, "img2img/inpaint height/value": 512, "img2img/inpaint height/minimum": 64, "img2img/inpaint height/maximum": 2048, "img2img/inpaint height/step": 4, "img2img/Use separate steps/visible": true, "img2img/Use separate steps/value": false, "img2img/ADetailer steps/visible": true, "img2img/ADetailer steps/value": 28, "img2img/ADetailer steps/minimum": 1, "img2img/ADetailer steps/maximum": 150, "img2img/ADetailer steps/step": 1, "img2img/Use separate CFG scale/visible": true, "img2img/Use separate CFG scale/value": false, "img2img/ADetailer CFG scale/visible": true, "img2img/ADetailer CFG scale/value": 7.0, "img2img/ADetailer CFG scale/minimum": 0.0, "img2img/ADetailer CFG scale/maximum": 30.0, "img2img/ADetailer CFG scale/step": 0.5, "img2img/Restore faces after ADetailer/visible": true, "img2img/Restore faces after ADetailer/value": false, "img2img/ControlNet model/visible": true, "img2img/ControlNet model/value": "None", "img2img/ControlNet weight/visible": true, "img2img/ControlNet weight/value": 1.0, "img2img/ControlNet weight/minimum": 0.0, "img2img/ControlNet weight/maximum": 1.0, "img2img/ControlNet weight/step": 0.01, "img2img/ControlNet guidance start/visible": true, "img2img/ControlNet guidance start/value": 0.0, "img2img/ControlNet guidance start/minimum": 0.0, "img2img/ControlNet guidance start/maximum": 1.0, "img2img/ControlNet guidance start/step": 0.01, "img2img/ControlNet guidance end/visible": true, "img2img/ControlNet guidance end/value": 1.0, "img2img/ControlNet guidance end/minimum": 0.0, "img2img/ControlNet guidance end/maximum": 1.0, "img2img/ControlNet guidance end/step": 0.01, "img2img/ADetailer model 2nd/visible": true, 
"img2img/ADetailer model 2nd/value": "None", "img2img/ad_prompt 2nd/visible": true, "img2img/ad_prompt 2nd/value": "", "img2img/ad_negative_prompt 2nd/visible": true, "img2img/ad_negative_prompt 2nd/value": "", "img2img/Detection model confidence threshold 2nd/visible": true, "img2img/Detection model confidence threshold 2nd/value": 0.3, "img2img/Detection model confidence threshold 2nd/minimum": 0.0, "img2img/Detection model confidence threshold 2nd/maximum": 1.0, "img2img/Detection model confidence threshold 2nd/step": 0.01, "img2img/Mask min area ratio 2nd/visible": true, "img2img/Mask min area ratio 2nd/value": 0.0, "img2img/Mask min area ratio 2nd/minimum": 0.0, "img2img/Mask min area ratio 2nd/maximum": 1.0, "img2img/Mask min area ratio 2nd/step": 0.001, "img2img/Mask max area ratio 2nd/visible": true, "img2img/Mask max area ratio 2nd/value": 1.0, "img2img/Mask max area ratio 2nd/minimum": 0.0, "img2img/Mask max area ratio 2nd/maximum": 1.0, "img2img/Mask max area ratio 2nd/step": 0.001, "img2img/Mask x(\u2192) offset 2nd/visible": true, "img2img/Mask x(\u2192) offset 2nd/value": 0, "img2img/Mask x(\u2192) offset 2nd/minimum": -200, "img2img/Mask x(\u2192) offset 2nd/maximum": 200, "img2img/Mask x(\u2192) offset 2nd/step": 1, "img2img/Mask y(\u2191) offset 2nd/visible": true, "img2img/Mask y(\u2191) offset 2nd/value": 0, "img2img/Mask y(\u2191) offset 2nd/minimum": -200, "img2img/Mask y(\u2191) offset 2nd/maximum": 200, "img2img/Mask y(\u2191) offset 2nd/step": 1, "img2img/Mask erosion (-) / dilation (+) 2nd/visible": true, "img2img/Mask erosion (-) / dilation (+) 2nd/value": 4, "img2img/Mask erosion (-) / dilation (+) 2nd/minimum": -128, "img2img/Mask erosion (-) / dilation (+) 2nd/maximum": 128, "img2img/Mask erosion (-) / dilation (+) 2nd/step": 4, "img2img/Mask merge mode 2nd/visible": true, "img2img/Mask merge mode 2nd/value": "None", "img2img/Inpaint mask blur 2nd/visible": true, "img2img/Inpaint mask blur 2nd/value": 4, "img2img/Inpaint mask blur 
2nd/minimum": 0, "img2img/Inpaint mask blur 2nd/maximum": 64, "img2img/Inpaint mask blur 2nd/step": 1, "img2img/Inpaint denoising strength 2nd/visible": true, "img2img/Inpaint denoising strength 2nd/value": 0.4, "img2img/Inpaint denoising strength 2nd/minimum": 0.0, "img2img/Inpaint denoising strength 2nd/maximum": 1.0, "img2img/Inpaint denoising strength 2nd/step": 0.01, "img2img/Inpaint only masked 2nd/visible": true, "img2img/Inpaint only masked 2nd/value": true, "img2img/Inpaint only masked padding, pixels 2nd/visible": true, "img2img/Inpaint only masked padding, pixels 2nd/value": 32, "img2img/Inpaint only masked padding, pixels 2nd/minimum": 0, "img2img/Inpaint only masked padding, pixels 2nd/maximum": 256, "img2img/Inpaint only masked padding, pixels 2nd/step": 4, "img2img/Use separate width/height 2nd/visible": true, "img2img/Use separate width/height 2nd/value": false, "img2img/inpaint width 2nd/visible": true, "img2img/inpaint width 2nd/value": 512, "img2img/inpaint width 2nd/minimum": 64, "img2img/inpaint width 2nd/maximum": 2048, "img2img/inpaint width 2nd/step": 4, "img2img/inpaint height 2nd/visible": true, "img2img/inpaint height 2nd/value": 512, "img2img/inpaint height 2nd/minimum": 64, "img2img/inpaint height 2nd/maximum": 2048, "img2img/inpaint height 2nd/step": 4, "img2img/Use separate steps 2nd/visible": true, "img2img/Use separate steps 2nd/value": false, "img2img/ADetailer steps 2nd/visible": true, "img2img/ADetailer steps 2nd/value": 28, "img2img/ADetailer steps 2nd/minimum": 1, "img2img/ADetailer steps 2nd/maximum": 150, "img2img/ADetailer steps 2nd/step": 1, "img2img/Use separate CFG scale 2nd/visible": true, "img2img/Use separate CFG scale 2nd/value": false, "img2img/ADetailer CFG scale 2nd/visible": true, "img2img/ADetailer CFG scale 2nd/value": 7.0, "img2img/ADetailer CFG scale 2nd/minimum": 0.0, "img2img/ADetailer CFG scale 2nd/maximum": 30.0, "img2img/ADetailer CFG scale 2nd/step": 0.5, "img2img/Restore faces after ADetailer 
2nd/visible": true, "img2img/Restore faces after ADetailer 2nd/value": false, "img2img/ControlNet model 2nd/visible": true, "img2img/ControlNet model 2nd/value": "None", "img2img/ControlNet weight 2nd/visible": true, "img2img/ControlNet weight 2nd/value": 1.0, "img2img/ControlNet weight 2nd/minimum": 0.0, "img2img/ControlNet weight 2nd/maximum": 1.0, "img2img/ControlNet weight 2nd/step": 0.01, "img2img/ControlNet guidance start 2nd/visible": true, "img2img/ControlNet guidance start 2nd/value": 0.0, "img2img/ControlNet guidance start 2nd/minimum": 0.0, "img2img/ControlNet guidance start 2nd/maximum": 1.0, "img2img/ControlNet guidance start 2nd/step": 0.01, "img2img/ControlNet guidance end 2nd/visible": true, "img2img/ControlNet guidance end 2nd/value": 1.0, "img2img/ControlNet guidance end 2nd/minimum": 0.0, "img2img/ControlNet guidance end 2nd/maximum": 1.0, "img2img/ControlNet guidance end 2nd/step": 0.01, "img2img/Input Directory/visible": true, "img2img/Input Directory/value": "", "img2img/New Canvas Width/visible": true, "img2img/New Canvas Width/value": 512, "img2img/New Canvas Width/minimum": 256, "img2img/New Canvas Width/maximum": 1024, "img2img/New Canvas Width/step": 64, "img2img/New Canvas Height/visible": true, "img2img/New Canvas Height/value": 512, "img2img/New Canvas Height/minimum": 256, "img2img/New Canvas Height/maximum": 1024, "img2img/New Canvas Height/step": 64, "img2img/Create New Canvas/visible": true, "img2img/Cancel/visible": true, "img2img/Pixel Perfect/visible": true, "img2img/Pixel Perfect/value": false, "img2img/Allow Preview/visible": true, "img2img/Allow Preview/value": false, "img2img/Control Type/visible": true, "img2img/Control Type/value": "All", "img2img/Control Weight/visible": true, "img2img/Control Weight/value": 1.0, "img2img/Control Weight/minimum": 0.0, "img2img/Control Weight/maximum": 2.0, "img2img/Control Weight/step": 0.05, "img2img/Starting Control Step/visible": true, "img2img/Starting Control Step/value": 0.0, 
"img2img/Starting Control Step/minimum": 0.0, "img2img/Starting Control Step/maximum": 1.0, "img2img/Starting Control Step/step": 0.01, "img2img/Ending Control Step/visible": true, "img2img/Ending Control Step/value": 1.0, "img2img/Ending Control Step/minimum": 0.0, "img2img/Ending Control Step/maximum": 1.0, "img2img/Ending Control Step/step": 0.01, "img2img/Preprocessor resolution/value": 512, "img2img/Preprocessor resolution/minimum": 64, "img2img/Preprocessor resolution/maximum": 2048, "img2img/Preprocessor resolution/step": 10, "img2img/Control Mode/visible": true, "img2img/Control Mode/value": "Balanced", "img2img/[Loopback] Automatically send generated images to this ControlNet unit/visible": true, "img2img/[Loopback] Automatically send generated images to this ControlNet unit/value": false, "customscript/lora_block_weight.py/img2img/diff image color/visible": true, "customscript/lora_block_weight.py/img2img/diff image color/value": "black", "customscript/lora_block_weight.py/img2img/change X-Y/visible": true, "customscript/lora_block_weight.py/img2img/change X-Y/value": false, "customscript/lora_block_weight.py/img2img/difference threshold/visible": true, "customscript/lora_block_weight.py/img2img/difference threshold/value": "20", "customscript/lora_block_weight.py/img2img/number of seed/visible": true, "customscript/lora_block_weight.py/img2img/number of seed/value": 1.0, "img2img/Reload Presets/visible": true, "img2img/Reload Tags/visible": true, "img2img/Save Presets/visible": true, "img2img/Open TextEditor/visible": true, "customscript/lora_block_weight.py/img2img/print change/visible": true, "customscript/lora_block_weight.py/img2img/print change/value": false, "img2img/Swap X/Y axes/visible": true, "img2img/Swap Y/Z axes/visible": true, "img2img/Swap X/Z axes/visible": true, "extras/\ud83d\udcc2/visible": true, "extras/Save/visible": true, "extras/Zip/visible": true, "extras/Send to img2img/visible": true, "extras/Send to inpaint/visible": true, 
"extras/Send to extras/visible": true, "extras/Upscale/visible": true, "extras/Tabs@mode_extras/selected": null, "extras/Tabs@extras_resize_mode/selected": null, "customscript/postprocessing_upscale.py/extras/Width/minimum": 64, "customscript/postprocessing_upscale.py/extras/Width/maximum": 2048, "customscript/postprocessing_upscale.py/extras/Width/step": 8, "extras/\u2b80/visible": true, "customscript/postprocessing_upscale.py/extras/Height/minimum": 64, "customscript/postprocessing_upscale.py/extras/Height/maximum": 2048, "customscript/postprocessing_upscale.py/extras/Height/step": 8, "pnginfo/Send to txt2img/visible": true, "pnginfo/Send to img2img/visible": true, "pnginfo/Send to inpaint/visible": true, "pnginfo/Send to extras/visible": true, "modelmerger/Merge/visible": true, "modelmerger/\ud83d\udd04/visible": true, "modelmerger/Save metadata (.safetensors only)/visible": true, "modelmerger/Save metadata (.safetensors only)/value": true, "train/Tabs@train_tabs_2img_settings/selected": null, "train/Create embedding/visible": true, "train/Create hypernetwork/visible": true, "train/Keep original size/visible": true, "train/Keep original size/value": false, "train/Interrupt/visible": true, "train/Preprocess/visible": true, "train/\ud83d\udd04/visible": true, "train/Value/visible": true, "train/Value/value": "0.1", "train/Train Embedding/visible": true, "train/Train Hypernetwork/visible": true, "civitai_helper/Download Max Size Preview/visible": true, "civitai_helper/Download Max Size Preview/value": true, "civitai_helper/Skip NSFW Preview Images/visible": true, "civitai_helper/Skip NSFW Preview Images/value": false, "civitai_helper/Scan/visible": true, "civitai_helper/Model Type/visible": true, "civitai_helper/Model Type/value": "ckp", "civitai_helper/Only Show Models have no Info/visible": true, "civitai_helper/Only Show Models have no Info/value": false, "civitai_helper/Model/visible": true, "civitai_helper/Model/value": "ckp", "civitai_helper/Civitai 
URL/visible": true, "civitai_helper/Civitai URL/value": "", "civitai_helper/Get Model Info from Civitai/visible": true, "civitai_helper/1. Get Model Info by Civitai Url/visible": true, "civitai_helper/Model Name/visible": true, "civitai_helper/Model Name/value": "", "civitai_helper/Sub-folder/visible": true, "civitai_helper/Sub-folder/value": "", "civitai_helper/Model Version/visible": true, "civitai_helper/Model Version/value": "", "civitai_helper/Download All files/visible": true, "civitai_helper/Download All files/value": false, "civitai_helper/3. Download Model/visible": true, "civitai_helper/Check New Version from Civitai/visible": true, "civitai_helper/Open Url At Client Side/visible": true, "civitai_helper/Open Url At Client Side/value": true, "civitai_helper/Always Display Buttons/visible": true, "civitai_helper/Always Display Buttons/value": false, "civitai_helper/Show Button On Thumb Mode/visible": true, "civitai_helper/Show Button On Thumb Mode/value": true, "civitai_helper/Proxy/visible": true, "civitai_helper/Proxy/value": "", "civitai_helper/Save Setting/visible": true, "civitai_helper/Request Msg From Js/value": "", "civitai_helper/Response Msg From Python/value": "", "additional_networks/Model path filter/visible": true, "additional_networks/Model path filter/value": "", "additional_networks/Network module/visible": true, "additional_networks/Network module/value": "LoRA", "additional_networks/Model/visible": true, "additional_networks/Model/value": "None", "additional_networks/\ud83d\udd04/visible": true, "additional_networks/Model hash/visible": true, "additional_networks/Model hash/value": "", "additional_networks/Legacy hash/visible": true, "additional_networks/Legacy hash/value": "", "additional_networks/Model path/visible": true, "additional_networks/Model path/value": "", "additional_networks/\ud83d\udcc2/visible": true, "additional_networks/1\ufe0f\u20e3/visible": true, "additional_networks/2\ufe0f\u20e3/visible": true, 
"additional_networks/3\ufe0f\u20e3/visible": true, "additional_networks/4\ufe0f\u20e3/visible": true, "additional_networks/5\ufe0f\u20e3/visible": true, "additional_networks/Containing directory/visible": true, "additional_networks/Containing directory/value": "", "additional_networks/Only copy to models with same session ID/visible": true, "additional_networks/Only copy to models with same session ID/value": true, "additional_networks/Only copy to models with no metadata/visible": true, "additional_networks/Only copy to models with no metadata/value": true, "additional_networks/Copy Metadata/visible": true, "additional_networks/Name/visible": true, "additional_networks/Name/value": "", "additional_networks/Author/visible": true, "additional_networks/Author/value": "", "additional_networks/Keywords/visible": true, "additional_networks/Keywords/value": "", "additional_networks/Description/visible": true, "additional_networks/Description/value": "", "additional_networks/Source/visible": true, "additional_networks/Source/value": "", "additional_networks/Rating/visible": true, "additional_networks/Rating/value": 0, "additional_networks/Rating/minimum": 0, "additional_networks/Rating/maximum": 10, "additional_networks/Rating/step": 1, "additional_networks/Tags/visible": true, "additional_networks/Tags/value": "", "additional_networks/Editing Enabled/visible": true, "additional_networks/Editing Enabled/value": false, "additional_networks/Save Metadata/visible": true, "additional_networks/Send to txt2img/visible": true, "additional_networks/Send to img2img/visible": true, "additional_networks/Send to inpaint/visible": true, "additional_networks/Send to extras/visible": true, "additional_networks/Generate Info/visible": true, "additional_networks/Generate Info/value": "", "supermerger/Model A/visible": true, "supermerger/Model A/value": null, "supermerger/\ud83d\udd04/visible": true, "supermerger/Model B/visible": true, "supermerger/Model B/value": null, "supermerger/Model 
C/visible": true, "supermerger/Model C/value": null, "supermerger/Merge Mode/visible": true, "supermerger/Merge Mode/value": "Weight sum:A*(1-alpha)+B*alpha", "supermerger/Calcutation Mode/visible": true, "supermerger/Calcutation Mode/value": "normal", "supermerger/use MBW/visible": true, "supermerger/use MBW/value": false, "supermerger/alpha/visible": true, "supermerger/alpha/value": 0.5, "supermerger/alpha/minimum": -1.0, "supermerger/alpha/maximum": 2, "supermerger/alpha/step": 0.001, "supermerger/beta/visible": true, "supermerger/beta/value": 0.25, "supermerger/beta/minimum": -1.0, "supermerger/beta/maximum": 2, "supermerger/beta/step": 0.001, "supermerger/Merge!/visible": true, "supermerger/Merge&Gen/visible": true, "supermerger/Gen/visible": true, "supermerger/Stop/visible": true, "supermerger/Custom Name (Optional)/visible": true, "supermerger/Custom Name (Optional)/value": "", "supermerger/merge from ID/visible": true, "supermerger/merge from ID/value": "-1", "supermerger/Set from ID(-1 for last)/visible": true, "supermerger/Batch size/visible": true, "supermerger/Batch size/value": 1, "supermerger/Batch size/minimum": 0, "supermerger/Batch size/maximum": 8, "supermerger/Batch size/step": 1, "supermerger/Upscaler/visible": true, "supermerger/Upscaler/value": "Latent", "supermerger/Hires steps/visible": true, "supermerger/Hires steps/value": 0, "supermerger/Hires steps/minimum": 0, "supermerger/Hires steps/maximum": 150, "supermerger/Hires steps/step": 1, "supermerger/Denoising strength/visible": true, "supermerger/Denoising strength/value": 0.7, "supermerger/Denoising strength/minimum": 0.0, "supermerger/Denoising strength/maximum": 1.0, "supermerger/Denoising strength/step": 0.01, "supermerger/Upscale by/visible": true, "supermerger/Upscale by/value": 2.0, "supermerger/Upscale by/minimum": 1.0, "supermerger/Upscale by/maximum": 4.0, "supermerger/Upscale by/step": 0.05, "supermerger/Blocks:Element:Ratio,Blocks:Element:Ratio,.../visible": true, 
"supermerger/Blocks:Element:Ratio,Blocks:Element:Ratio,.../value": "", "supermerger/Blocks:Tensors/visible": true, "supermerger/Blocks:Tensors/value": "", "supermerger/X type/visible": true, "supermerger/X type/value": "alpha", "supermerger/number of -1/visible": true, "supermerger/number of -1/value": 3.0, "supermerger/Sequential Merge Parameters/visible": true, "supermerger/Sequential Merge Parameters/value": "0.25,0.5,0.75", "supermerger/Y type/visible": true, "supermerger/Y type/value": "none", "supermerger/Y grid (Disabled if blank)/value": "", "supermerger/Sequential XY Merge and Generation/visible": true, "supermerger/Stop XY/visible": true, "supermerger/Reserve XY Plot/visible": true, "supermerger/Current Model/visible": true, "supermerger/Current Model/value": "", "supermerger/Message/visible": true, "supermerger/Message/value": "", "supermerger/\ud83d\udcc2/visible": true, "supermerger/Save/visible": true, "supermerger/Zip/visible": true, "supermerger/Send to img2img/visible": true, "supermerger/Send to inpaint/visible": true, "supermerger/Send to extras/visible": true, "supermerger//visible": true, "supermerger//value": "", "supermerger/Add to Sequence X/visible": true, "supermerger/Add to Sequence Y/visible": true, "supermerger/set to alpha/visible": true, "supermerger/read from alpha/visible": true, "supermerger/set to beta/visible": true, "supermerger/read from beta/visible": true, "supermerger/set to X/visible": true, "supermerger/weights for alpha, base alpha,IN00,IN02,...IN11,M00,OUT00,...,OUT11/visible": true, "supermerger/weights for alpha, base alpha,IN00,IN02,...IN11,M00,OUT00,...,OUT11/value": "0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5", "supermerger/weights,for beta, base beta,IN00,IN02,...IN11,M00,OUT00,...,OUT11/visible": true, "supermerger/weights,for beta, base beta,IN00,IN02,...IN11,M00,OUT00,...,OUT11/value": 
"0.2,0.2,0.2,0.2,0.2,0.2,0.2,0.2,0.2,0.2,0.2,0.2,0.2,0.2,0.2,0.2,0.2,0.2,0.2,0.2,0.2,0.2,0.2,0.2,0.2,0.2", "supermerger/Base/visible": true, "supermerger/Base/value": 0.5, "supermerger/Base/minimum": 0, "supermerger/Base/maximum": 1, "supermerger/Base/step": 0.01, "supermerger/IN00/visible": true, "supermerger/IN00/value": 0.5, "supermerger/IN00/minimum": 0, "supermerger/IN00/maximum": 1, "supermerger/IN00/step": 0.01, "supermerger/IN01/visible": true, "supermerger/IN01/value": 0.5, "supermerger/IN01/minimum": 0, "supermerger/IN01/maximum": 1, "supermerger/IN01/step": 0.01, "supermerger/IN02/visible": true, "supermerger/IN02/value": 0.5, "supermerger/IN02/minimum": 0, "supermerger/IN02/maximum": 1, "supermerger/IN02/step": 0.01, "supermerger/IN03/visible": true, "supermerger/IN03/value": 0.5, "supermerger/IN03/minimum": 0, "supermerger/IN03/maximum": 1, "supermerger/IN03/step": 0.01, "supermerger/IN04/visible": true, "supermerger/IN04/value": 0.5, "supermerger/IN04/minimum": 0, "supermerger/IN04/maximum": 1, "supermerger/IN04/step": 0.01, "supermerger/IN05/visible": true, "supermerger/IN05/value": 0.5, "supermerger/IN05/minimum": 0, "supermerger/IN05/maximum": 1, "supermerger/IN05/step": 0.01, "supermerger/IN06/visible": true, "supermerger/IN06/value": 0.5, "supermerger/IN06/minimum": 0, "supermerger/IN06/maximum": 1, "supermerger/IN06/step": 0.01, "supermerger/IN07/visible": true, "supermerger/IN07/value": 0.5, "supermerger/IN07/minimum": 0, "supermerger/IN07/maximum": 1, "supermerger/IN07/step": 0.01, "supermerger/IN08/visible": true, "supermerger/IN08/value": 0.5, "supermerger/IN08/minimum": 0, "supermerger/IN08/maximum": 1, "supermerger/IN08/step": 0.01, "supermerger/IN09/visible": true, "supermerger/IN09/value": 0.5, "supermerger/IN09/minimum": 0, "supermerger/IN09/maximum": 1, "supermerger/IN09/step": 0.01, "supermerger/IN10/visible": true, "supermerger/IN10/value": 0.5, "supermerger/IN10/minimum": 0, "supermerger/IN10/maximum": 1, "supermerger/IN10/step": 
0.01, "supermerger/IN11/visible": true, "supermerger/IN11/value": 0.5, "supermerger/IN11/minimum": 0, "supermerger/IN11/maximum": 1, "supermerger/IN11/step": 0.01, "supermerger/M00/visible": true, "supermerger/M00/value": 0.5, "supermerger/M00/minimum": 0, "supermerger/M00/maximum": 1, "supermerger/M00/step": 0.01, "supermerger/OUT00/visible": true, "supermerger/OUT00/value": 0.5, "supermerger/OUT00/minimum": 0, "supermerger/OUT00/maximum": 1, "supermerger/OUT00/step": 0.01, "supermerger/OUT01/visible": true, "supermerger/OUT01/value": 0.5, "supermerger/OUT01/minimum": 0, "supermerger/OUT01/maximum": 1, "supermerger/OUT01/step": 0.01, "supermerger/OUT02/visible": true, "supermerger/OUT02/value": 0.5, "supermerger/OUT02/minimum": 0, "supermerger/OUT02/maximum": 1, "supermerger/OUT02/step": 0.01, "supermerger/OUT03/visible": true, "supermerger/OUT03/value": 0.5, "supermerger/OUT03/minimum": 0, "supermerger/OUT03/maximum": 1, "supermerger/OUT03/step": 0.01, "supermerger/OUT04/visible": true, "supermerger/OUT04/value": 0.5, "supermerger/OUT04/minimum": 0, "supermerger/OUT04/maximum": 1, "supermerger/OUT04/step": 0.01, "supermerger/OUT05/visible": true, "supermerger/OUT05/value": 0.5, "supermerger/OUT05/minimum": 0, "supermerger/OUT05/maximum": 1, "supermerger/OUT05/step": 0.01, "supermerger/OUT06/visible": true, "supermerger/OUT06/value": 0.5, "supermerger/OUT06/minimum": 0, "supermerger/OUT06/maximum": 1, "supermerger/OUT06/step": 0.01, "supermerger/OUT07/visible": true, "supermerger/OUT07/value": 0.5, "supermerger/OUT07/minimum": 0, "supermerger/OUT07/maximum": 1, "supermerger/OUT07/step": 0.01, "supermerger/OUT08/visible": true, "supermerger/OUT08/value": 0.5, "supermerger/OUT08/minimum": 0, "supermerger/OUT08/maximum": 1, "supermerger/OUT08/step": 0.01, "supermerger/OUT09/visible": true, "supermerger/OUT09/value": 0.5, "supermerger/OUT09/minimum": 0, "supermerger/OUT09/maximum": 1, "supermerger/OUT09/step": 0.01, "supermerger/OUT10/visible": true, 
"supermerger/OUT10/value": 0.5, "supermerger/OUT10/minimum": 0, "supermerger/OUT10/maximum": 1, "supermerger/OUT10/step": 0.01, "supermerger/OUT11/visible": true, "supermerger/OUT11/value": 0.5, "supermerger/OUT11/minimum": 0, "supermerger/OUT11/maximum": 1, "supermerger/OUT11/step": 0.01, "supermerger/Reload Presets/visible": true, "supermerger/Reload Tags/visible": true, "supermerger/Save Presets/visible": true, "supermerger/Open TextEditor/visible": true, "supermerger/available/visible": true, "supermerger/available/value": "GRAD_V,GRAD_A,FLAT_25,FLAT_75,WRAP08,WRAP12,WRAP14,WRAP16,MID12_50,OUT07,OUT12,OUT12_5,RING08_SOFT,RING08_5,RING10_5,RING10_3,SMOOTHSTEP,REVERSE-SMOOTHSTEP,SMOOTHSTEP*2,R_SMOOTHSTEP*2,SMOOTHSTEP*3,R_SMOOTHSTEP*3,SMOOTHSTEP*4,R_SMOOTHSTEP*4,SMOOTHSTEP/2,R_SMOOTHSTEP/2,SMOOTHSTEP/3,R_SMOOTHSTEP/3,SMOOTHSTEP/4,R_SMOOTHSTEP/4,COSINE,REVERSE_COSINE,TRUE_CUBIC_HERMITE,TRUE_REVERSE_CUBIC_HERMITE,FAKE_CUBIC_HERMITE,FAKE_REVERSE_CUBIC_HERMITE,ALL_A,ALL_B", "supermerger/Reloat List/visible": true, "supermerger/Start XY plot/visible": true, "supermerger/Delete list(-1 for all)/visible": true, "supermerger/Delete num : /visible": true, "supermerger/Delete num : /value": 1, "supermerger/Current Cache/visible": true, "supermerger/Current Cache/value": "", "supermerger/Reload Cache List/visible": true, "supermerger/unload model/visible": true, "supermerger/Merge to Checkpoint/visible": true, "supermerger/Make LoRA (alpha * A - beta * B)/visible": true, "supermerger/Checkpoint A/visible": true, "supermerger/Checkpoint A/value": null, "supermerger/Checkpoint B/visible": true, "supermerger/Checkpoint B/value": null, "supermerger/Merge LoRAs/visible": true, "supermerger/save precision/visible": true, "supermerger/save precision/value": "fp16", "supermerger/remake dimension/visible": true, "supermerger/remake dimension/value": "no", "supermerger/filename(option)/visible": true, "supermerger/filename(option)/value": "", 
"supermerger/LoRAname1:ratio1:Blocks1,LoRAname2:ratio2:Blocks2,...(\":blocks\" is option, not necessary)/visible": true, "supermerger/LoRAname1:ratio1:Blocks1,LoRAname2:ratio2:Blocks2,...(\":blocks\" is option, not necessary)/value": "", "supermerger/calculate dimension of LoRAs(It may take a few minutes if there are many LoRAs)/visible": true, "supermerger/update list/visible": true, "supermerger/load_history/visible": true, "supermerger/search/visible": true, "supermerger/Search Mode/visible": true, "supermerger/Search Mode/value": "or", "supermerger/load keys/visible": true, "supermerger/read metadata/visible": true, "supermerger/read metadata/value": null, "image_browser/Tabs@image_browser_tabs_container/selected": null, "image_browser/Images directorytxt2img/visible": true, "image_browser/Images directorytxt2img/value": "/content/microsoftexcel/outputs/txt2img-images", "image_browser/Sub directory depth/visible": true, "image_browser/Sub directory depth/value": 0.0, "image_browser/Add to / replace in saved directories/visible": true, "image_browser/Saved directories/visible": true, "image_browser/Saved directories/value": null, "image_browser/Remove from saved directories/visible": true, "image_browser/Sub directories/visible": true, "image_browser/Sub directories/value": "Nothing selected", "image_browser/Get sub directories/visible": true, "image_browser/First Page/visible": true, "image_browser/Prev Page/visible": true, "image_browser/Page Index/visible": true, "image_browser/Page Index/value": 1.0, "image_browser/\ud83d\udd04/visible": true, "image_browser/Next Page/visible": true, "image_browser/End Page/visible": true, "image_browser/Current ranking/visible": true, "image_browser/Current ranking/value": "None", "image_browser/Set ranking to/visible": true, "image_browser/Set ranking to/value": null, "image_browser/delete next/visible": true, "image_browser/delete next/value": 1.0, "image_browser/also delete off-screen images/visible": true, 
"image_browser/also delete off-screen images/value": false, "image_browser/Delete/visible": true, "image_browser/Sort by/visible": true, "image_browser/Sort by/value": "date", "image_browser/\u25bc/visible": true, "image_browser/Filename keyword search/visible": true, "image_browser/Filename keyword search/value": "", "image_browser/EXIF keyword search/visible": true, "image_browser/EXIF keyword search/value": "", "image_browser/Search negative prompt/visible": true, "image_browser/Search negative prompt/value": "No", "image_browser/case sensitive/visible": true, "image_browser/case sensitive/value": false, "image_browser/regex - e.g. ^(?!.*Hires).*$/visible": true, "image_browser/regex - e.g. ^(?!.*Hires).*$/value": false, "image_browser/Ranking filter/visible": true, "image_browser/Ranking filter/value": "All", "image_browser/Minimum ranking/visible": true, "image_browser/Minimum ranking/value": "1", "image_browser/Maximum ranking/visible": true, "image_browser/Maximum ranking/value": "5", "image_browser//visible": true, "image_browser//value": "Choose Min-max to activate these controls", "image_browser/Scoring type/visible": true, "image_browser/Scoring type/value": "aesthetic_score", "image_browser/Minimum score/visible": true, "image_browser/Minimum score/value": "", "image_browser/Maximum score/visible": true, "image_browser/Maximum score/value": "", "image_browser/Generation Info/visible": true, "image_browser/Generation Info/value": "", "image_browser/File Name/visible": true, "image_browser/File Name/value": "", "image_browser/\ud83d\udcc2/visible": true, "image_browser/Move to favorites/visible": true, "image_browser/Send to txt2img/visible": true, "image_browser/Send to img2img/visible": true, "image_browser/Send to inpaint/visible": true, "image_browser/Send to extras/visible": true, "image_browser/Send to txt2img ControlNet/visible": true, "image_browser/Send to img2img ControlNet/visible": true, "image_browser/ControlNet number/value": "0", 
"image_browser/Directory path/visible": true, "image_browser/Directory path/value": "", "image_browser/Move to directory/visible": true, "image_browser/Renew Page/visible": true, "image_browser/set_index/visible": true, "image_browser/load_switch/visible": true, "image_browser/load_switch/value": "load_switch", "image_browser/to_dir_load_switch/visible": true, "image_browser/to_dir_load_switch/value": "to dir load_switch", "image_browser/turn_page_switch/visible": true, "image_browser/turn_page_switch/value": 1.0, "image_browser/Last message/visible": true, "image_browser/Last message/value": "", "image_browser/Rebuild exif cache/visible": true, "image_browser/Delete 0-entries from exif cache/visible": true, "image_browser/Update directory names in database/visible": true, "image_browser/From (full path)/visible": true, "image_browser/From (full path)/value": "", "image_browser/to (full path)/visible": true, "image_browser/to (full path)/value": "", "image_browser/Reapply ranking after moving files/visible": true, "image_browser/Restart debug/visible": true, "image_browser/Get javascript logs/visible": true, "image_browser/Javascript logs/visible": true, "image_browser/Javascript logs/value": "", "image_browser/Rebuild ranking from exif info/visible": true, "image_browser/Images directoryimg2img/visible": true, "image_browser/Images directoryimg2img/value": "/content/microsoftexcel/outputs/img2img-images", "image_browser/Images directorytxt2img-grids/visible": true, "image_browser/Images directorytxt2img-grids/value": "/content/microsoftexcel/outputs/txt2img-grids", "image_browser/Images directoryimg2img-grids/visible": true, "image_browser/Images directoryimg2img-grids/value": "/content/microsoftexcel/outputs/img2img-grids", "image_browser/Images directoryExtras/visible": true, "image_browser/Images directoryExtras/value": "/content/microsoftexcel/outputs/extras-images", "image_browser/Images directoryFavorites/visible": true, "image_browser/Images 
directoryFavorites/value": "/content/microsoftexcel/log/images", "image_browser/Images directory/visible": true, "image_browser/Images directory/value": null, "image_browser/Images directoryMaintenance/visible": true, "image_browser/Images directoryMaintenance/value": null, "ui_theme/Themes/visible": true, "ui_theme/Themes/value": null, "ui_theme/Save / Save as/visible": true, "ui_theme/Save / Save as/value": "", "ui_theme/Reset/visible": true, "ui_theme/Save/visible": true, "ui_theme/Vars/visible": true, "ui_theme/Vars/value": "", "ui_theme/Css/visible": true, "ui_theme/Css/value": "", "ui_theme/Hue/visible": true, "ui_theme/Hue/value": 0, "ui_theme/Hue/minimum": 0, "ui_theme/Hue/maximum": 360, "ui_theme/Hue/step": 1, "ui_theme/Saturation/visible": true, "ui_theme/Saturation/value": 0, "ui_theme/Saturation/minimum": -100, "ui_theme/Saturation/maximum": 100, "ui_theme/Saturation/step": 1, "ui_theme/Lightness/visible": true, "ui_theme/Lightness/value": 0, "ui_theme/Lightness/minimum": -50, "ui_theme/Lightness/maximum": 50, "ui_theme/Lightness/step": 1, "ui_theme/Invert/visible": true, "ui_theme/Gap size/visible": true, "ui_theme/Gap size/value": 1, "ui_theme/Gap size/minimum": 1, "ui_theme/Gap size/maximum": 16, "ui_theme/Gap size/step": 1, "ui_theme/Padding size/visible": true, "ui_theme/Padding size/value": 1, "ui_theme/Padding size/minimum": 1, "ui_theme/Padding size/maximum": 16, "ui_theme/Padding size/step": 1, "ui_theme/Mobile Gap size/visible": true, "ui_theme/Mobile Gap size/value": 1, "ui_theme/Mobile Gap size/minimum": 1, "ui_theme/Mobile Gap size/maximum": 16, "ui_theme/Mobile Gap size/step": 1, "ui_theme/Mobile Padding size/visible": true, "ui_theme/Mobile Padding size/value": 1, "ui_theme/Mobile Padding size/minimum": 1, "ui_theme/Mobile Padding size/maximum": 16, "ui_theme/Mobile Padding size/step": 1, "ui_theme/Border radius/visible": true, "ui_theme/Border radius/value": 0, "ui_theme/Border radius/minimum": 0, "ui_theme/Border radius/maximum": 16, 
"ui_theme/Border radius/step": 1, "settings/Apply settings/visible": true, "settings/Reload UI/visible": true, "settings/Tabs@settings/selected": null, "settings/Always save all generated images/visible": true, "settings/Always save all generated images/value": true, "settings//visible": true, "settings//value": false, "settings/File format for images/visible": true, "settings/File format for images/value": "png", "settings/Images filename pattern/visible": true, "settings/Images filename pattern/value": "", "settings/Add number to filename when saving/visible": true, "settings/Add number to filename when saving/value": true, "settings/Always save all generated image grids/visible": true, "settings/Always save all generated image grids/value": true, "settings/File format for grids/visible": true, "settings/File format for grids/value": "png", "settings/Add extended info (seed, prompt) to filename when saving grid/visible": true, "settings/Add extended info (seed, prompt) to filename when saving grid/value": false, "settings/Do not save grids consisting of one picture/visible": true, "settings/Do not save grids consisting of one picture/value": true, "settings/Prevent empty spots in grid (when set to autodetect)/visible": true, "settings/Prevent empty spots in grid (when set to autodetect)/value": false, "settings/Grid row count; use -1 for autodetect and 0 for it to be same as batch size/visible": true, "settings/Grid row count; use -1 for autodetect and 0 for it to be same as batch size/value": -1, "settings/Grid row count; use -1 for autodetect and 0 for it to be same as batch size/minimum": -1, "settings/Grid row count; use -1 for autodetect and 0 for it to be same as batch size/maximum": 16, "settings/Grid row count; use -1 for autodetect and 0 for it to be same as batch size/step": 1, "settings/Save text information about generation parameters as chunks to png files/visible": true, "settings/Save text information about generation parameters as chunks to png 
files/value": true, "settings/Create a text file next to every image with generation parameters./visible": true, "settings/Create a text file next to every image with generation parameters./value": false, "settings/Save a copy of image before doing face restoration./visible": true, "settings/Save a copy of image before doing face restoration./value": false, "settings/Save a copy of image before applying highres fix./visible": true, "settings/Save a copy of image before applying highres fix./value": false, "settings/Save a copy of image before applying color correction to img2img results/visible": true, "settings/Save a copy of image before applying color correction to img2img results/value": false, "settings/For inpainting, save a copy of the greyscale mask/visible": true, "settings/For inpainting, save a copy of the greyscale mask/value": false, "settings/For inpainting, save a masked composite/visible": true, "settings/For inpainting, save a masked composite/value": false, "settings/Quality for saved jpeg images/visible": true, "settings/Quality for saved jpeg images/value": 80, "settings/Quality for saved jpeg images/minimum": 1, "settings/Quality for saved jpeg images/maximum": 100, "settings/Quality for saved jpeg images/step": 1, "settings/Use lossless compression for webp images/visible": true, "settings/Use lossless compression for webp images/value": false, "settings/Save copy of large images as JPG/visible": true, "settings/Save copy of large images as JPG/value": true, "settings/File size limit for the above option, MB/visible": true, "settings/File size limit for the above option, MB/value": 4.0, "settings/Width/height limit for the above option, in pixels/visible": true, "settings/Width/height limit for the above option, in pixels/value": 4000.0, "settings/Maximum image size/visible": true, "settings/Maximum image size/value": 200.0, "settings/Use original name for output filename during batch process in extras tab/visible": true, "settings/Use 
original name for output filename during batch process in extras tab/value": true, "settings/Use upscaler name as filename suffix in the extras tab/visible": true, "settings/Use upscaler name as filename suffix in the extras tab/value": false, "settings/When using 'Save' button, only save a single selected image/visible": true, "settings/When using 'Save' button, only save a single selected image/value": true, "settings/Save init images when using img2img/visible": true, "settings/Save init images when using img2img/value": false, "settings/Directory for temporary images; leave empty for default/visible": true, "settings/Directory for temporary images; leave empty for default/value": "", "settings/Cleanup non-default temporary directory when starting webui/visible": true, "settings/Cleanup non-default temporary directory when starting webui/value": false, "settings/Output directory for images; if empty, defaults to three directories below/visible": true, "settings/Output directory for images; if empty, defaults to three directories below/value": "", "settings/Output directory for txt2img images/visible": true, "settings/Output directory for txt2img images/value": "outputs/txt2img-images", "settings/Output directory for img2img images/visible": true, "settings/Output directory for img2img images/value": "outputs/img2img-images", "settings/Output directory for images from extras tab/visible": true, "settings/Output directory for images from extras tab/value": "outputs/extras-images", "settings/Output directory for grids; if empty, defaults to two directories below/visible": true, "settings/Output directory for grids; if empty, defaults to two directories below/value": "", "settings/Output directory for txt2img grids/visible": true, "settings/Output directory for txt2img grids/value": "outputs/txt2img-grids", "settings/Output directory for img2img grids/visible": true, "settings/Output directory for img2img grids/value": "outputs/img2img-grids", "settings/Directory 
for saving images using the Save button/visible": true, "settings/Directory for saving images using the Save button/value": "log/images", "settings/Directory for saving init images when using img2img/visible": true, "settings/Directory for saving init images when using img2img/value": "outputs/init-images", "settings/Save images to a subdirectory/visible": true, "settings/Save images to a subdirectory/value": true, "settings/Save grids to a subdirectory/visible": true, "settings/Save grids to a subdirectory/value": true, "settings/When using \"Save\" button, save images to a subdirectory/visible": true, "settings/When using \"Save\" button, save images to a subdirectory/value": false, "settings/Directory name pattern/visible": true, "settings/Directory name pattern/value": "[date]", "settings/Max prompt words for [prompt_words] pattern/visible": true, "settings/Max prompt words for [prompt_words] pattern/value": 8, "settings/Max prompt words for [prompt_words] pattern/minimum": 1, "settings/Max prompt words for [prompt_words] pattern/maximum": 20, "settings/Max prompt words for [prompt_words] pattern/step": 1, "settings/Tile size for ESRGAN upscalers./visible": true, "settings/Tile size for ESRGAN upscalers./value": 192, "settings/Tile size for ESRGAN upscalers./minimum": 0, "settings/Tile size for ESRGAN upscalers./maximum": 512, "settings/Tile size for ESRGAN upscalers./step": 16, "settings/Tile overlap for ESRGAN upscalers./visible": true, "settings/Tile overlap for ESRGAN upscalers./value": 8, "settings/Tile overlap for ESRGAN upscalers./minimum": 0, "settings/Tile overlap for ESRGAN upscalers./maximum": 48, "settings/Tile overlap for ESRGAN upscalers./step": 1, "settings/Upscaler for img2img/visible": true, "settings/Upscaler for img2img/value": null, "settings/LDSR processing steps. Lower = faster/visible": true, "settings/LDSR processing steps. Lower = faster/value": 100, "settings/LDSR processing steps. 
Lower = faster/minimum": 1, "settings/LDSR processing steps. Lower = faster/maximum": 200, "settings/LDSR processing steps. Lower = faster/step": 1, "settings/Cache LDSR model in memory/visible": true, "settings/Cache LDSR model in memory/value": false, "settings/Tile size for SCUNET upscalers./visible": true, "settings/Tile size for SCUNET upscalers./value": 256, "settings/Tile size for SCUNET upscalers./minimum": 0, "settings/Tile size for SCUNET upscalers./maximum": 512, "settings/Tile size for SCUNET upscalers./step": 16, "settings/Tile overlap for SCUNET upscalers./visible": true, "settings/Tile overlap for SCUNET upscalers./value": 8, "settings/Tile overlap for SCUNET upscalers./minimum": 0, "settings/Tile overlap for SCUNET upscalers./maximum": 64, "settings/Tile overlap for SCUNET upscalers./step": 1, "settings/Tile size for all SwinIR./visible": true, "settings/Tile size for all SwinIR./value": 192, "settings/Tile size for all SwinIR./minimum": 16, "settings/Tile size for all SwinIR./maximum": 512, "settings/Tile size for all SwinIR./step": 16, "settings/Tile overlap, in pixels for SwinIR. Low values = visible seam./visible": true, "settings/Tile overlap, in pixels for SwinIR. Low values = visible seam./value": 8, "settings/Tile overlap, in pixels for SwinIR. Low values = visible seam./minimum": 0, "settings/Tile overlap, in pixels for SwinIR. Low values = visible seam./maximum": 48, "settings/Tile overlap, in pixels for SwinIR. 
Low values = visible seam./step": 1, "settings/Face restoration model/visible": true, "settings/Face restoration model/value": "CodeFormer", "settings/CodeFormer weight/visible": true, "settings/CodeFormer weight/value": 0.5, "settings/CodeFormer weight/minimum": 0, "settings/CodeFormer weight/maximum": 1, "settings/CodeFormer weight/step": 0.01, "settings/Move face restoration model from VRAM into RAM after processing/visible": true, "settings/Move face restoration model from VRAM into RAM after processing/value": false, "settings/Show warnings in console./visible": true, "settings/Show warnings in console./value": false, "settings/VRAM usage polls per second during generation./visible": true, "settings/VRAM usage polls per second during generation./value": 8, "settings/VRAM usage polls per second during generation./minimum": 0, "settings/VRAM usage polls per second during generation./maximum": 40, "settings/VRAM usage polls per second during generation./step": 1, "settings/Always print all generation info to standard output/visible": true, "settings/Always print all generation info to standard output/value": false, "settings/Add a second progress bar to the console that shows progress for an entire job./visible": true, "settings/Add a second progress bar to the console that shows progress for an entire job./value": true, "settings/Print extra hypernetwork information to console./visible": true, "settings/Print extra hypernetwork information to console./value": false, "settings/Load models/files in hidden directories/visible": true, "settings/Load models/files in hidden directories/value": true, "settings/Move VAE and CLIP to RAM when training if possible. Saves VRAM./visible": true, "settings/Move VAE and CLIP to RAM when training if possible. Saves VRAM./value": false, "settings/Turn on pin_memory for DataLoader. Makes training slightly faster but can increase memory usage./visible": true, "settings/Turn on pin_memory for DataLoader. 
Makes training slightly faster but can increase memory usage./value": false, "settings/Saves Optimizer state as separate *.optim file. Training of embedding or HN can be resumed with the matching optim file./visible": true, "settings/Saves Optimizer state as separate *.optim file. Training of embedding or HN can be resumed with the matching optim file./value": false, "settings/Save textual inversion and hypernet settings to a text file whenever training starts./visible": true, "settings/Save textual inversion and hypernet settings to a text file whenever training starts./value": true, "settings/Filename word regex/visible": true, "settings/Filename word regex/value": "", "settings/Filename join string/visible": true, "settings/Filename join string/value": " ", "settings/Number of repeats for a single input image per epoch; used only for displaying epoch number/visible": true, "settings/Number of repeats for a single input image per epoch; used only for displaying epoch number/value": 1, "settings/Save an csv containing the loss to log directory every N steps, 0 to disable/visible": true, "settings/Save an csv containing the loss to log directory every N steps, 0 to disable/value": 500.0, "settings/Use cross attention optimizations while training/visible": true, "settings/Use cross attention optimizations while training/value": false, "settings/Enable tensorboard logging./visible": true, "settings/Enable tensorboard logging./value": false, "settings/Save generated images within tensorboard./visible": true, "settings/Save generated images within tensorboard./value": false, "settings/How often, in seconds, to flush the pending tensorboard events and summaries to disk./visible": true, "settings/How often, in seconds, to flush the pending tensorboard events and summaries to disk./value": 120.0, "settings/Checkpoints to cache in RAM/visible": true, "settings/Checkpoints to cache in RAM/value": 0, "settings/Checkpoints to cache in RAM/minimum": 0, "settings/Checkpoints to 
cache in RAM/maximum": 10, "settings/Checkpoints to cache in RAM/step": 1, "settings/VAE Checkpoints to cache in RAM/visible": true, "settings/VAE Checkpoints to cache in RAM/value": 0, "settings/VAE Checkpoints to cache in RAM/minimum": 0, "settings/VAE Checkpoints to cache in RAM/maximum": 10, "settings/VAE Checkpoints to cache in RAM/step": 1, "settings/SD VAE/visible": true, "settings/SD VAE/value": "Automatic", "settings/\ud83d\udd04/visible": true, "settings/Ignore selected VAE for stable diffusion checkpoints that have their own .vae.pt next to them/visible": true, "settings/Ignore selected VAE for stable diffusion checkpoints that have their own .vae.pt next to them/value": true, "settings/Inpainting conditioning mask strength/visible": true, "settings/Inpainting conditioning mask strength/value": 1.0, "settings/Inpainting conditioning mask strength/minimum": 0.0, "settings/Inpainting conditioning mask strength/maximum": 1.0, "settings/Inpainting conditioning mask strength/step": 0.01, "settings/Noise multiplier for img2img/visible": true, "settings/Noise multiplier for img2img/value": 1.0, "settings/Noise multiplier for img2img/minimum": 0.5, "settings/Noise multiplier for img2img/maximum": 1.5, "settings/Noise multiplier for img2img/step": 0.01, "settings/Apply color correction to img2img results to match original colors./visible": true, "settings/Apply color correction to img2img results to match original colors./value": false, "settings/With img2img, do exactly the amount of steps the slider specifies./visible": true, "settings/With img2img, do exactly the amount of steps the slider specifies./value": false, "settings/Enable quantization in K samplers for sharper and cleaner results. This may change existing seeds. Requires restart to apply./visible": true, "settings/Enable quantization in K samplers for sharper and cleaner results. This may change existing seeds. 
Requires restart to apply./value": false, "settings/Enable emphasis/visible": true, "settings/Enable emphasis/value": true, "settings/Make K-diffusion samplers produce same images in a batch as when making a single image/visible": true, "settings/Make K-diffusion samplers produce same images in a batch as when making a single image/value": true, "settings/Prompt word wrap length limit/visible": true, "settings/Prompt word wrap length limit/value": 20, "settings/Prompt word wrap length limit/minimum": 0, "settings/Prompt word wrap length limit/maximum": 74, "settings/Prompt word wrap length limit/step": 1, "settings/Clip skip/visible": true, "settings/Clip skip/value": 1, "settings/Clip skip/minimum": 1, "settings/Clip skip/maximum": 12, "settings/Clip skip/step": 1, "settings/Upcast cross attention layer to float32/visible": true, "settings/Upcast cross attention layer to float32/value": false, "settings/Max resolution output for txt2img and img2img/visible": true, "settings/Max resolution output for txt2img and img2img/value": 2048.0, "settings/Random number generator source./visible": true, "settings/Random number generator source./value": "GPU", "settings/Cross attention optimization/visible": true, "settings/Cross attention optimization/value": "Automatic", "settings/Negative Guidance minimum sigma/visible": true, "settings/Negative Guidance minimum sigma/value": 0, "settings/Negative Guidance minimum sigma/minimum": 0.0, "settings/Negative Guidance minimum sigma/maximum": 4.0, "settings/Negative Guidance minimum sigma/step": 0.01, "settings/Token merging ratio/visible": true, "settings/Token merging ratio/value": 0.0, "settings/Token merging ratio/minimum": 0.0, "settings/Token merging ratio/maximum": 0.9, "settings/Token merging ratio/step": 0.1, "settings/Token merging ratio for img2img/visible": true, "settings/Token merging ratio for img2img/value": 0.0, "settings/Token merging ratio for img2img/minimum": 0.0, "settings/Token merging ratio for 
img2img/maximum": 0.9, "settings/Token merging ratio for img2img/step": 0.1, "settings/Token merging ratio for high-res pass/visible": true, "settings/Token merging ratio for high-res pass/value": 0.0, "settings/Token merging ratio for high-res pass/minimum": 0.0, "settings/Token merging ratio for high-res pass/maximum": 0.9, "settings/Token merging ratio for high-res pass/step": 0.1, "settings/Use old emphasis implementation. Can be useful to reproduce old seeds./visible": true, "settings/Use old emphasis implementation. Can be useful to reproduce old seeds./value": false, "settings/Use old karras scheduler sigmas (0.1 to 10)./visible": true, "settings/Use old karras scheduler sigmas (0.1 to 10)./value": false, "settings/Do not make DPM++ SDE deterministic across different batch sizes./visible": true, "settings/Do not make DPM++ SDE deterministic across different batch sizes./value": false, "settings/For hires fix, use width/height sliders to set final resolution rather than first pass (disables Upscale by, Resize width/height to)./visible": true, "settings/For hires fix, use width/height sliders to set final resolution rather than first pass (disables Upscale by, Resize width/height to)./value": false, "settings/Do not fix prompt schedule for second order samplers./visible": true, "settings/Do not fix prompt schedule for second order samplers./value": false, "settings/Lora: use old method that takes longer when you have multiple Loras active and produces same results as kohya-ss/sd-webui-additional-networks extension/visible": true, "settings/Lora: use old method that takes longer when you have multiple Loras active and produces same results as kohya-ss/sd-webui-additional-networks extension/value": false, "settings/Keep models in VRAM/visible": true, "settings/Keep models in VRAM/value": false, "settings/Include ranks of model tags matches in results./visible": true, "settings/Include ranks of model tags matches in results./value": false, "settings/BLIP: 
num_beams/visible": true, "settings/BLIP: num_beams/value": 1, "settings/BLIP: num_beams/minimum": 1, "settings/BLIP: num_beams/maximum": 16, "settings/BLIP: num_beams/step": 1, "settings/BLIP: minimum description length/visible": true, "settings/BLIP: minimum description length/value": 24, "settings/BLIP: minimum description length/minimum": 1, "settings/BLIP: minimum description length/maximum": 128, "settings/BLIP: minimum description length/step": 1, "settings/BLIP: maximum description length/visible": true, "settings/BLIP: maximum description length/value": 48, "settings/BLIP: maximum description length/minimum": 1, "settings/BLIP: maximum description length/maximum": 256, "settings/BLIP: maximum description length/step": 1, "settings/CLIP: maximum number of lines in text file/visible": true, "settings/CLIP: maximum number of lines in text file/value": 1500.0, "settings/deepbooru: score threshold/visible": true, "settings/deepbooru: score threshold/value": 0.5, "settings/deepbooru: score threshold/minimum": 0, "settings/deepbooru: score threshold/maximum": 1, "settings/deepbooru: score threshold/step": 0.01, "settings/deepbooru: sort tags alphabetically/visible": true, "settings/deepbooru: sort tags alphabetically/value": true, "settings/deepbooru: use spaces in tags/visible": true, "settings/deepbooru: use spaces in tags/value": true, "settings/deepbooru: escape (\\) brackets/visible": true, "settings/deepbooru: escape (\\) brackets/value": true, "settings/deepbooru: filter out those tags/visible": true, "settings/deepbooru: filter out those tags/value": "", "settings/Show hidden directories/visible": true, "settings/Show hidden directories/value": true, "settings/Show cards for models in hidden directories/visible": true, "settings/Show cards for models in hidden directories/value": "When searched", "settings/Multiplier for extra networks/visible": true, "settings/Multiplier for extra networks/value": 1.0, "settings/Multiplier for extra networks/minimum": 
0.0, "settings/Multiplier for extra networks/maximum": 1.0, "settings/Multiplier for extra networks/step": 0.01, "settings/Extra networks separator/visible": true, "settings/Extra networks separator/value": " ", "settings/Add hypernetwork to prompt/visible": true, "settings/Add hypernetwork to prompt/value": "None", "settings/Extra Networks default visibility/visible": true, "settings/Extra Networks default visibility/value": true, "settings/Card size for extra networks/visible": true, "settings/Card size for extra networks/value": 1, "settings/Card size for extra networks/minimum": 0.8, "settings/Card size for extra networks/maximum": 2, "settings/Card size for extra networks/step": 0.1, "settings/Visible card rows for extra networks/visible": true, "settings/Visible card rows for extra networks/value": 1, "settings/Visible card rows for extra networks/minimum": 1, "settings/Visible card rows for extra networks/maximum": 3, "settings/Visible card rows for extra networks/step": 1, "settings/Extra Networks aside view/visible": true, "settings/Extra Networks aside view/value": true, "settings/Add LyCORIS to prompt/visible": true, "settings/Add LyCORIS to prompt/value": "None", "settings/Add Lora to prompt/visible": true, "settings/Add Lora to prompt/value": "None", "settings/When adding to prompt, refer to Lora by/visible": true, "settings/When adding to prompt, refer to Lora by/value": "Alias from file", "settings/Add Lora hashes to infotext/visible": true, "settings/Add Lora hashes to infotext/value": true, "settings/Extra networks tab order/visible": true, "settings/Extra networks tab order/value": "", "settings/Localization/visible": true, "settings/Localization/value": "None", "settings/img2img: height of image editor/visible": true, "settings/img2img: height of image editor/value": 720, "settings/img2img: height of image editor/minimum": 80, "settings/img2img: height of image editor/maximum": 1600, "settings/img2img: height of image editor/step": 1, 
"settings/Show grid in results for web/visible": true, "settings/Show grid in results for web/value": true, "settings/For inpainting, include the greyscale mask in results for web/visible": true, "settings/For inpainting, include the greyscale mask in results for web/value": false, "settings/For inpainting, include masked composite in results for web/visible": true, "settings/For inpainting, include masked composite in results for web/value": false, "settings/Do not show any images in results for web/visible": true, "settings/Do not show any images in results for web/value": false, "settings/Send seed when sending prompt or image to other interface/visible": true, "settings/Send seed when sending prompt or image to other interface/value": true, "settings/Send size when sending prompt or image to another interface/visible": true, "settings/Send size when sending prompt or image to another interface/value": true, "settings/Font for image grids that have text/visible": true, "settings/Font for image grids that have text/value": "", "settings/Enable full page image viewer/visible": true, "settings/Enable full page image viewer/value": true, "settings/Show images zoomed in by default in full page image viewer/visible": true, "settings/Show images zoomed in by default in full page image viewer/value": true, "settings/Navigate image viewer with gamepad/visible": true, "settings/Navigate image viewer with gamepad/value": false, "settings/Gamepad repeat period, in milliseconds/visible": true, "settings/Gamepad repeat period, in milliseconds/value": 250.0, "settings/Show generation progress in window title./visible": true, "settings/Show generation progress in window title./value": true, "settings/Use dropdown for sampler selection instead of radio group/visible": true, "settings/Use dropdown for sampler selection instead of radio group/value": true, "settings/Show Width/Height and Batch sliders in same row/visible": true, "settings/Show Width/Height and Batch sliders in 
same row/value": true, "settings/Ctrl+up/down precision when editing (attention:1.1)/visible": true, "settings/Ctrl+up/down precision when editing (attention:1.1)/value": 0.1, "settings/Ctrl+up/down precision when editing (attention:1.1)/minimum": 0.01, "settings/Ctrl+up/down precision when editing (attention:1.1)/maximum": 0.2, "settings/Ctrl+up/down precision when editing (attention:1.1)/step": 0.001, "settings/Ctrl+up/down precision when editing <extra networks:0.9>/visible": true, "settings/Ctrl+up/down precision when editing <extra networks:0.9>/value": 0.05, "settings/Ctrl+up/down precision when editing <extra networks:0.9>/minimum": 0.01, "settings/Ctrl+up/down precision when editing <extra networks:0.9>/maximum": 0.2, "settings/Ctrl+up/down precision when editing <extra networks:0.9>/step": 0.001, "settings/Ctrl+up/down word delimiters/visible": true, "settings/Ctrl+up/down word delimiters/value": ".,\\/!?%^*;:{}=`~()", "settings/txt2img/img2img UI item order/visible": true, "settings/txt2img/img2img UI item order/value": "inpaint, sampler, checkboxes, hires_fix, dimensions, cfg, seed, batch, override_settings, scripts", "settings/Hires fix: show hires sampler selection/visible": true, "settings/Hires fix: show hires sampler selection/value": false, "settings/Hires fix: show hires prompt and negative prompt/visible": true, "settings/Hires fix: show hires prompt and negative prompt/value": false, "settings/Quicksettings/visible": true, "settings/Quicksettings/value": "sd_model_checkpoint", "settings/Hidden Tabs/visible": true, "settings/Hidden Tabs/value": "", "settings/Header Tabs/visible": true, "settings/Header Tabs/value": "", "settings/Interface order input/parameters | output/preview/visible": true, "settings/Interface order input/parameters | output/preview/value": "row-reverse", "settings/Generated image fit method/visible": true, "settings/Generated image fit method/value": "Scale-down", "settings/Show ticks for range sliders/visible": true, 
"settings/Show ticks for range sliders/value": true, "settings/Dispatch event change on release, for slider and input number components/visible": true, "settings/Dispatch event change on release, for slider and input number components/value": true, "settings/No sliders compact layout mode/visible": true, "settings/No sliders compact layout mode/value": false, "settings/Add model hash to generation information/visible": true, "settings/Add model hash to generation information/value": true, "settings/Add model name to generation information/visible": true, "settings/Add model name to generation information/value": true, "settings/Add program version to generation information/visible": true, "settings/Add program version to generation information/value": true, "settings/When reading generation parameters from text into UI (from PNG info or pasted text), do not change the selected model/checkpoint./visible": true, "settings/When reading generation parameters from text into UI (from PNG info or pasted text), do not change the selected model/checkpoint./value": true, "settings/Show progressbar/visible": true, "settings/Show progressbar/value": true, "settings/Show live previews of the created image/visible": true, "settings/Show live previews of the created image/value": true, "settings/Live preview file format/visible": true, "settings/Live preview file format/value": "png", "settings/Show previews of all images generated in a batch as a grid/visible": true, "settings/Show previews of all images generated in a batch as a grid/value": true, "settings/Live preview display period/visible": true, "settings/Live preview display period/value": 10, "settings/Live preview display period/minimum": -1, "settings/Live preview display period/maximum": 32, "settings/Live preview display period/step": 1, "settings/Live preview method/visible": true, "settings/Live preview method/value": "Approx NN", "settings/Live preview subject/visible": true, "settings/Live preview subject/value": 
"Prompt", "settings/Progressbar and preview update period/visible": true, "settings/Progressbar and preview update period/value": 1000.0, "settings/Live preview image fit method/visible": true, "settings/Live preview image fit method/value": "Scale-down", "settings/Eta for DDIM/visible": true, "settings/Eta for DDIM/value": 0.0, "settings/Eta for DDIM/minimum": 0.0, "settings/Eta for DDIM/maximum": 1.0, "settings/Eta for DDIM/step": 0.01, "settings/Eta for ancestral samplers/visible": true, "settings/Eta for ancestral samplers/value": 1.0, "settings/Eta for ancestral samplers/minimum": 0.0, "settings/Eta for ancestral samplers/maximum": 1.0, "settings/Eta for ancestral samplers/step": 0.01, "settings/img2img DDIM discretize/visible": true, "settings/img2img DDIM discretize/value": "uniform", "settings/sigma churn/visible": true, "settings/sigma churn/value": 0.0, "settings/sigma churn/minimum": 0.0, "settings/sigma churn/maximum": 1.0, "settings/sigma churn/step": 0.01, "settings/sigma tmin/visible": true, "settings/sigma tmin/value": 0.0, "settings/sigma tmin/minimum": 0.0, "settings/sigma tmin/maximum": 1.0, "settings/sigma tmin/step": 0.01, "settings/sigma noise/visible": true, "settings/sigma noise/value": 1.0, "settings/sigma noise/minimum": 0.0, "settings/sigma noise/maximum": 1.0, "settings/sigma noise/step": 0.01, "settings/Eta noise seed delta/visible": true, "settings/Eta noise seed delta/value": 0, "settings/Always discard next-to-last sigma/visible": true, "settings/Always discard next-to-last sigma/value": false, "settings/UniPC variant/visible": true, "settings/UniPC variant/value": "bh1", "settings/UniPC skip type/visible": true, "settings/UniPC skip type/value": "time_uniform", "settings/UniPC order/visible": true, "settings/UniPC order/value": 3, "settings/UniPC order/minimum": 1, "settings/UniPC order/maximum": 50, "settings/UniPC order/step": 1, "settings/UniPC lower order final/visible": true, "settings/UniPC lower order final/value": true, 
"settings/Maximum number of images in upscaling cache/visible": true, "settings/Maximum number of images in upscaling cache/value": 5, "settings/Maximum number of images in upscaling cache/minimum": 0, "settings/Maximum number of images in upscaling cache/maximum": 10, "settings/Maximum number of images in upscaling cache/step": 1, "settings/Tag filename/visible": true, "settings/Tag filename/value": "danbooru.csv", "settings/Enable Tag Autocompletion/visible": true, "settings/Enable Tag Autocompletion/value": true, "settings/Active in txt2img (Requires restart)/visible": true, "settings/Active in txt2img (Requires restart)/value": true, "settings/Active in img2img (Requires restart)/visible": true, "settings/Active in img2img (Requires restart)/value": true, "settings/Active in negative prompts (Requires restart)/visible": true, "settings/Active in negative prompts (Requires restart)/value": true, "settings/Active in third party textboxes [Dataset Tag Editor] [Image Browser] [Tagger] [Multidiffusion Upscaler] (Requires restart)/visible": true, "settings/Active in third party textboxes [Dataset Tag Editor] [Image Browser] [Tagger] [Multidiffusion Upscaler] (Requires restart)/value": true, "settings/List of model names (with file extension) or their hashes to use as black/whitelist, separated by commas./visible": true, "settings/List of model names (with file extension) or their hashes to use as black/whitelist, separated by commas./value": "", "settings/Mode to use for model list/visible": true, "settings/Mode to use for model list/value": "Blacklist", "settings/Move completion popup together with text cursor/visible": true, "settings/Move completion popup together with text cursor/value": true, "settings/Maximum results/visible": true, "settings/Maximum results/value": 5.0, "settings/Show all results/visible": true, "settings/Show all results/value": false, "settings/How many results to load at once/visible": true, "settings/How many results to load at 
once/value": 100.0, "settings/Time in ms to wait before triggering completion again (Requires restart)/visible": true, "settings/Time in ms to wait before triggering completion again (Requires restart)/value": 100.0, "settings/Search for wildcards/visible": true, "settings/Search for wildcards/value": true, "settings/Search for embeddings/visible": true, "settings/Search for embeddings/value": true, "settings/Search for hypernetworks/visible": true, "settings/Search for hypernetworks/value": true, "settings/Search for Loras/visible": true, "settings/Search for Loras/value": true, "settings/Search for LyCORIS/LoHa/visible": true, "settings/Search for LyCORIS/LoHa/value": true, "settings/Show '?' next to tags, linking to its Danbooru or e621 wiki page (Warning: This is an external site and very likely contains NSFW examples!)/visible": true, "settings/Show '?' next to tags, linking to its Danbooru or e621 wiki page (Warning: This is an external site and very likely contains NSFW examples!)/value": false, "settings/Replace underscores with spaces on insertion/visible": true, "settings/Replace underscores with spaces on insertion/value": true, "settings/Escape parentheses on insertion/visible": true, "settings/Escape parentheses on insertion/value": true, "settings/Append comma on tag autocompletion/visible": true, "settings/Append comma on tag autocompletion/value": true, "settings/Search by alias/visible": true, "settings/Search by alias/value": true, "settings/Only show alias/visible": true, "settings/Only show alias/value": false, "settings/Translation filename/visible": true, "settings/Translation filename/value": "None", "settings/Translation file uses old 3-column translation format instead of the new 2-column one/visible": true, "settings/Translation file uses old 3-column translation format instead of the new 2-column one/value": false, "settings/Search by translation/visible": true, "settings/Search by translation/value": true, "settings/Extra filename (for 
small sets of custom tags)/visible": true, "settings/Extra filename (for small sets of custom tags)/value": "extra-quality-tags.csv", "settings/Mode to add the extra tags to the main tag list/visible": true, "settings/Mode to add the extra tags to the main tag list/value": "Insert before", "settings/Extra paths to scan for LoRA models, comma-separated. Paths containing commas must be enclosed in double quotes. In the path, \" (one quote) must be replaced by \"\" (two quotes)./visible": true, "settings/Extra paths to scan for LoRA models, comma-separated. Paths containing commas must be enclosed in double quotes. In the path, \" (one quote) must be replaced by \"\" (two quotes)./value": "", "settings/Sort LoRA models by/visible": true, "settings/Sort LoRA models by/value": "name", "settings/Reverse model sort order/visible": true, "settings/Reverse model sort order/value": false, "settings/LoRA model name filter/visible": true, "settings/LoRA model name filter/value": "", "settings/Metadata to show in XY-Grid label for Model axes, comma-separated (example: \"ss_learning_rate, ss_num_epochs\")/visible": true, "settings/Metadata to show in XY-Grid label for Model axes, comma-separated (example: \"ss_learning_rate, ss_num_epochs\")/value": "", "settings/# of threads to use for hash calculation (increase if using an SSD)/visible": true, "settings/# of threads to use for hash calculation (increase if using an SSD)/value": 1.0, "settings/Make a backup copy of the model being edited when saving its metadata./visible": true, "settings/Make a backup copy of the model being edited when saving its metadata./value": true, "settings/Only show .safetensors format models/visible": true, "settings/Only show .safetensors format models/value": false, "settings/Only show models that have/don't have user-added metadata/visible": true, "settings/Only show models that have/don't have user-added metadata/value": "disabled", "settings/Max number of top tags to show/visible": true, 
"settings/Max number of top tags to show/value": 20.0, "settings/Max number of dataset folders to show/visible": true, "settings/Max number of dataset folders to show/value": 20.0, "settings/Max models/visible": true, "settings/Max models/value": 2, "settings/Max models/minimum": 1, "settings/Max models/maximum": 5, "settings/Max models/step": 1, "settings/Save mask previews/visible": true, "settings/Save mask previews/value": false, "settings/Save images before ADetailer/visible": true, "settings/Save images before ADetailer/value": false, "settings/Apply only selected scripts to ADetailer/visible": true, "settings/Apply only selected scripts to ADetailer/value": true, "settings/Script names to apply to ADetailer (separated by comma)/visible": true, "settings/Script names to apply to ADetailer (separated by comma)/value": "dynamic_prompting,dynamic_thresholding,wildcard_recursive,wildcards", "settings/Sort bounding boxes by/visible": true, "settings/Sort bounding boxes by/value": "None", "settings/Config file for Control Net models/visible": true, "settings/Config file for Control Net models/value": "models/cldm_v15.yaml", "settings/Config file for Adapter models/visible": true, "settings/Config file for Adapter models/value": "models/t2iadapter_sketch_sd14v1.yaml", "settings/Directory for detected maps auto saving/visible": true, "settings/Directory for detected maps auto saving/value": "detected_maps", "settings/Extra path to scan for ControlNet models (e.g. training output directory)/visible": true, "settings/Extra path to scan for ControlNet models (e.g. 
training output directory)/value": "", "settings/Path to directory containing annotator model directories (requires restart, overrides corresponding command line flag)/visible": true, "settings/Path to directory containing annotator model directories (requires restart, overrides corresponding command line flag)/value": "", "settings/Multi ControlNet: Max models amount (requires restart)/visible": true, "settings/Multi ControlNet: Max models amount (requires restart)/value": 1, "settings/Multi ControlNet: Max models amount (requires restart)/minimum": 1, "settings/Multi ControlNet: Max models amount (requires restart)/maximum": 10, "settings/Multi ControlNet: Max models amount (requires restart)/step": 1, "settings/Model cache size (requires restart)/visible": true, "settings/Model cache size (requires restart)/value": 1, "settings/Model cache size (requires restart)/minimum": 1, "settings/Model cache size (requires restart)/maximum": 5, "settings/Model cache size (requires restart)/step": 1, "settings/Do not append detectmap to output/visible": true, "settings/Do not append detectmap to output/value": false, "settings/Allow detectmap auto saving/visible": true, "settings/Allow detectmap auto saving/value": false, "settings/Allow other script to control this extension/visible": true, "settings/Allow other script to control this extension/value": false, "settings/Passing ControlNet parameters with \"Send to img2img\"/visible": true, "settings/Passing ControlNet parameters with \"Send to img2img\"/value": false, "settings/Show batch images in gradio gallery output/visible": true, "settings/Show batch images in gradio gallery output/value": false, "settings/Increment seed after each controlnet batch iteration/visible": true, "settings/Increment seed after each controlnet batch iteration/value": false, "settings/Disable control type selection/visible": true, "settings/Disable control type selection/value": false, "settings/List of active tabs (separated by commas). 
Available options are txt2img, img2img, txt2img-grids, img2img-grids, Extras, Favorites, Others, Maintenance. Custom folders are also supported by specifying their path./visible": true, "settings/List of active tabs (separated by commas). Available options are txt2img, img2img, txt2img-grids, img2img-grids, Extras, Favorites, Others, Maintenance. Custom folders are also supported by specifying their path./value": "txt2img, img2img, txt2img-grids, img2img-grids, Extras, Favorites, Others, Maintenance", "settings/Include images in sub directories/visible": true, "settings/Include images in sub directories/value": true, "settings/Preload images at startup for first tab/visible": true, "settings/Preload images at startup for first tab/value": false, "settings/Move buttons copy instead of move/visible": true, "settings/Move buttons copy instead of move/value": false, "settings/Print image deletion messages to the console/visible": true, "settings/Print image deletion messages to the console/value": true, "settings/Move/Copy/Delete matching .txt files/visible": true, "settings/Move/Copy/Delete matching .txt files/value": true, "settings/Debug level/visible": true, "settings/Debug level/value": "0 - none", "settings/Use recycle bin when deleting images/visible": true, "settings/Use recycle bin when deleting images/value": true, "settings/Scan Exif-/.txt-data (initially slower, but required for many features to work)/visible": true, "settings/Scan Exif-/.txt-data (initially slower, but required for many features to work)/value": true, "settings/Change CTRL keybindings to SHIFT/visible": true, "settings/Change CTRL keybindings to SHIFT/value": false, "settings/or to CTRL+SHIFT/visible": true, "settings/or to CTRL+SHIFT/value": false, "settings/Enable Maintenance tab/visible": true, "settings/Enable Maintenance tab/value": true, "settings/Save ranking in image's pnginfo/visible": true, "settings/Save ranking in image's pnginfo/value": false, "settings/Number of columns on 
the page/visible": true, "settings/Number of columns on the page/value": 6.0, "settings/Number of rows on the page/visible": true, "settings/Number of rows on the page/value": 6.0, "settings/Minimum number of pages per load/visible": true, "settings/Minimum number of pages per load/value": 20.0, "settings/Use optimized images in the thumbnail interface (significantly reduces the amount of data transferred)/visible": true, "settings/Use optimized images in the thumbnail interface (significantly reduces the amount of data transferred)/value": false, "settings/Size of the thumbnails (px)/visible": true, "settings/Size of the thumbnails (px)/value": 200.0, "settings/Swipe left/right navigates to the next image/visible": true, "settings/Swipe left/right navigates to the next image/value": false, "settings/Enable thumbnail tooltips/visible": true, "settings/Enable thumbnail tooltips/value": true, "settings/Default scoring type/visible": true, "settings/Default scoring type/value": "aesthetic_score", "settings/Show progress indicator/visible": true, "settings/Show progress indicator/value": true, "settings/View changes/visible": true, "settings/Apply/visible": true, "settings/Request browser notifications/visible": true, "settings/Download localization template/visible": true, "settings/Reload custom script bodies (No ui updates, No restart)/visible": true, "settings/Unload SD checkpoint to free VRAM/visible": true, "settings/Reload the last SD checkpoint back into VRAM/visible": true, "settings/Show all pages/visible": true, "extensions/Tabs@tabs_extensions/selected": null, "extensions/Apply and restart UI/visible": true, "extensions/Check for updates/visible": true, "extensions/Disable all extensions/visible": true, "extensions/Disable all extensions/value": "none", "extensions/Load from:/visible": true, "extensions/Extension index URL/visible": true, "extensions/Extension index URL/value": 
"https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui-extensions/master/index.json", "extensions/Order/visible": true, "extensions/Order/value": "newest first", "extensions/Search/visible": true, "extensions/Search/value": "", "extensions/URL for extension's git repository/visible": true, "extensions/URL for extension's git repository/value": "", "extensions/Specific branch name/visible": true, "extensions/Specific branch name/value": "", "extensions/Local directory name/visible": true, "extensions/Local directory name/value": "", "extensions/Install/visible": true, "extensions/Saved Configs/visible": true, "extensions/Saved Configs/value": "Current", "extensions/\ud83d\udd04/visible": true, "extensions/State to restore/visible": true, "extensions/State to restore/value": "extensions", "extensions/Restore Selected Config/visible": true, "extensions/Save Current Config/visible": true, "webui/Tabs@tabs/selected": null }
digiplay/LuckyStrikeMix1.05_Lovelylady
digiplay
2023-07-28T10:05:01Z
532
3
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-28T09:20:36Z
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/13034/lucky-strike-mix https://civitai.com/models/13034?modelVersionId=127680 *Using the "photorealism" and "8k" keywords can generate better images. Original Author's DEMO images : ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/9198406a-170c-4d69-8d65-ce961eaca5c2/width=1280/02628-1796431482-1%20supercute%20kitty%20wear%20a%20origami%20gundam%20armor,fur.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/2f87589f-3b8d-4c26-b2e9-5ba9645697cd/width=1280/02644-2789515522-1%20supercute%20cat%20wear%20a%20origami%20gundam%20armor,fur.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/5993b948-3a2a-473f-b21c-928f109f656b/width=1280/02656-2033477510-a%20superc%20ute%20kitty%20wear%20Ultraman%20armor%20and%20Ultraman%20mask,.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/13dbc7a2-f1de-493c-a771-91e4746f68b6/width=1280/02690-807894338-lionel%20messi%20in%20pink%20football%20kit,%20(geometric%20mosaic_1.4),%20(digital%20art%20style_1.4).jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/d562d789-dd0c-4fe3-aca9-486747f92f16/width=1280/02707-3559411941-screenshot%20of%20person's%20profile%20in%20Tinder%20app,%20buttons%20of%20user%20interface,%20her%20name%20and%20age%20in%20large%20headline%20text,%20self-introduct.jpeg)
asifahmed/open_llama_13b_NH
asifahmed
2023-07-28T10:01:35Z
9
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "llama-2", "self-instruct", "distillation", "synthetic instruction", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-28T09:47:47Z
--- language: - en tags: - llama-2 - self-instruct - distillation - synthetic instruction license: - mit --- # Model Card: Nous-Hermes-Llama2-13b Compute provided by our project sponsor Redmond AI, thank you! Follow RedmondAI on Twitter @RedmondAI. ## Model Description Nous-Hermes-Llama2-13b is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium and Emozilla leading the fine-tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors. This Hermes model uses the exact same dataset as Hermes on Llama-1. This ensures consistency between the old Hermes and the new one, for anyone who wanted to keep Hermes as similar to the old version as possible, just more capable. This model stands out for its long responses, lower hallucination rate, and absence of OpenAI censorship mechanisms. The fine-tuning process was performed with a 4096 sequence length on an 8x A100 80GB DGX machine. ## Example Outputs: ![Example4](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b/resolve/main/example5.png "Example 4") ![Example1](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b/resolve/main/Example1.png "Example 1") ![Example2](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b/resolve/main/example2.png "Example 2") ![Example3](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b/resolve/main/example3.png "Example 3") ## Model Training The model was trained almost entirely on synthetic GPT-4 outputs. Curating high-quality GPT-4 datasets enables incredibly high quality in knowledge, task completion, and style. 
This includes data from diverse sources such as GPTeacher, the general, roleplay v1&2, code instruct datasets, Nous Instruct & PDACTL (unpublished), and several others, detailed further below ## Collaborators The model fine-tuning and the datasets were a collaboration of efforts and resources between Teknium, Karan4D, Emozilla, Huemin Art, and Redmond AI. Special mention goes to @winglian for assisting in some of the training issues. Huge shoutout and acknowledgement is deserved for all the dataset creators who generously share their datasets openly. Among the contributors of datasets: - GPTeacher was made available by Teknium - Wizard LM by nlpxucan - Nous Research Instruct Dataset was provided by Karan4D and HueminArt. - GPT4-LLM and Unnatural Instructions were provided by Microsoft - Airoboros dataset by jondurbin - Camel-AI's domain expert datasets are from Camel-AI - CodeAlpaca dataset by Sahil 2801. If anyone was left out, please open a thread in the community tab. ## Prompt Format The model follows the Alpaca prompt format: ``` ### Instruction: <prompt> ### Response: <leave a newline blank for model to respond> ``` or ``` ### Instruction: <prompt> ### Input: <additional context> ### Response: <leave a newline blank for model to respond> ``` ## Benchmark Results AGI-Eval ``` | Task |Version| Metric |Value | |Stderr| |agieval_aqua_rat | 0|acc |0.2362|± |0.0267| | | |acc_norm|0.2480|± |0.0272| |agieval_logiqa_en | 0|acc |0.3425|± |0.0186| | | |acc_norm|0.3472|± |0.0187| |agieval_lsat_ar | 0|acc |0.2522|± |0.0287| | | |acc_norm|0.2087|± |0.0269| |agieval_lsat_lr | 0|acc |0.3510|± |0.0212| | | |acc_norm|0.3627|± |0.0213| |agieval_lsat_rc | 0|acc |0.4647|± |0.0305| | | |acc_norm|0.4424|± |0.0303| |agieval_sat_en | 0|acc |0.6602|± |0.0331| | | |acc_norm|0.6165|± |0.0340| |agieval_sat_en_without_passage| 0|acc |0.4320|± |0.0346| | | |acc_norm|0.4272|± |0.0345| |agieval_sat_math | 0|acc |0.2909|± |0.0307| | | |acc_norm|0.2727|± |0.0301| ``` GPT-4All Benchmark Set ``` 
| Task |Version| Metric |Value | |Stderr| |arc_challenge| 0|acc |0.5102|± |0.0146| | | |acc_norm|0.5213|± |0.0146| |arc_easy | 0|acc |0.7959|± |0.0083| | | |acc_norm|0.7567|± |0.0088| |boolq | 1|acc |0.8394|± |0.0064| |hellaswag | 0|acc |0.6164|± |0.0049| | | |acc_norm|0.8009|± |0.0040| |openbookqa | 0|acc |0.3580|± |0.0215| | | |acc_norm|0.4620|± |0.0223| |piqa | 0|acc |0.7992|± |0.0093| | | |acc_norm|0.8069|± |0.0092| |winogrande | 0|acc |0.7127|± |0.0127| ``` BigBench Reasoning Test ``` | Task |Version| Metric |Value | |Stderr| |bigbench_causal_judgement | 0|multiple_choice_grade|0.5526|± |0.0362| |bigbench_date_understanding | 0|multiple_choice_grade|0.7344|± |0.0230| |bigbench_disambiguation_qa | 0|multiple_choice_grade|0.2636|± |0.0275| |bigbench_geometric_shapes | 0|multiple_choice_grade|0.0195|± |0.0073| | | |exact_str_match |0.0000|± |0.0000| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2760|± |0.0200| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2100|± |0.0154| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4400|± |0.0287| |bigbench_movie_recommendation | 0|multiple_choice_grade|0.2440|± |0.0192| |bigbench_navigate | 0|multiple_choice_grade|0.4950|± |0.0158| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.5570|± |0.0111| |bigbench_ruin_names | 0|multiple_choice_grade|0.3728|± |0.0229| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.1854|± |0.0123| |bigbench_snarks | 0|multiple_choice_grade|0.6298|± |0.0360| |bigbench_sports_understanding | 0|multiple_choice_grade|0.6156|± |0.0155| |bigbench_temporal_sequences | 0|multiple_choice_grade|0.3140|± |0.0147| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2032|± |0.0114| |bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1406|± |0.0083| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4400|± |0.0287| ``` These 
are the highest benchmarks Hermes has seen on every metric, achieving the following average scores: - GPT4All benchmark average is now 70.0 - from 68.8 in Hermes-Llama1 - 0.3657 on BigBench, up from 0.328 on Hermes-Llama1 - 0.372 on AGIEval, up from 0.354 on Hermes-Llama1 These benchmarks currently have us at #1 on ARC-c, ARC-e, Hellaswag, and OpenBookQA, and 2nd place on Winogrande, compared to GPT4All's benchmark list, supplanting Hermes 1 in the top position. ## Resources for Applied Use Cases: For an example of a back-and-forth chatbot using huggingface transformers and discord, check out: https://github.com/teknium1/alpaca-discord For an example of a roleplaying discord chatbot, check out: https://github.com/teknium1/alpaca-roleplay-discordbot ## Future Plans We plan to continue iterating on both more high-quality data and new data-filtering techniques to eliminate lower-quality data going forward. ## Model Usage The model is available for download on Hugging Face. It is suitable for a wide range of language tasks, from generating creative text to understanding and following complex instructions. [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
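The Alpaca prompt template shown in the card's Prompt Format section can be assembled with a small helper. This is a minimal sketch — the function name and structure are illustrative, not part of the model's release:

```python
from typing import Optional


def build_alpaca_prompt(instruction: str, context: Optional[str] = None) -> str:
    """Assemble an Alpaca-style prompt as described in the model card.

    The optional "### Input:" block carries additional context; the prompt
    ends right after "### Response:" so the model fills in the reply.
    """
    if context is not None:
        return (
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{context}\n\n"
            "### Response:\n"
        )
    return f"### Instruction:\n{instruction}\n\n### Response:\n"
```

The returned string can be passed directly to a text-generation pipeline or tokenizer as the model input.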
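The quoted GPT4All benchmark average of 70.0 can be reproduced from the table above. This is a quick sanity check, assuming the usual convention of taking `acc_norm` where reported and `acc` otherwise:

```python
# (acc, acc_norm) pairs transcribed from the GPT-4All benchmark table in the
# card; acc_norm is None where the harness did not report it.
rows = {
    "arc_challenge": (0.5102, 0.5213),
    "arc_easy":      (0.7959, 0.7567),
    "boolq":         (0.8394, None),
    "hellaswag":     (0.6164, 0.8009),
    "openbookqa":    (0.3580, 0.4620),
    "piqa":          (0.7992, 0.8069),
    "winogrande":    (0.7127, None),
}

# Prefer acc_norm when available, fall back to acc, then average.
scores = [norm if norm is not None else acc for acc, norm in rows.values()]
average = 100 * sum(scores) / len(scores)
print(round(average, 1))  # → 70.0, matching the reported GPT4All average
```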
tommilyjones/swin-tiny-patch4-window7-224-finetuned-masked-hateful-meme-restructured
tommilyjones
2023-07-28T09:57:28Z
212
0
transformers
[ "transformers", "pytorch", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-tiny-patch4-window7-224", "base_model:finetune:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-28T09:36:50Z
--- license: apache-2.0 base_model: microsoft/swin-tiny-patch4-window7-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-masked-hateful-meme-restructured results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.53 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-masked-hateful-meme-restructured This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7166 - Accuracy: 0.53 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6507 | 0.99 | 66 | 0.7352 | 0.502 | | 0.6411 | 2.0 | 133 | 0.7070 | 0.528 | | 0.6268 | 2.99 | 199 | 0.7166 | 0.53 | | 0.6007 | 4.0 | 266 | 0.7934 | 0.506 | | 0.5875 | 4.99 | 332 | 0.8053 | 0.52 | | 0.5554 | 6.0 | 399 | 0.7534 | 0.524 | | 0.5613 | 6.99 | 465 | 0.8075 | 0.524 | | 0.5714 | 8.0 | 532 | 0.7882 | 0.522 | | 
0.5244 | 8.99 | 598 | 0.8380 | 0.518 | | 0.5251 | 9.92 | 660 | 0.8331 | 0.52 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
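Several of the cards in this dump, including the one above, train with `lr_scheduler_type: linear` and a warmup ratio of 0.1 over 660 optimisation steps. A minimal sketch of that schedule's shape (an assumed reimplementation, not the exact `transformers` scheduler code): the learning rate ramps linearly from 0 to the base value over the warmup steps, then decays linearly back to 0.

```python
def linear_schedule_lr(step: int, total_steps: int, warmup_steps: int, base_lr: float) -> float:
    """Linear warmup from 0 to base_lr, then linear decay back to 0."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# 660 total steps with a 0.1 warmup ratio and base LR 5e-5, as in the card above
total, warmup, base = 660, 66, 5e-5
print(linear_schedule_lr(66, total, warmup, base))   # peak LR: 5e-05
print(linear_schedule_lr(660, total, warmup, base))  # end of training: 0.0
```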
accuracy-maker/ppo-LunarLander-v2
accuracy-maker
2023-07-28T09:53:38Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T09:53:15Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 264.96 +/- 17.69 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) ```python from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO # Download the checkpoint from the Hub; the filename below is an assumed default -- check the repo's file list checkpoint = load_from_hub(repo_id="accuracy-maker/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
Ding-Qiang/q-FrozenLake-v1-4x4-Slippery
Ding-Qiang
2023-07-28T09:43:58Z
0
0
null
[ "FrozenLake-v1-4x4", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T09:42:45Z
--- tags: - FrozenLake-v1-4x4 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-Slippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4 type: FrozenLake-v1-4x4 metrics: - type: mean_reward value: 0.29 +/- 0.45 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="Ding-Qiang/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
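For context on the Q-learning card above: the tabular temporal-difference update such agents are trained with can be sketched as follows (a generic sketch of the algorithm, not the author's actual training code):

```python
def q_update(q, state, action, reward, next_q_row, alpha=0.1, gamma=0.99):
    """One TD update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = reward + gamma * max(next_q_row)
    q[state][action] += alpha * (td_target - q[state][action])
    return q

# toy 2-state, 2-action table
q = [[0.0, 0.0], [0.0, 0.0]]
q = q_update(q, state=0, action=1, reward=1.0, next_q_row=q[1], alpha=0.5, gamma=0.9)
print(q[0][1])  # 0.5
```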
rashmi035/wav2vec2-large-mms-1b-hindi-colab
rashmi035
2023-07-28T09:33:51Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_6_1", "base_model:facebook/mms-1b-fl102", "base_model:finetune:facebook/mms-1b-fl102", "license:cc-by-nc-4.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-12T05:29:24Z
--- license: cc-by-nc-4.0 base_model: facebook/mms-1b-fl102 tags: - generated_from_trainer datasets: - common_voice_6_1 metrics: - wer model-index: - name: wav2vec2-large-mms-1b-hindi-colab results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: common_voice_6_1 type: common_voice_6_1 config: hi split: test args: hi metrics: - name: Wer type: wer value: 0.32018561484918795 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-mms-1b-hindi-colab This model is a fine-tuned version of [facebook/mms-1b-fl102](https://huggingface.co/facebook/mms-1b-fl102) on the common_voice_6_1 dataset. It achieves the following results on the evaluation set: - Loss: 0.3535 - Wer: 0.3202 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 16.7585 | 0.14 | 10 | 10.2106 | 2.0951 | | 6.9602 | 0.27 | 20 | 3.7700 | 1.0046 | | 2.4653 | 0.41 | 30 | 1.3321 | 0.6763 | | 1.0919 | 0.55 | 40 | 0.6594 | 0.4664 | | 0.7645 | 0.68 | 50 | 0.4930 | 0.3910 | | 0.8434 | 0.82 | 60 | 0.4819 | 0.3898 | | 0.5118 | 0.96 | 70 | 0.4492 | 0.3817 | | 0.6097 | 1.1 | 80 | 0.4299 | 0.4327 | | 0.4698 | 1.23 | 90 | 0.4308 | 0.3643 | | 0.5402 | 1.37 | 100 | 0.4042 | 0.4107 | | 0.5622 | 1.51 | 110 | 0.4156 | 0.3701 | | 0.4084 | 1.64 | 120 | 0.4138 | 0.3701 | | 0.4888 | 
1.78 | 130 | 0.3917 | 0.3434 | | 0.4253 | 1.92 | 140 | 0.3852 | 0.3457 | | 0.5004 | 2.05 | 150 | 0.3843 | 0.3364 | | 0.3791 | 2.19 | 160 | 0.3841 | 0.3469 | | 0.3302 | 2.33 | 170 | 0.3764 | 0.3271 | | 0.4047 | 2.47 | 180 | 0.3689 | 0.3364 | | 0.2951 | 2.6 | 190 | 0.3657 | 0.3329 | | 0.3545 | 2.74 | 200 | 0.3582 | 0.3306 | | 0.3736 | 2.88 | 210 | 0.3585 | 0.3248 | | 0.388 | 3.01 | 220 | 0.3602 | 0.3237 | | 0.2997 | 3.15 | 230 | 0.3624 | 0.3167 | | 0.3704 | 3.29 | 240 | 0.3625 | 0.3190 | | 0.2095 | 3.42 | 250 | 0.3571 | 0.3248 | | 0.3564 | 3.56 | 260 | 0.3570 | 0.3202 | | 0.2119 | 3.7 | 270 | 0.3550 | 0.3225 | | 0.3697 | 3.84 | 280 | 0.3542 | 0.3190 | | 0.3551 | 3.97 | 290 | 0.3535 | 0.3202 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
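The WER (word error rate) figures reported in the ASR card above come from an edit-distance computation over words. A self-contained illustrative sketch (the card itself was presumably evaluated with standard tooling such as `jiwer`, so this is only a reimplementation of the metric's definition):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length, via Levenshtein distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat", "the cat sat"))           # 0.0
print(round(word_error_rate("the cat sat", "the bat sat"), 3))  # 0.333
```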
manuu01/ppo-SnowballTarget
manuu01
2023-07-28T09:30:48Z
7
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-07-28T09:30:46Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: manuu01/ppo-SnowballTarget 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
tommilyjones/swin-tiny-patch4-window7-224-finetuned-hateful-meme-restructured
tommilyjones
2023-07-28T09:26:38Z
215
0
transformers
[ "transformers", "pytorch", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-tiny-patch4-window7-224", "base_model:finetune:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-28T09:04:55Z
--- license: apache-2.0 base_model: microsoft/swin-tiny-patch4-window7-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-hateful-meme-restructured results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.52 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-hateful-meme-restructured This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8519 - Accuracy: 0.52 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6441 | 0.99 | 66 | 0.7419 | 0.492 | | 0.6368 | 2.0 | 133 | 0.7235 | 0.51 | | 0.6157 | 2.99 | 199 | 0.7516 | 0.504 | | 0.5928 | 4.0 | 266 | 0.8009 | 0.502 | | 0.5735 | 4.99 | 332 | 0.8270 | 0.508 | | 0.5559 | 6.0 | 399 | 0.7804 | 0.502 | | 0.5533 | 6.99 | 465 | 0.8053 | 0.486 | | 0.5541 | 8.0 | 532 | 0.8078 | 0.504 | | 0.5218 | 8.99 | 
598 | 0.8519 | 0.52 | | 0.5226 | 9.92 | 660 | 0.8522 | 0.508 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
HaziqRazali/Reinforce-CartPole
HaziqRazali
2023-07-28T09:20:00Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T09:19:50Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 490.40 +/- 19.53 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
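The Reinforce agent above is trained on discounted episode returns. A minimal sketch of computing G_t for every timestep of an episode (a generic sketch of the standard computation, not the author's code):

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute G_t = r_t + gamma * G_{t+1} for each timestep, iterating backwards over the episode."""
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    return returns

print(discounted_returns([1.0, 1.0, 1.0], gamma=0.5))  # [1.75, 1.5, 1.0]
```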
lixsh6/wsdm23_pretrain
lixsh6
2023-07-28T09:19:26Z
0
0
null
[ "arxiv:2302.13756", "arxiv:2302.13498", "region:us" ]
null
2023-01-13T07:08:47Z
# WSDM Cup 2023 BERT Checkpoints: - This repo contains the checkpoints from our submissions to two WSDM Cup 2023 competitions: [Pre-training for Web Search](https://aistudio.baidu.com/aistudio/competition/detail/536/0/leaderboard) and [Unbiased Learning for Web Search](https://aistudio.baidu.com/aistudio/competition/detail/534/0/leaderboard). ## Paper released Please refer to our papers for the details of each competition: - Task 1, Unbiased Learning to Rank: [Multi-Feature Integration for Perception-Dependent Examination-Bias Estimation](https://arxiv.org/pdf/2302.13756.pdf) - Task 2, Pretraining for Web Search: [Pretraining De-Biased Language Model with Large-scale Click Logs for Document Ranking](https://arxiv.org/pdf/2302.13498.pdf) ## Method Overview - Pre-training BERT with MLM and CTR prediction loss (or a multi-task CTR prediction loss). - Finetuning BERT with a pairwise ranking loss. - Obtaining prediction scores from the different BERTs. - Ensemble learning to combine BERT features and sparse features. Details will be updated in the submission paper.
#### BERT features: ##### 1) Model details: [Checkpoints Download Here](https://huggingface.co/lixsh6/wsdm23_pretrain/tree/main) | Index| Model Flag | Method | Pretrain step | Finetune step | DCG on leaderboard | | --------| -------- | ------- |---------------| ------- | ------- | | 1| large_group2_wwm_from_unw4625K | M1 | 1700K | 5130 | 11.96214 | | 2| large_group2_wwm_from_unw4625K | M1 | 1700K | 5130 | NAN | | 3| base_group2_wwm | M2 | 2150K | 5130 | ~11.32363 | | 4| large_group2_wwm_from_unw4625K | M1 | 590K | 5130 | 11.94845 | | 5| large_group2_wwm_from_unw4625K | M1 | 1700K | 4180 | NAN | | 6| large_group2_mt_pretrain | M3 | 1940K | 5130 | NAN | ##### 2) Method details | Method | Model Layers | Details | | -------- | ------- | ------- | | M1 | 24 | WWM & CTR prediction as pretraining tasks| | M2 | 12 | WWM & CTR prediction as pretraining tasks | | M3 | 24 | WWM & Multi-task CTR prediction as pretraining tasks| ## Contacts - Xiangsheng Li: [lixsh6@gmail.com](lixsh6@gmail.com). - Xiaoshu Chen: [xschenranker@gmail.com](xschenranker@gmail.com)
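The method overview above mentions finetuning with a pairwise ranking loss. One common form is a margin (hinge) loss over a relevant/irrelevant document pair — a sketch of the general idea, not necessarily the authors' exact loss:

```python
def pairwise_hinge_loss(score_pos: float, score_neg: float, margin: float = 1.0) -> float:
    """Penalise pairs where the relevant document does not outscore the irrelevant one by `margin`."""
    return max(0.0, margin - (score_pos - score_neg))

print(pairwise_hinge_loss(2.0, 0.0))  # 0.0  (pair already separated by the margin)
print(pairwise_hinge_loss(0.2, 0.0))  # 0.8  (pair violates the margin)
```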
privatelora/Lora
privatelora
2023-07-28T09:12:00Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-26T13:08:58Z
--- license: creativeml-openrail-m ---
kamalchibrani/yolov8_fall_detection_25
kamalchibrani
2023-07-28T09:10:52Z
0
0
null
[ "dataset:kamalchibrani/fall_detection", "license:openrail", "region:us" ]
null
2023-07-28T08:59:29Z
--- license: openrail datasets: - kamalchibrani/fall_detection metrics: - accuracy ---
sdocio/bne-spacy-corgale-ner-es
sdocio
2023-07-28T09:08:31Z
2
0
spacy
[ "spacy", "token-classification", "es", "license:gpl-3.0", "model-index", "region:us" ]
token-classification
2023-01-07T23:02:41Z
--- license: gpl-3.0 language: - es library_name: spacy pipeline_tag: token-classification tags: - spacy - token-classification widget: - text: "Fue antes de llegar a Sigüeiro, en el Camino de Santiago." - text: "Si te metes en el Franco desde la Alameda, vas hacia la Catedral." - text: "Y allí precisamente es Santiago el patrón del pueblo." model-index: - name: bne-spacy-corgale-ner-es results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.9721311475 - name: NER Recall type: recall value: 0.9732708089 - name: NER F Score type: f_score value: 0.9727006444 --- # Introduction spaCy NER model for Spanish trained with interviews in the domain of tourism related to the Way of Saint Jacques. It recognizes four types of entities: location (LOC), organizations (ORG), person (PER) and miscellaneous (MISC). It was fine-tuned using `PlanTL-GOB-ES/roberta-base-bne`. | Feature | Description | | --- | --- | | **Name** | `bne-spacy-corgale-ner-es` | | **Version** | `0.0.2` | | **spaCy** | `>=3.5.2,<3.6.0` | | **Default Pipeline** | `transformer`, `ner` | | **Components** | `transformer`, `ner` | ### Label Scheme <details> <summary>View label scheme (4 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `LOC`, `MISC`, `ORG`, `PER` | </details> ## Usage You can use this model with the spaCy *pipeline* for NER. ```python import spacy from spacy.pipeline import merge_entities nlp = spacy.load("bne-spacy-corgale-ner-es") nlp.add_pipe('sentencizer') example = "Fue antes de llegar a Sigüeiro, en el Camino de Santiago. Si te metes en el Franco desde la Alameda, vas hacia la Catedral. Y allí precisamente es Santiago el patrón del pueblo." 
ner_pipe = nlp(example) print(ner_pipe.ents) for token in merge_entities(ner_pipe): print(token.text, token.ent_type_) ``` ## Dataset ToDo ## Model performance entity|precision|recall|f1 -|-|-|- LOC|0.985|0.987|0.986 MISC|0.862|0.865|0.863 ORG|0.938|0.779|0.851 PER|0.921|0.941|0.931 micro avg|0.971|0.972|0.971 macro avg|0.926|0.893|0.908 weighted avg|0.971|0.972|0.971
caozhanqiang/llama2-glora-finetunined-french
caozhanqiang
2023-07-28T09:08:14Z
3
0
peft
[ "peft", "region:us" ]
null
2023-07-28T09:07:56Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
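The quantization settings listed in the card above map onto a `transformers` `BitsAndBytesConfig`; assuming that API, a configuration like this card's would correspond roughly to the following fragment:

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the card's values: 4-bit NF4 quantization, no double quantization, fp16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```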
michaelfeil/ct2fast-starcoderbase-3b
michaelfeil
2023-07-28T09:07:31Z
4
0
transformers
[ "transformers", "gpt_bigcode", "text-generation", "ctranslate2", "int8", "float16", "code", "dataset:bigcode/the-stack-dedup", "arxiv:1911.02150", "arxiv:2205.14135", "arxiv:2207.14255", "arxiv:2305.06161", "license:bigcode-openrail-m", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-28T08:45:03Z
--- pipeline_tag: text-generation inference: true widget: - text: 'def print_hello_world():' example_title: Hello world group: Python license: bigcode-openrail-m datasets: - bigcode/the-stack-dedup metrics: - code_eval library_name: transformers tags: - ctranslate2 - int8 - float16 - code model-index: - name: StarCoderBase-3B results: - task: type: text-generation dataset: type: openai_humaneval name: HumanEval metrics: - name: pass@1 type: pass@1 value: 21.46 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (C++) metrics: - name: pass@1 type: pass@1 value: 19.43 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (Java) metrics: - name: pass@1 type: pass@1 value: 19.25 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (JavaScript) metrics: - name: pass@1 type: pass@1 value: 21.32 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (PHP) metrics: - name: pass@1 type: pass@1 value: 18.55 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (Lua) metrics: - name: pass@1 type: pass@1 value: 18.04 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (Rust) metrics: - name: pass@1 type: pass@1 value: 16.32 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (Swift) metrics: - name: pass@1 type: pass@1 value: 9.98 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (Julia) metrics: - name: pass@1 type: pass@1 value: 16.10 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (R) metrics: - name: pass@1 type: pass@1 value: 10.10 verified: false extra_gated_prompt: >- ## Model License Agreement Please read the BigCode [OpenRAIL-M 
license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) agreement before accepting it. extra_gated_fields: I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements: checkbox --- # Fast-Inference with Ctranslate2 Speed up inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU. quantized version of [bigcode/starcoderbase-3b](https://huggingface.co/bigcode/starcoderbase-3b) ```bash pip install hf-hub-ctranslate2>=2.12.0 ctranslate2>=3.17.1 ``` ```python # from transformers import AutoTokenizer model_name = "michaelfeil/ct2fast-starcoderbase-3b" from hf_hub_ctranslate2 import GeneratorCT2fromHfHub model = GeneratorCT2fromHfHub( # load in int8 on CUDA model_name_or_path=model_name, device="cuda", compute_type="int8_float16", # tokenizer=AutoTokenizer.from_pretrained("{ORG}/{NAME}") ) outputs = model.generate( text=["def fibonnaci(", "User: How are you doing? Bot:"], max_length=64, include_prompt_in_result=False ) print(outputs) ``` Checkpoint compatible with [ctranslate2>=3.17.1](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2) - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` Converted on 2023-07-28 using ``` LLama-2 -> removed <pad> token. ``` # Licence and other remarks: This is just a quantized version. Licence conditions are intended to be identical to the original Hugging Face repo. # Original description # StarCoder-3B 3B version of [StarCoderBase](https://huggingface.co/bigcode/starcoderbase). ## Table of Contents 1. [Model Summary](##model-summary) 2. [Use](##use) 3. [Limitations](##limitations) 4. [Training](##training) 5. [License](##license) 6.
[Citation](##citation) ## Model Summary StarCoder-3B is a 3B parameter model trained on 80+ programming languages from [The Stack (v1.2)](https://huggingface.co/datasets/bigcode/the-stack), with opt-out requests excluded. The model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150), [a context window of 8192 tokens](https://arxiv.org/abs/2205.14135), and was trained using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on 1 trillion tokens. - **Repository:** [bigcode/Megatron-LM](https://github.com/bigcode-project/Megatron-LM) - **Project Website:** [bigcode-project.org](https://www.bigcode-project.org) - **Paper:** [💫StarCoder: May the source be with you!](https://arxiv.org/abs/2305.06161) - **Point of Contact:** [contact@bigcode-project.org](mailto:contact@bigcode-project.org) - **Languages:** 80+ Programming languages ## Use ### Intended use The model was trained on GitHub code. As such it is _not_ an instruction model and commands like "Write a function that computes the square root." do not work well. However, by using the [Tech Assistant prompt](https://huggingface.co/datasets/bigcode/ta-prompt) you can turn it into a capable technical assistant. 
**Feel free to share your generations in the Community tab!** ### Generation ```python # pip install -q transformers from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "bigcode/starcoderbase-3b" device = "cuda" # for GPU usage or "cpu" for CPU usage tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device) inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device) outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` ### Fill-in-the-middle Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix part of the input and output: ```python input_text = "<fim_prefix>def print_hello_world():\n <fim_suffix>\n print('Hello world!')<fim_middle>" inputs = tokenizer.encode(input_text, return_tensors="pt").to(device) outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` ### Attribution & Other Requirements The pretraining dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/starcoder-search) that lets you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code. # Limitations The model has been trained on source code from 80+ programming languages. The predominant natural language in source code is English although other languages are also present. As such the model is capable of generating code snippets provided some context but the generated code is not guaranteed to work as intended. It can be inefficient, contain bugs or exploits. See [the paper](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) for an in-depth discussion of the model limitations.
# Training ## Model - **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective - **Pretraining steps:** 500k - **Pretraining tokens:** 1 trillion - **Precision:** bfloat16 ## Hardware - **GPUs:** 256 Tesla A100 - **Training time:** 12 days ## Software - **Orchestration:** [Megatron-LM](https://github.com/bigcode-project/Megatron-LM) - **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch) - **BP16 if applicable:** [apex](https://github.com/NVIDIA/apex) # License The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement). # Citation ``` @article{li2023starcoder, title={StarCoder: may the source be with you!}, author={Raymond Li and Loubna Ben Allal and Yangtian Zi and Niklas Muennighoff and Denis Kocetkov and Chenghao Mou and Marc Marone and Christopher Akiki and Jia Li and Jenny Chim and Qian Liu and Evgenii Zheltonozhskii and Terry Yue Zhuo and Thomas Wang and Olivier Dehaene and Mishig Davaadorj and Joel Lamy-Poirier and João Monteiro and Oleh Shliazhko and Nicolas Gontier and Nicholas Meade and Armel Zebaze and Ming-Ho Yee and Logesh Kumar Umapathi and Jian Zhu and Benjamin Lipkin and Muhtasham Oblokulov and Zhiruo Wang and Rudra Murthy and Jason Stillerman and Siva Sankalp Patel and Dmitry Abulkhanov and Marco Zocca and Manan Dey and Zhihan Zhang and Nour Fahmy and Urvashi Bhattacharyya and Wenhao Yu and Swayam Singh and Sasha Luccioni and Paulo Villegas and Maxim Kunakov and Fedor Zhdanov and Manuel Romero and Tony Lee and Nadav Timor and Jennifer Ding and Claire Schlesinger and Hailey Schoelkopf and Jan Ebert and Tri Dao and Mayank Mishra and Alex Gu and Jennifer Robinson and Carolyn Jane Anderson and Brendan Dolan-Gavitt and Danish Contractor and Siva Reddy and Daniel Fried and Dzmitry Bahdanau and Yacine Jernite and Carlos Muñoz Ferrandis and Sean Hughes and Thomas Wolf and Arjun Guha and 
Leandro von Werra and Harm de Vries}, year={2023}, eprint={2305.06161}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
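The fill-in-the-middle format shown earlier in the StarCoder card is plain string assembly around three sentinel tokens: the model is asked to generate the code that belongs between a prefix and a suffix. A sketch of building such a prompt:

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle prompt: the model generates the span between prefix and suffix."""
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

prompt = build_fim_prompt("def print_hello_world():\n    ", "\n    print('Hello world!')")
print(prompt.startswith("<fim_prefix>"))  # True
print(prompt.endswith("<fim_middle>"))    # True
```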
oljike/nurtas_db_lora
oljike
2023-07-28T09:04:38Z
0
1
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-07-28T07:05:24Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: a photo of kairat nurtas tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - oljike/nurtas_db_lora These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of kairat nurtas using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. ![img_0](./image_0.png) LoRA for the text encoder was enabled: False.
dvs/videomae-base-finetuned-movienet
dvs
2023-07-28T09:04:31Z
60
0
transformers
[ "transformers", "pytorch", "tensorboard", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "base_model:finetune:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2023-07-28T05:53:05Z
--- license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base tags: - generated_from_trainer model-index: - name: videomae-base-finetuned-movienet results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-movienet This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 1.2558 - eval_accuracy: 0.6823 - eval_runtime: 120.548 - eval_samples_per_second: 1.593 - eval_steps_per_second: 0.199 - epoch: 6.1 - step: 1266 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 1480 ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.1 - Tokenizers 0.13.3
openlamm/lamm_7b_lora32_98k
openlamm
2023-07-28T09:00:14Z
6
0
transformers
[ "transformers", "llama", "text-generation", "en", "dataset:caojianjian/LAMM", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-06-10T15:18:52Z
--- license: apache-2.0 datasets: - caojianjian/LAMM language: - en library_name: transformers ---
tommilyjones/vit-base-patch16-224-finetuned-hateful-meme-restructured
tommilyjones
2023-07-28T08:39:28Z
192
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-28T08:04:55Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vit-base-patch16-224-finetuned-hateful-meme-restructured results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.552 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-finetuned-hateful-meme-restructured This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7152 - Accuracy: 0.552 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6546 | 0.99 | 66 | 0.7185 | 0.52 | | 0.6222 | 2.0 | 133 | 0.7152 | 0.552 | | 0.5986 | 2.99 | 199 | 0.7344 | 0.542 | | 0.5535 | 4.0 | 266 | 0.7782 | 0.514 | | 0.5377 | 4.99 | 332 | 0.8329 | 0.514 | | 0.5115 | 6.0 | 399 | 0.7596 | 0.528 | | 0.5133 | 6.99 | 465 | 0.8151 | 0.512 | | 0.511 | 8.0 | 532 | 0.7897 | 0.538 | | 0.4712 | 8.99 | 598 | 0.8539 | 0.514 | | 0.4626 | 9.92 | 660 | 
0.8449 | 0.522 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
iliyaML/my_awesome_eli5_clm-model
iliyaML
2023-07-28T08:33:55Z
132
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "base_model:distilbert/distilgpt2", "base_model:finetune:distilbert/distilgpt2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-28T08:00:06Z
--- license: apache-2.0 base_model: distilgpt2 tags: - generated_from_trainer model-index: - name: my_awesome_eli5_clm-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_eli5_clm-model This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.7399 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.8773 | 1.0 | 1121 | 3.7568 | | 3.7788 | 2.0 | 2242 | 3.7430 | | 3.7441 | 3.0 | 3363 | 3.7399 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.1 - Tokenizers 0.13.3
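The card reports only cross-entropy losses. Assuming (as is standard for Trainer-based causal-LM fine-tunes) that these are mean cross-entropy in nats, the corresponding perplexity is simply the exponential of the loss:

```python
import math

# Evaluation losses from the table above (assumed: mean cross-entropy in nats).
eval_losses = {1: 3.7568, 2: 3.7430, 3: 3.7399}

# Under that assumption, perplexity = exp(loss).
perplexities = {epoch: math.exp(loss) for epoch, loss in eval_losses.items()}

for epoch, ppl in sorted(perplexities.items()):
    print(f"epoch {epoch}: perplexity = {ppl:.2f}")
```

The final checkpoint's loss of 3.7399 corresponds to a perplexity of roughly 42.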
darveen/llama2-qlora-finetuned-alpaca-40steps
darveen
2023-07-28T08:27:20Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-23T03:32:03Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
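The bullet list above is the serialized `bitsandbytes` quantization config. A hedged reconstruction of how those values map onto a `transformers.BitsAndBytesConfig` (an assumption; the card only shows the resulting values, not the original call) is sketched below, with the executable part kept as a plain dict so it runs without GPU libraries:

```python
# The 4-bit settings listed above, as the keyword arguments one would pass to
# transformers.BitsAndBytesConfig (hypothetical reconstruction).
bnb_kwargs = {
    "load_in_4bit": True,
    "bnb_4bit_quant_type": "nf4",
    "bnb_4bit_use_double_quant": False,
    "bnb_4bit_compute_dtype": "float16",
}

# With transformers, bitsandbytes and torch installed, loading would look like:
#   from transformers import AutoModelForCausalLM, BitsAndBytesConfig
#   import torch
#   config = BitsAndBytesConfig(
#       load_in_4bit=True,
#       bnb_4bit_quant_type="nf4",
#       bnb_4bit_use_double_quant=False,
#       bnb_4bit_compute_dtype=torch.float16,
#   )
#   model = AutoModelForCausalLM.from_pretrained(base_model_id,
#                                                quantization_config=config)
print(bnb_kwargs)
```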
Imxxn/RLCourseU5-Pyramids
Imxxn
2023-07-28T08:22:54Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-07-28T08:22:50Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: Imxxn/RLCourseU5-Pyramids 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
HaziqRazali/dqn-SpaceInvadersNoFrameskip-v4
HaziqRazali
2023-07-28T08:12:40Z
4
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T08:12:17Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 133.50 +/- 35.22 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga HaziqRazali -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga HaziqRazali -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga HaziqRazali ``` ## Hyperparameters ```python OrderedDict([('batch_size', 
32), ('buffer_size', 1000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 1000), ('n_timesteps', 10000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 100), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
LarryAIDraw/genshinfull1-000006
LarryAIDraw
2023-07-28T07:59:16Z
0
1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-27T17:04:46Z
--- license: creativeml-openrail-m --- https://civitai.com/models/108649/genshin-impact-all-in-one-or-character-lora-43336
Imxxn/RLCourseU5-SnowballTarget
Imxxn
2023-07-28T07:49:37Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-07-28T07:49:33Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: Imxxn/RLCourseU5-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
loucad/swin-tiny-patch4-window7-224-finetuned-eurosat
loucad
2023-07-28T07:48:19Z
214
0
transformers
[ "transformers", "pytorch", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-tiny-patch4-window7-224", "base_model:finetune:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-05-19T07:03:50Z
--- license: apache-2.0 base_model: microsoft/swin-tiny-patch4-window7-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-eurosat results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9751851851851852 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0797 - Accuracy: 0.9752 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.284 | 1.0 | 190 | 0.1307 | 0.9559 | | 0.1839 | 2.0 | 380 | 0.1056 | 0.9681 | | 0.1339 | 3.0 | 570 | 0.0797 | 0.9752 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.1 - Tokenizers 0.13.3
DAMO-NLP-MT/polylm-chat-13b
DAMO-NLP-MT
2023-07-28T07:43:26Z
1,472
6
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "arxiv:2307.06018", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-28T06:46:08Z
--- license: apache-2.0 --- # Model Card for PolyLM-Multialpaca This model is finetuned on [polyLM-13b](https://huggingface.co/DAMO-NLP-MT/polylm-13b) using the following datasets: # Demo [Open](https://modelscope.cn/studios/damo/demo-polylm-multialpaca-13b/summary) # Bias, Risks, and Limitations The information below in this section is copied from the model's [official model card](https://arxiv.org/pdf/2307.06018.pdf): > Our contributions are fully methodological: adding the support of multilingualism to LLM during training and SFT phases. It is unavoidable that PolyLM might exhibit several common deficiencies of language models, e.g. hallucination and toxicity. PolyLM should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application. # Citation **BibTeX:** ```bibtex @misc{wei2023polylm, title={PolyLM: An Open Source Polyglot Large Language Model}, author={Xiangpeng Wei and Haoran Wei and Huan Lin and Tianhao Li and Pei Zhang and Xingzhang Ren and Mei Li and Yu Wan and Zhiwei Cao and Binbin Xie and Tianxiang Hu and Shangjie Li and Binyuan Hui and Bowen Yu and Dayiheng Liu and Baosong Yang and Fei Huang and Jun Xie}, year={2023}, eprint={2307.06018}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
Lajonbot/Llama-2-7b-chat-hf-instruct-pl-lora_unload
Lajonbot
2023-07-28T07:31:07Z
1,398
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "facebook", "meta", "llama-2", "pl", "dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-28T07:20:24Z
--- language: - pl datasets: - Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish license: other model_type: llama-2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 ---
BlunderPanini/Taxi-v3
BlunderPanini
2023-07-28T07:26:32Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T07:26:28Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="BlunderPanini/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Vasanth/criccomm_to_cricnewss
Vasanth
2023-07-28T07:23:48Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-large", "base_model:finetune:google/flan-t5-large", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-28T06:03:52Z
--- license: apache-2.0 base_model: google/flan-t5-large tags: - generated_from_trainer model-index: - name: criccomm_to_cricnewss results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # criccomm_to_cricnewss This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.1 - Tokenizers 0.13.3
AnushaPalle/my_awesome_open_llama_3b_clm-model
AnushaPalle
2023-07-28T07:20:57Z
132
0
transformers
[ "transformers", "pytorch", "tensorboard", "llama", "text-generation", "generated_from_trainer", "base_model:openlm-research/open_llama_3b", "base_model:finetune:openlm-research/open_llama_3b", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-27T17:01:38Z
--- license: apache-2.0 base_model: openlm-research/open_llama_3b tags: - generated_from_trainer model-index: - name: my_awesome_open_llama_3b_clm-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_open_llama_3b_clm-model This model is a fine-tuned version of [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b) on the None dataset. It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 2 | nan | | No log | 2.0 | 4 | nan | | No log | 3.0 | 6 | nan | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.1 - Tokenizers 0.13.3
Lajonbot/Llama-2-7b-chat-hf-instruct-pl-lora_adapter_model
Lajonbot
2023-07-28T07:12:06Z
0
0
null
[ "facebook", "meta", "pytorch", "llama", "llama-2", "text-generation", "pl", "dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish", "license:other", "region:us" ]
text-generation
2023-07-28T07:12:05Z
--- language: - pl datasets: - Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish license: other model_type: llama-2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 ---
wayanbrati/llama2-qlora-finetunined-french
wayanbrati
2023-07-28T07:00:19Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-28T07:00:11Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
khanhdhq/finetune_vietcuna_3b_qlora_gptdata_e1_lr0.0002
khanhdhq
2023-07-28T06:56:59Z
0
0
null
[ "tensorboard", "generated_from_trainer", "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2023-07-27T10:03:42Z
--- license: bigscience-bloom-rail-1.0 tags: - generated_from_trainer model-index: - name: finetune_vietcuna_3b_qlora_gptdata_e1_lr0.0002 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetune_vietcuna_3b_qlora_gptdata_e1_lr0.0002 This model is a fine-tuned version of [bigscience/bloomz-3b](https://huggingface.co/bigscience/bloomz-3b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.30.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.1 - Tokenizers 0.13.3
PhelixZhen/LanCang-gen2-m2
PhelixZhen
2023-07-28T06:39:51Z
4
0
transformers
[ "transformers", "pytorch", "chatglm", "feature-extraction", "custom_code", "region:us" ]
feature-extraction
2023-07-28T05:54:39Z
# LanCang-gen2-m2 ### A model fine-tuned for the LanCang-AI-assistant project LanCang-AI-assistant GitHub project: https://github.com/phelixzhen/LanCang-AI-assistant/tree/main<br /><br /><br /> A model fine-tuned from chatglm2-6b. Second-generation version m2; this model only supports the message command. chatglm2-6b: https://huggingface.co/THUDM/chatglm2-6b --- license: apache-2.0 ---
jjohn23/diffuser-anime-faces-32
jjohn23
2023-07-28T06:27:52Z
30
0
diffusers
[ "diffusers", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2023-07-28T06:25:41Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute anime faces. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('jjohn23/diffuser-anime-faces-32') image = pipeline().images[0] image ```
ggul-tiger/negobot-361-weak-term-hangeul
ggul-tiger
2023-07-28T06:20:58Z
0
0
null
[ "negotiation", "ko", "dataset:ggul-tiger/negobot_361_weakcase_injected", "dataset:ggul-tiger/dealing_term", "region:us" ]
null
2023-07-26T03:18:28Z
--- datasets: - ggul-tiger/negobot_361_weakcase_injected - ggul-tiger/dealing_term language: - ko tags: - negotiation ---
Evan-Lin/Bart-Yelp-rougelastbatch-attractive1-keywordmax1-decoding
Evan-Lin
2023-07-28T06:19:46Z
47
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "trl", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
reinforcement-learning
2023-07-28T06:12:29Z
--- license: apache-2.0 tags: - trl - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="Evan-Lin//tmp/tmp3_ykz16z/Evan-Lin/Bart-Yelp-rougelastbatch-attractive1-keywordmax1-decoding") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("Evan-Lin//tmp/tmp3_ykz16z/Evan-Lin/Bart-Yelp-rougelastbatch-attractive1-keywordmax1-decoding") model = AutoModelForCausalLMWithValueHead.from_pretrained("Evan-Lin//tmp/tmp3_ykz16z/Evan-Lin/Bart-Yelp-rougelastbatch-attractive1-keywordmax1-decoding") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
yijhen38/llama2-TinyPixel-qlora-FT-chinese
yijhen38
2023-07-28T06:08:20Z
2
0
peft
[ "peft", "region:us" ]
null
2023-07-28T04:00:18Z
--- library_name: peft --- Reference fine-tuning notebook: https://colab.research.google.com/drive/12dVqXZMIVxGI0uutU6HG9RWbWPXL3vts?usp=sharing#scrollTo=mNnkgBq7Q3EU Model: [AlexanderDoria/novel17_test](https://huggingface.co/datasets/AlexanderDoria/novel17_test) Dataset: COT_Answer.txt ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
yijhen38/llama2-7B-chat-qlora-FT-chainsea
yijhen38
2023-07-28T05:58:04Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-28T05:15:12Z
--- library_name: peft --- Reference fine-tuning notebook: https://colab.research.google.com/drive/12dVqXZMIVxGI0uutU6HG9RWbWPXL3vts?usp=sharing#scrollTo=mNnkgBq7Q3EU Model: [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) Dataset: COT_Answer.txt ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
beaugogh/pythia-1.4b-deduped-sharegpt
beaugogh
2023-07-28T05:52:54Z
1,613
0
transformers
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-25T13:21:09Z
--- license: apache-2.0 --- pythia-1.4b-deduped model fine-tuned on ShareGPT data
mskhattori/wav2vec2phone-large-xlsr-jp-jdrt5N-demo3
mskhattori
2023-07-28T05:26:20Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:jonatasgrosman/wav2vec2-large-xlsr-53-japanese", "base_model:finetune:jonatasgrosman/wav2vec2-large-xlsr-53-japanese", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-28T04:04:01Z
--- license: apache-2.0 base_model: jonatasgrosman/wav2vec2-large-xlsr-53-japanese tags: - generated_from_trainer metrics: - wer model-index: - name: wav2vec2phone-large-xlsr-jp-jdrt5N-demo3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2phone-large-xlsr-jp-jdrt5N-demo3 This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-japanese](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-japanese) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7083 - Wer: 0.6203 - Cer: 0.5618 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 1.658 | 1.0 | 567 | 1.5635 | 0.8506 | 0.6068 | | 1.5568 | 2.0 | 1134 | 1.3338 | 0.8214 | 0.5796 | | 1.4061 | 3.0 | 1701 | 1.1097 | 0.7830 | 0.5731 | | 1.2577 | 4.0 | 2268 | 0.9200 | 0.7091 | 0.5671 | | 1.2021 | 5.0 | 2835 | 0.8400 | 0.6761 | 0.5700 | | 1.1708 | 6.0 | 3402 | 0.7893 | 0.6536 | 0.5711 | | 1.2189 | 7.0 | 3969 | 0.7496 | 0.6395 | 0.5640 | | 1.1133 | 8.0 | 4536 | 0.7273 | 0.6287 | 0.5674 | | 1.1771 | 9.0 | 5103 | 0.7126 | 0.6213 | 0.5612 | | 1.1452 | 10.0 | 5670 | 0.7083 | 0.6203 | 0.5618 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
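The hyperparameters above use gradient accumulation; the listed total batch size and the step counts in the results table are consistent with per-device batch size × accumulation steps:

```python
# Effective batch-size arithmetic from the hyperparameters above
# (single device assumed; the card lists no multi-GPU setting).
per_device_batch_size = 16
gradient_accumulation_steps = 2
total_train_batch_size = per_device_batch_size * gradient_accumulation_steps
assert total_train_batch_size == 32  # matches the card

# 567 optimizer steps per epoch therefore cover about
# 567 * 32 = 18144 training examples per epoch.
examples_per_epoch = 567 * total_train_batch_size
print(examples_per_epoch)
```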
IHaveNoClueAndIMustPost/llama2-22b-blocktriangular-GGML
IHaveNoClueAndIMustPost
2023-07-28T05:06:52Z
0
0
null
[ "llama2", "llama", "dataset:togethercomputer/RedPajama-Data-1T-Sample", "region:us" ]
null
2023-07-27T19:27:59Z
--- datasets: - togethercomputer/RedPajama-Data-1T-Sample tags: - llama2 - llama --- A second model merge by [chargoddard](https://huggingface.co/chargoddard). A GGML conversion of the previous merge can be found [here](https://huggingface.co/IHaveNoClueAndIMustPost/Llama-2-22B-GGML).<br> I have no idea what I'm doing so if something doesn't work as it should or not at all that's likely on me, not the models themselves.<br><br> Description copied from the [original repo](https://huggingface.co/chargoddard/llama2-22b-blocktriangular) below. <i> Similar to llama2-22b, but with BLOCK_DIAGONAL=false in the merge and twice the fine-tuning tokens. Again, not intended for direct use - meant as a base for further tuning and merging.</i>
Chattiori/LunaMix
Chattiori
2023-07-28T04:37:12Z
0
1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-06-27T02:29:11Z
--- license: creativeml-openrail-m --- <span style="font-size: 250%; font-weight:bold; color:#FF8000; -webkit-text-stroke: 1.5px blue;">== LunaMix</span> ***<span style="color:#FF8000;">Beauty like moonlight</span>*** --- LunaMix is a merge model with beautiful and realistic features. ***<span style="font-size: 150%; color:#0080ff;">This model is merged using [Chattiori's Model Merger](https://github.com/Faildes/merge-models).</span>*** # Use (boxes marked [X] are allowed) [X] Selling Images [X] Using/Sharing Model on Generation Service [ ] Selling Model/Merges Using This Model [X] Share Merges Without Author Credits [ ] Merging Without Baking Recipe [X] Different Permission Sharing Merges # Merge Recipe * Sum Twice, [ChillyMix-V2](https://civitai.com/models/58772/chillymix) + [midmix-v2.0](https://civitai.com/models/91837/midmix) + [BlessingMix-V1](https://civitai.com/models/94179/blessing-mix-aka-bracing-evo-mix-clone), rand_alpha(0.2, 0.6, 6319827 ) rand_beta(0.2, 0.65, 2321835) >> TEMP-1A * Sum Twice, [XtReMiXUltiMateMerge-v1.5](https://civitai.com/models/93589/xtremix-ultimate-merge) + [XXMix_9realistic-v4.0](https://civitai.com/models/47274/xxmix9realistic) + [majicMIXrealistic-v6](https://civitai.com/models/43331/majicmix-realistic), rand_alpha(0.1, 0.7, 362481 ) rand_beta(0.4, 0.75, 8163924) >> TEMP-1B * Sum Twice, [Milky-Chicken-v1.1](https://civitai.com/models/91662/milky-chicken) + [MixAnimeRealistic-V1.9F](https://civitai.com/models/88536/mix-anime-realistic-v17) + [AddictiveFuture_Realistic_SemiAsian-V1](https://civitai.com/models/94725/addictivefuturerealisticsemiasian), rand_alpha(0.15, 0.65, 3361475 ) rand_beta(0.2, 0.65, 9143023) >> TEMP-1C * Sum Twice, TEMP-1A + [kisaragi_mix-v2.2](https://civitai.com/models/45757/kisaragimix) + [ShampooMix-v4](https://civitai.com/models/33918/shampoo-mix), rand_alpha(0.1, 0.65, 351723 ) rand_beta(0.25, 0.75, 7153851) >> TEMP-2A * Sum Twice, TEMP-1B + [ThisIsReal-V1](https://civitai.com/models/93529/thisisreal-v1) + 
[X-FlareMix-Real](https://civitai.com/models/87533/x-flare-mix), rand_alpha(0.15, 0.7, 3197455 ) rand_beta(0.2, 0.6, 672481) >> TEMP-2B * Sum Twice, TEMP-1C + [VivCharMix-v5_NewEra](https://civitai.com/models/29422/vivcharmix) + [KawaiiRealisticAsianMix-v0.3](https://civitai.com/models/83766/kawaii-realistic-asian-mix), rand_alpha(0.1, 0.55, 4615283 ) rand_beta(0.2, 0.6, 7153994) >> TEMP-2C * Sum Twice, TEMP-2A + TEMP-2B + TEMP-2C, rand_alpha(0.1, 0.55, 7315854 ) rand_beta(0.1, 0.55, 6142873) >> LunaMix <p> <img src="https://huggingface.co/Chattiori/LunaMix/resolve/main/ex0.png" width="300"> <img src="https://huggingface.co/Chattiori/LunaMix/resolve/main/ex1.png" width="300"> <img src="https://huggingface.co/Chattiori/LunaMix/resolve/main/ex2.png" width="300"> </p>
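The recipe's `rand_alpha(min, max, seed)` / `rand_beta(min, max, seed)` entries come from [Chattiori's Model Merger](https://github.com/Faildes/merge-models). A plausible reading (an assumption; check the merger's source for the exact behavior) is a seeded uniform draw in `[min, max]`, which is what makes a recipe like this reproducible:

```python
import random

def rand_uniform(lo: float, hi: float, seed: int) -> float:
    """Hypothetical sketch of rand_alpha/rand_beta: a seeded uniform draw,
    so a given (lo, hi, seed) triple always yields the same merge ratio."""
    return random.Random(seed).uniform(lo, hi)

alpha = rand_uniform(0.2, 0.6, 6319827)   # first rand_alpha in the recipe
beta = rand_uniform(0.2, 0.65, 2321835)   # first rand_beta in the recipe

# Deterministic and bounded by the requested range.
assert rand_uniform(0.2, 0.6, 6319827) == alpha
assert 0.2 <= alpha <= 0.6 and 0.2 <= beta <= 0.65
print(alpha, beta)
```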
YarramsettiNaresh/dqn-SpaceInvadersNoFrameskip-v4
YarramsettiNaresh
2023-07-28T03:28:57Z
10
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-25T12:14:36Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 851.50 +/- 275.82 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga YarramsettiNaresh -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga YarramsettiNaresh -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga YarramsettiNaresh ``` ## Hyperparameters ```python 
OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 10000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
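As a reading aid, here is a minimal sketch (not part of the original training code) of the linear epsilon-greedy exploration schedule implied by `exploration_fraction` and `exploration_final_eps` above — SB3's DQN decays epsilon linearly over the first fraction of training, then holds it constant:

```python
def epsilon_at(step: int,
               n_timesteps: int = 10_000_000,
               exploration_fraction: float = 0.1,
               final_eps: float = 0.01,
               initial_eps: float = 1.0) -> float:
    """Linear epsilon decay: from initial_eps down to final_eps over the
    first exploration_fraction of training, then constant at final_eps."""
    progress = min(step / (exploration_fraction * n_timesteps), 1.0)
    return initial_eps + progress * (final_eps - initial_eps)

# With the hyperparameters above, epsilon reaches 0.01 after 1M of the 10M steps.
```

For example, `epsilon_at(500_000)` is 0.505, halfway through the decay window.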
PAIXAI/Astrid-7B
PAIXAI
2023-07-28T03:02:52Z
19
22
transformers
[ "transformers", "pytorch", "RefinedWebModel", "text-generation", "gpt", "llm", "large language model", "PAIX", "custom_code", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-07T01:56:35Z
--- language: - en library_name: transformers tags: - gpt - llm - large language model - PAIX inference: true thumbnail: https://static.wixstatic.com/media/bdee4e_d0af74523fa64a998d4cfb894e8cd3bb~mv2.png/v1/crop/x_40,y_663,w_1954,h_663/fill/w_342,h_116,al_c,q_85,usm_0.66_1.00_0.01,enc_auto/PAIX%20Logo%20(2).png --- # Model Card ## Summary The Astrid-7B-1 architecture includes an RWForCausalLM transformer with word embeddings, a module list of 32 DecoderLayers, and a linear lm_head. Each DecoderLayer includes an input layer normalization, a self-attention mechanism, and a multi-layer perceptron (MLP). It's part of our mission to make AI technology accessible to everyone, focusing on personalization, data privacy, and transparent AI governance. Trained in English, it's a versatile tool for a variety of applications. This model is one of the many models available on our platform, and we currently have 1B and 7B open-source models. This model was trained by [PAIX.Cloud](https://www.paix.cloud/). - Wait list: [Wait List](https://www.paix.cloud/join-waitlist) ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed. 
```bash pip install transformers==4.30.1 pip install accelerate==0.20.3 pip install torch==2.0.0 ``` ```python import torch from transformers import pipeline generate_text = pipeline( model="<path_to_local_folder>", torch_dtype="auto", trust_remote_code=True, use_fast=True, device_map={"": "cuda:0"}, ) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer: ```python print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"]) ``` ```bash <|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|> ``` ```python import torch from h2oai_pipeline import H2OTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( "<path_to_local_folder>", use_fast=True, padding_side="left", trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( "<path_to_local_folder>", torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "<path_to_local_folder>" # either local folder or huggingface model name # Important: The prompt needs to be in the same format the model was trained with. # You can find an example prompt in the experiment logs. 
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>" tokenizer = AutoTokenizer.from_pretrained( model_name, use_fast=True, trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) model.cuda().eval() inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda") # generate configuration can be modified to your needs tokens = model.generate( **inputs, min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True )[0] tokens = tokens[inputs["input_ids"].shape[1]:] answer = tokenizer.decode(tokens, skip_special_tokens=True) print(answer) ``` ## Model Architecture ``` RWForCausalLM( (transformer): RWModel( (word_embeddings): Embedding(65024, 4544) (h): ModuleList( (0-31): 32 x DecoderLayer( (input_layernorm): LayerNorm((4544,), eps=1e-05, elementwise_affine=True) (self_attention): Attention( (maybe_rotary): RotaryEmbedding() (query_key_value): Linear(in_features=4544, out_features=4672, bias=False) (dense): Linear(in_features=4544, out_features=4544, bias=False) (attention_dropout): Dropout(p=0.0, inplace=False) ) (mlp): MLP( (dense_h_to_4h): Linear(in_features=4544, out_features=18176, bias=False) (act): GELU(approximate='none') (dense_4h_to_h): Linear(in_features=18176, out_features=4544, bias=False) ) ) ) (ln_f): LayerNorm((4544,), eps=1e-05, elementwise_affine=True) ) (lm_head): Linear(in_features=4544, out_features=65024, bias=False) ) ``` ## Model Configuration ## Model Validation Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). 
```bash CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=<path_to_local_folder> --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log ``` ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. - Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. - Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. 
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
Jonathaniu/llama2-breast-cancer-13b-knowledge-epoch-10
Jonathaniu
2023-07-28T02:46:15Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-28T02:45:54Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False ### Framework versions - PEFT 0.4.0.dev0
dleiferives/dwane
dleiferives
2023-07-28T02:30:30Z
193
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-28T02:30:22Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: dwane results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.20000000298023224 --- # dwane Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### dwayne johnson ![dwayne johnson](images/dwayne_johnson.jpg) #### dwayne the rock johnson ![dwayne the rock johnson](images/dwayne_the_rock_johnson.jpg)
khnhar/capturingYoutubeVideo
khnhar
2023-07-28T02:22:46Z
0
0
null
[ "region:us" ]
null
2023-07-28T01:58:32Z
An AI module that finds images containing a keyword you want to search for within a YouTube video. Applications: identifying criminals in CCTV footage, looking up the license plates of stolen vehicles, etc.
MichelNivard/starcoderbase_3b_for_R
MichelNivard
2023-07-28T02:22:27Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-27T12:43:24Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
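For readers re-creating this setup, the listed values map onto a quantization config roughly like the sketch below (a plain-dict sketch with a consistency check; the exact `BitsAndBytesConfig` constructor call is an assumption, since the card only lists the values):

```python
# Values copied from the card above; keys mirror the bitsandbytes config names.
quant_config = {
    "load_in_8bit": False,
    "load_in_4bit": True,
    "llm_int8_threshold": 6.0,
    "llm_int8_skip_modules": None,
    "llm_int8_enable_fp32_cpu_offload": False,
    "llm_int8_has_fp16_weight": False,
    "bnb_4bit_quant_type": "nf4",
    "bnb_4bit_use_double_quant": True,
    "bnb_4bit_compute_dtype": "bfloat16",
}

def validate(cfg: dict) -> dict:
    # 8-bit and 4-bit loading are mutually exclusive.
    assert not (cfg["load_in_8bit"] and cfg["load_in_4bit"])
    return cfg

validate(quant_config)
```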
zpattdev/ppo-pyramids
zpattdev
2023-07-28T02:08:26Z
15
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-07-28T02:08:24Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: zpattdev/ppo-pyramids 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
cgr28/a2c-PandaReachDense-v2
cgr28
2023-07-28T02:06:34Z
1
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T02:03:40Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -2.76 +/- 0.55 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of an **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is an assumption based on the usual SB3 Hub naming convention): ```python from stable_baselines3 import A2C from huggingface_sb3 import load_from_hub # Download the checkpoint from the Hub and load it (filename assumed) checkpoint = load_from_hub("cgr28/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip") model = A2C.load(checkpoint) ```
weiren119/distilhubert-finetuned-finetuned-gtzan
weiren119
2023-07-28T01:39:40Z
159
0
transformers
[ "transformers", "pytorch", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:weiren119/distilhubert-finetuned-gtzan", "base_model:finetune:weiren119/distilhubert-finetuned-gtzan", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2023-07-28T00:07:04Z
--- license: apache-2.0 base_model: weiren119/distilhubert-finetuned-gtzan tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: distilhubert-finetuned-finetuned-gtzan results: - task: name: Audio Classification type: audio-classification dataset: name: GTZAN type: marsyas/gtzan config: all split: train args: all metrics: - name: Accuracy type: accuracy value: 0.88 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-finetuned-finetuned-gtzan This model is a fine-tuned version of [weiren119/distilhubert-finetuned-gtzan](https://huggingface.co/weiren119/distilhubert-finetuned-gtzan) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.8658 - Accuracy: 0.88 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1441 | 1.0 | 57 | 0.5206 | 0.86 | | 0.0992 | 2.0 | 114 | 0.6685 | 0.84 | | 0.0881 | 3.0 | 171 | 0.8925 | 0.76 | | 0.0792 | 4.0 | 228 | 1.0064 | 0.78 | | 0.0045 | 5.0 | 285 | 0.8283 | 0.83 | | 0.0695 | 6.0 | 342 | 0.7731 | 0.86 | | 0.0303 | 7.0 | 399 | 1.0389 | 0.82 | | 0.0026 | 8.0 | 456 | 0.9661 | 0.83 | | 0.0014 | 9.0 | 513 | 0.8917 | 0.87 | | 0.0012 | 10.0 | 570 | 0.8658 | 0.88 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.0 - 
Tokenizers 0.13.3
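As a reading aid, the warmup length implied by `lr_scheduler_warmup_ratio` above can be sketched as follows (the rounding behavior is an assumption; this is a sketch, not the trainer's exact internal code):

```python
def warmup_steps(total_training_steps: int, warmup_ratio: float) -> int:
    """Approximate number of learning-rate warmup steps for a given warmup ratio."""
    return round(total_training_steps * warmup_ratio)

# 57 steps/epoch * 10 epochs = 570 total steps -> ~57 warmup steps at ratio 0.1
```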
Jinmane/jafamix
Jinmane
2023-07-28T01:30:30Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-28T00:53:39Z
--- license: creativeml-openrail-m ---
RayGuo2023/distilbert-base-uncased-finetuned-imdb
RayGuo2023
2023-07-28T01:29:26Z
115
0
transformers
[ "transformers", "pytorch", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-27T22:56:32Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4148 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7035 | 1.0 | 157 | 2.4963 | | 2.5794 | 2.0 | 314 | 2.4271 | | 2.5358 | 3.0 | 471 | 2.4512 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.0 - Datasets 2.14.1 - Tokenizers 0.13.3
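For masked-language models, the reported cross-entropy loss is often summarized as perplexity; a small helper (not part of the original card) makes the conversion explicit:

```python
import math

def perplexity(cross_entropy_loss: float) -> float:
    """Perplexity is the exponential of the mean cross-entropy loss."""
    return math.exp(cross_entropy_loss)

# The eval loss of 2.4148 above corresponds to a perplexity of roughly 11.2.
```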
cgr28/a2c-AntBulletEnv-v0
cgr28
2023-07-28T01:09:17Z
1
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T01:08:05Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1364.78 +/- 62.29 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is an assumption based on the usual SB3 Hub naming convention): ```python from stable_baselines3 import A2C from huggingface_sb3 import load_from_hub # Download the checkpoint from the Hub and load it (filename assumed) checkpoint = load_from_hub("cgr28/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip") model = A2C.load(checkpoint) ```
JAdeojo/xlm-roberta-large-lora-consumer-complaints-cfpb
JAdeojo
2023-07-28T01:07:18Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-28T01:07:15Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0
lucas0/empath-falcon-40b
lucas0
2023-07-28T01:04:53Z
0
0
null
[ "generated_from_trainer", "base_model:tiiuae/falcon-40b-instruct", "base_model:finetune:tiiuae/falcon-40b-instruct", "license:apache-2.0", "region:us" ]
null
2023-07-20T18:29:06Z
--- license: apache-2.0 base_model: tiiuae/falcon-40b-instruct tags: - generated_from_trainer model-index: - name: empath-falcon-40b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # empath-falcon-40b This model is a fine-tuned version of [tiiuae/falcon-40b-instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.32.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
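The relationship between the per-device batch size, gradient accumulation, and the total train batch size listed above can be sketched as:

```python
def effective_batch_size(per_device_batch: int,
                         grad_accum_steps: int,
                         num_devices: int = 1) -> int:
    """Total number of examples contributing to one optimizer step."""
    return per_device_batch * grad_accum_steps * num_devices

# Matches the card: train_batch_size=4 with gradient_accumulation_steps=4
# gives total_train_batch_size=16 on a single device.
```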
NasimB/bnc-cbt-log-rarity-mixed
NasimB
2023-07-28T01:03:51Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-27T22:53:51Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: bnc-cbt-log-rarity-mixed results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bnc-cbt-log-rarity-mixed This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.0803 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.3678 | 0.29 | 500 | 5.3083 | | 5.0506 | 0.58 | 1000 | 4.8988 | | 4.727 | 0.87 | 1500 | 4.6664 | | 4.4606 | 1.16 | 2000 | 4.5274 | | 4.3096 | 1.45 | 2500 | 4.4089 | | 4.213 | 1.75 | 3000 | 4.3013 | | 4.0894 | 2.04 | 3500 | 4.2301 | | 3.9068 | 2.33 | 4000 | 4.1861 | | 3.8675 | 2.62 | 4500 | 4.1276 | | 3.8433 | 2.91 | 5000 | 4.0819 | | 3.6581 | 3.2 | 5500 | 4.0743 | | 3.5934 | 3.49 | 6000 | 4.0511 | | 3.5814 | 3.78 | 6500 | 4.0203 | | 3.4978 | 4.07 | 7000 | 4.0150 | | 3.326 | 4.36 | 7500 | 4.0140 | | 3.3207 | 4.65 | 8000 | 4.0007 | | 3.308 | 4.94 | 8500 | 3.9894 | | 3.1751 | 5.24 | 9000 | 4.0029 | | 3.144 | 5.53 | 9500 | 4.0021 | | 3.1408 | 5.82 | 10000 | 4.0013 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
thinkKenya/speecht5_finetuned_voxpopuli_sw
thinkKenya
2023-07-28T00:45:38Z
20
0
transformers
[ "transformers", "pytorch", "tensorboard", "speecht5", "text-to-audio", "generated_from_trainer", "dataset:common_voice_11_0", "endpoints_compatible", "region:us" ]
text-to-audio
2023-07-26T13:49:24Z
--- tags: - generated_from_trainer datasets: - common_voice_11_0 model-index: - name: speecht5_finetuned_voxpopuli_sw results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speecht5_finetuned_voxpopuli_sw This model was trained from scratch on the common_voice_11_0 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 19000 ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.0 - Tokenizers 0.13.3
ai-maker-space/instruct-tuned-llama-7b-hf-alpaca_gpt_4_5_000_samples
ai-maker-space
2023-07-28T00:25:58Z
7
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:c-s-ale/alpaca-gpt4-data", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-27T23:10:26Z
--- datasets: - c-s-ale/alpaca-gpt4-data language: - en --- # Model Card for Model ID This is a (slightly) instruct-tuned LLaMA 2 (7B)! This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description Fine-tuned on 5,000 samples of the Alpaca-GPT4 dataset. - **Developed by:** [Chris Alexiuk](https://www.linkedin.com/in/csalexiuk/) - **Model type:** Instruct-tuned - **Language(s) (NLP):** English - **License:** [LLamA 2 Community License](https://github.com/facebookresearch/llama/blob/main/LICENSE) - **Finetuned from model:** [llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) ## Uses See [Meta's LLaMA 2 Model Card](https://huggingface.co/meta-llama/Llama-2-7b-hf#intended-use)
chandrasutrisnotjhong/Reinforce-CartPole
chandrasutrisnotjhong
2023-07-27T23:59:26Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-27T11:16:50Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
brunoboat/Reinforce-Pixelcopter-PLE-v0
brunoboat
2023-07-27T23:43:45Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-27T23:08:18Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 13.00 +/- 11.42 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
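The `mean_reward` metric above (13.00 +/- 11.42) is conventionally the mean and standard deviation of episode returns over the evaluation episodes; a minimal sketch of that summary:

```python
import statistics

def eval_summary(episode_returns: list[float]) -> tuple[float, float]:
    """Mean and population standard deviation of evaluation episode returns."""
    return statistics.mean(episode_returns), statistics.pstdev(episode_returns)
```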
Emmy1/Mine
Emmy1
2023-07-27T23:33:25Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2023-07-27T23:33:25Z
--- license: bigscience-openrail-m ---
brunoboat/Reinforce-CartPole-v1
brunoboat
2023-07-27T23:19:15Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-27T23:18:39Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
digitaljungle/reinforce-cart-01
digitaljungle
2023-07-27T23:09:17Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-27T23:09:07Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: reinforce-cart-01 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
donskerclass/bert-finetuned-ner
donskerclass
2023-07-27T22:56:34Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-27T22:44:23Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: validation args: conll2003 metrics: - name: Precision type: precision value: 0.9338077114016217 - name: Recall type: recall value: 0.9496802423426456 - name: F1 type: f1 value: 0.941677096370463 - name: Accuracy type: accuracy value: 0.986504385706717 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0597 - Precision: 0.9338 - Recall: 0.9497 - F1: 0.9417 - Accuracy: 0.9865 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0784 | 1.0 | 1756 | 0.0742 | 0.9053 | 0.9302 | 0.9176 | 0.9793 | | 0.0404 | 2.0 | 3512 | 0.0583 | 0.9294 | 0.9485 | 0.9389 | 0.9859 | | 0.0255 | 3.0 | 5268 | 0.0597 | 0.9338 | 0.9497 | 0.9417 | 0.9865 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.1 - 
Tokenizers 0.13.3
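The reported F1 is the harmonic mean of the precision and recall above; a quick check (the helper is not part of the original card):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reproduces the card's F1 from its own precision/recall:
# f1_score(0.9338077114016217, 0.9496802423426456) ~= 0.941677
```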
JeffGuo/llama2-qlora-finetunined-french
JeffGuo
2023-07-27T22:38:26Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-27T22:38:09Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
brunoboat/Reinforce-v1
brunoboat
2023-07-27T22:30:50Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-27T22:30:40Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
NasimB/bnc-cbt-rarity-mixed
NasimB
2023-07-27T22:29:27Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-27T20:11:40Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: bnc-cbt-rarity-mixed results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bnc-cbt-rarity-mixed This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.0594 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.3665 | 0.29 | 500 | 5.3115 | | 5.0498 | 0.58 | 1000 | 4.9024 | | 4.7317 | 0.87 | 1500 | 4.6646 | | 4.4609 | 1.16 | 2000 | 4.5206 | | 4.3169 | 1.45 | 2500 | 4.4078 | | 4.2185 | 1.75 | 3000 | 4.3041 | | 4.0998 | 2.04 | 3500 | 4.2236 | | 3.9145 | 2.33 | 4000 | 4.1835 | | 3.8745 | 2.62 | 4500 | 4.1284 | | 3.8531 | 2.91 | 5000 | 4.0763 | | 3.6685 | 3.2 | 5500 | 4.0703 | | 3.598 | 3.49 | 6000 | 4.0412 | | 3.5813 | 3.78 | 6500 | 4.0117 | | 3.5089 | 4.07 | 7000 | 4.0030 | | 3.3334 | 4.36 | 7500 | 3.9999 | | 3.3274 | 4.65 | 8000 | 3.9871 | | 3.317 | 4.94 | 8500 | 3.9757 | | 3.1751 | 5.24 | 9000 | 3.9865 | | 3.1414 | 5.53 | 9500 | 3.9854 | | 3.1567 | 5.82 | 10000 | 3.9837 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
cgr28/dqn-SpaceInvadersNoFrameskip-v4
cgr28
2023-07-27T22:27:26Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-27T22:26:47Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 606.50 +/- 92.39 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga cgr28 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga cgr28 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga cgr28 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
zacdennis/gradientascent
zacdennis
2023-07-27T22:16:30Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-27T22:16:26Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: gradientascent results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 109.50 +/- 14.23 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
jieqiyu/lora
jieqiyu
2023-07-27T22:13:45Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-27T22:13:40Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
zpattdev/Reinforce-pixelcopterV0
zpattdev
2023-07-27T21:56:00Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-26T17:49:36Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-pixelcopterV0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 60.30 +/- 47.86 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction