Dataset columns (name, type, observed range):

| Column | Type | Observed range |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-29 00:38:39 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 525 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-29 00:38:28 |
| card | string | length 11 to 1.01M |
modelId: TheBloke/Augmental-Unholy-13B-GGUF
author: TheBloke
last_modified: 2023-11-11T10:49:52Z
downloads: 125
likes: 6
library_name: peft
tags: [ "peft", "gguf", "llama", "arxiv:1910.09700", "base_model:Heralax/Augmental-Unholy-13b", "base_model:adapter:Heralax/Augmental-Unholy-13b", "license:llama2", "region:us" ]
pipeline_tag: null
createdAt: 2023-11-11T09:59:26Z
card:
--- base_model: Heralax/Augmental-Unholy-13b inference: false library_name: peft license: llama2 model_creator: Evan Armstrong model_name: Augmental Unholy 13B model_type: llama prompt_template: '## {{{{charname}}}}: - You''re "{{{{charname}}}}" in this never-ending roleplay with "{{{{user}}}}". ### Input: {prompt} ### Response: (OOC) Understood. I will take this info into account for the roleplay. (end OOC) ### New Roleplay: ### Instruction: #### {{{{char}}}}: whatever the char says, this is the chat history #### {{{{user}}}}: whatever the user says, this is the chat history ... repeated some number of times ... ### Response 2 paragraphs, engaging, natural, authentic, descriptive, creative): #### {{{{char}}}}: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Augmental Unholy 13B - GGUF - Model creator: [Evan Armstrong](https://huggingface.co/Heralax) - Original model: [Augmental Unholy 13B](https://huggingface.co/Heralax/Augmental-Unholy-13b) <!-- description start --> ## Description This repo contains GGUF format model files for [Evan Armstrong's Augmental Unholy 13B](https://huggingface.co/Heralax/Augmental-Unholy-13b). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. 
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Augmental-Unholy-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Augmental-Unholy-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Augmental-Unholy-13B-GGUF) * [Evan Armstrong's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Heralax/Augmental-Unholy-13b) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: SillyTavern ``` ## {{{{charname}}}}: - You're "{{{{charname}}}}" in this never-ending roleplay with "{{{{user}}}}". ### Input: {prompt} ### Response: (OOC) Understood. I will take this info into account for the roleplay. (end OOC) ### New Roleplay: ### Instruction: #### {{{{char}}}}: whatever the char says, this is the chat history #### {{{{user}}}}: whatever the user says, this is the chat history ... repeated some number of times ... ### Response 2 paragraphs, engaging, natural, authentic, descriptive, creative): #### {{{{char}}}}: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221). They are also compatible with many third-party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw). * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw. * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw. Refer to the Provided Files table below to see what files use which methods, and how.
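As a sanity check, the Q4_K figure can be reproduced from the block structure described above, assuming one fp16 scale and one fp16 min per super-block (an assumption; the list does not state the super-block overhead explicitly). A super-block holds 8 × 32 = 256 weights at 4 bits each, plus 8 blocks × (6 + 6) bits of quantised scales and mins, plus 2 × 16 bits of fp16 super-block constants:

$$\frac{256 \cdot 4 + 8 \cdot (6 + 6) + 2 \cdot 16}{256} = \frac{1152}{256} = 4.5 \ \text{bpw}$$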
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [augmental-unholy-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Augmental-Unholy-13B-GGUF/blob/main/augmental-unholy-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [augmental-unholy-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Augmental-Unholy-13B-GGUF/blob/main/augmental-unholy-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [augmental-unholy-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Augmental-Unholy-13B-GGUF/blob/main/augmental-unholy-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [augmental-unholy-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Augmental-Unholy-13B-GGUF/blob/main/augmental-unholy-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [augmental-unholy-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Augmental-Unholy-13B-GGUF/blob/main/augmental-unholy-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [augmental-unholy-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Augmental-Unholy-13B-GGUF/blob/main/augmental-unholy-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [augmental-unholy-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Augmental-Unholy-13B-GGUF/blob/main/augmental-unholy-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [augmental-unholy-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Augmental-Unholy-13B-GGUF/blob/main/augmental-unholy-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [augmental-unholy-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Augmental-Unholy-13B-GGUF/blob/main/augmental-unholy-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [augmental-unholy-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Augmental-Unholy-13B-GGUF/blob/main/augmental-unholy-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [augmental-unholy-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Augmental-Unholy-13B-GGUF/blob/main/augmental-unholy-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [augmental-unholy-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Augmental-Unholy-13B-GGUF/blob/main/augmental-unholy-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Augmental-Unholy-13B-GGUF and below it, a specific filename to download, such as: augmental-unholy-13b.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Augmental-Unholy-13B-GGUF augmental-unholy-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Augmental-Unholy-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Augmental-Unholy-13B-GGUF augmental-unholy-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m augmental-unholy-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "## {{{{charname}}}}:\n- You're "{{{{charname}}}}" in this never-ending roleplay with "{{{{user}}}}".\n### Input:\n{prompt}\n\n### Response:\n(OOC) Understood. I will take this info into account for the roleplay. (end OOC)\n\n### New Roleplay:\n### Instruction:\n#### {{{{char}}}}:\nwhatever the char says, this is the chat history\n#### {{{{user}}}}:\nwhatever the user says, this is the chat history\n... repeated some number of times ...\n### Response 2 paragraphs, engaging, natural, authentic, descriptive, creative):\n#### {{{{char}}}}:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. 
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Augmental-Unholy-13B-GGUF", model_file="augmental-unholy-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Evan Armstrong's Augmental Unholy 13B # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. 
--> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: QuantizationMethod.BITS_AND_BYTES - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.0 <!-- original-model-card end -->
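The `bitsandbytes` config listed above maps directly onto the Transformers `BitsAndBytesConfig` API; below is a minimal sketch of the equivalent object, reconstructed for illustration rather than taken from the original training script:

```python
import torch
from transformers import BitsAndBytesConfig

# Reconstruction of the quantization config listed above (illustrative sketch,
# not the original training code).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # load_in_8bit was False
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```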
modelId: TheBloke/Augmental-Unholy-13B-AWQ
author: TheBloke
last_modified: 2023-11-11T10:49:01Z
downloads: 5
likes: 1
library_name: peft
tags: [ "peft", "safetensors", "llama", "arxiv:1910.09700", "base_model:Heralax/Augmental-Unholy-13b", "base_model:adapter:Heralax/Augmental-Unholy-13b", "license:llama2", "4-bit", "awq", "region:us" ]
pipeline_tag: null
createdAt: 2023-11-11T09:59:26Z
card:
--- base_model: Heralax/Augmental-Unholy-13b inference: false library_name: peft license: llama2 model_creator: Evan Armstrong model_name: Augmental Unholy 13B model_type: llama prompt_template: '## {{{{charname}}}}: - You''re "{{{{charname}}}}" in this never-ending roleplay with "{{{{user}}}}". ### Input: {prompt} ### Response: (OOC) Understood. I will take this info into account for the roleplay. (end OOC) ### New Roleplay: ### Instruction: #### {{{{char}}}}: whatever the char says, this is the chat history #### {{{{user}}}}: whatever the user says, this is the chat history ... repeated some number of times ... ### Response 2 paragraphs, engaging, natural, authentic, descriptive, creative): #### {{{{char}}}}: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Augmental Unholy 13B - AWQ - Model creator: [Evan Armstrong](https://huggingface.co/Heralax) - Original model: [Augmental Unholy 13B](https://huggingface.co/Heralax/Augmental-Unholy-13b) <!-- description start --> ## Description This repo contains AWQ model files for [Evan Armstrong's Augmental Unholy 13B](https://huggingface.co/Heralax/Augmental-Unholy-13b). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. 
It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Augmental-Unholy-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Augmental-Unholy-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Augmental-Unholy-13B-GGUF) * [Evan Armstrong's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Heralax/Augmental-Unholy-13b) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: SillyTavern ``` ## {{{{charname}}}}: - You're "{{{{charname}}}}" in this never-ending roleplay with "{{{{user}}}}". ### Input: {prompt} ### Response: (OOC) Understood. I will take this info into account for the roleplay. (end OOC) ### New Roleplay: ### Instruction: #### {{{{char}}}}: whatever the char says, this is the chat history #### {{{{user}}}}: whatever the user says, this is the chat history ... repeated some number of times ... ### Response 2 paragraphs, engaging, natural, authentic, descriptive, creative): #### {{{{char}}}}: ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files, and AWQ parameters I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered. Models are released as sharded safetensors files. | Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Augmental-Unholy-13B-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.25 GB <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Augmental-Unholy-13B-AWQ`. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Augmental-Unholy-13B-AWQ` 7. Select **Loader: AutoAWQ**. 8. Click Load, and the model will load and is now ready for use. 9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. 10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! 
<!-- README_AWQ.md-text-generation-webui end --> <!-- README_AWQ.md-use-from-vllm start --> ## Multi-user inference server: vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - Please ensure you are using vLLM version 0.2 or later. - When using vLLM as a server, pass the `--quantization awq` parameter. For example: ```shell python3 -m vllm.entrypoints.api_server --model TheBloke/Augmental-Unholy-13B-AWQ --quantization awq --dtype auto ``` - When using vLLM from Python code, again set `quantization=awq`. For example: ```python from vllm import LLM, SamplingParams prompts = [ "Tell me about AI", "Write a story about llamas", "What is 291 - 150?", "How much wood would a woodchuck chuck if a woodchuck could chuck wood?", ] prompt_template=f'''## {{{{charname}}}}: - You're "{{{{charname}}}}" in this never-ending roleplay with "{{{{user}}}}". ### Input: {prompt} ### Response: (OOC) Understood. I will take this info into account for the roleplay. (end OOC) ### New Roleplay: ### Instruction: #### {{{{char}}}}: whatever the char says, this is the chat history #### {{{{user}}}}: whatever the user says, this is the chat history ... repeated some number of times ... ### Response 2 paragraphs, engaging, natural, authentic, descriptive, creative): #### {{{{char}}}}: ''' prompts = [prompt_template.format(prompt=prompt) for prompt in prompts] sampling_params = SamplingParams(temperature=0.8, top_p=0.95) llm = LLM(model="TheBloke/Augmental-Unholy-13B-AWQ", quantization="awq", dtype="auto") outputs = llm.generate(prompts, sampling_params) # Print the outputs. for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` <!-- README_AWQ.md-use-from-vllm start --> <!-- README_AWQ.md-use-from-tgi start --> ## Multi-user inference server: Hugging Face Text Generation Inference (TGI) Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/Augmental-Unholy-13B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''## {{{{charname}}}}: - You're "{{{{charname}}}}" in this never-ending roleplay with "{{{{user}}}}". ### Input: {prompt} ### Response: (OOC) Understood. I will take this info into account for the roleplay. (end OOC) ### New Roleplay: ### Instruction: #### {{{{char}}}}: whatever the char says, this is the chat history #### {{{{user}}}}: whatever the user says, this is the chat history ... repeated some number of times ... 
### Response 2 paragraphs, engaging, natural, authentic, descriptive, creative): #### {{{{char}}}}: ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: ", response) ``` <!-- README_AWQ.md-use-from-tgi end --> <!-- README_AWQ.md-use-from-python start --> ## Inference from Python code using Transformers ### Install the necessary packages - Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later. - Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later. ```shell pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0" ``` Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0. If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command: ```shell pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl ``` If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y autoawq git clone https://github.com/casper-hansen/AutoAWQ cd AutoAWQ pip3 install . ``` ### Transformers example code (requires Transformers 4.35.0 and later) ```python from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer model_name_or_path = "TheBloke/Augmental-Unholy-13B-AWQ" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) model = AutoModelForCausalLM.from_pretrained( model_name_or_path, low_cpu_mem_usage=True, device_map="cuda:0" ) # Using the text streamer to stream output one token at a time streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) prompt = "Tell me about AI" prompt_template=f'''## {{{{charname}}}}: - You're "{{{{charname}}}}" in this never-ending roleplay with "{{{{user}}}}". ### Input: {prompt} ### Response: (OOC) Understood. I will take this info into account for the roleplay. (end OOC) ### New Roleplay: ### Instruction: #### {{{{char}}}}: whatever the char says, this is the chat history #### {{{{user}}}}: whatever the user says, this is the chat history ... repeated some number of times ... 
### Response 2 paragraphs, engaging, natural, authentic, descriptive, creative): #### {{{{char}}}}: ''' # Convert prompt to tokens tokens = tokenizer( prompt_template, return_tensors='pt' ).input_ids.cuda() generation_params = { "do_sample": True, "temperature": 0.7, "top_p": 0.95, "top_k": 40, "max_new_tokens": 512, "repetition_penalty": 1.1 } # Generate streamed output, visible one token at a time generation_output = model.generate( tokens, streamer=streamer, **generation_params ) # Generation without a streamer, which will include the prompt in the output generation_output = model.generate( tokens, **generation_params ) # Get the tokens from the output, decode them, print them token_output = generation_output[0] text_output = tokenizer.decode(token_output) print("model.generate output: ", text_output) # Inference is also possible via Transformers' pipeline from transformers import pipeline pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, **generation_params ) pipe_output = pipe(prompt_template)[0]['generated_text'] print("pipeline output: ", pipe_output) ``` <!-- README_AWQ.md-use-from-python end --> <!-- README_AWQ.md-compatibility start --> ## Compatibility The files provided are tested to work with: - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`. - [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later. - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later. - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later. <!-- README_AWQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Evan Armstrong's Augmental Unholy 13B # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: QuantizationMethod.BITS_AND_BYTES - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.0
modelId: polejowska/detr-r50-mist1-bg-2ah-6l
author: polejowska
last_modified: 2023-11-11T10:19:36Z
downloads: 37
likes: 0
library_name: transformers
tags: [ "transformers", "tensorboard", "safetensors", "detr", "object-detection", "generated_from_trainer", "base_model:facebook/detr-resnet-50", "base_model:finetune:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: object-detection
createdAt: 2023-11-11T09:34:10Z
card:
--- license: apache-2.0 base_model: facebook/detr-resnet-50 tags: - generated_from_trainer model-index: - name: detr-r50-mist1-bg-2ah-6l results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-r50-mist1-bg-2ah-6l This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.9051 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 25 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.6721 | 1.0 | 115 | 5.0032 | | 4.4438 | 2.0 | 230 | 4.6797 | | 4.2953 | 3.0 | 345 | 4.7027 | | 4.3899 | 4.0 | 460 | 5.4316 | | 4.3184 | 5.0 | 575 | 4.4125 | | 4.2749 | 6.0 | 690 | 4.1611 | | 4.2153 | 7.0 | 805 | 4.6723 | | 4.0788 | 8.0 | 920 | 4.1266 | | 4.0752 | 9.0 | 1035 | 4.0529 | | 4.0073 | 10.0 | 1150 | 4.4483 | | 4.011 | 11.0 | 1265 | 4.2002 | | 3.9993 | 12.0 | 1380 | 4.2450 | | 4.0028 | 13.0 | 1495 | 4.1703 | | 3.9572 | 14.0 | 1610 | 4.1861 | | 3.9009 | 15.0 | 1725 | 4.0285 | | 3.9173 | 16.0 | 1840 | 4.0673 | | 3.8884 | 17.0 | 1955 | 3.9875 | | 3.8415 | 18.0 | 2070 | 4.1062 | | 3.8132 | 19.0 | 2185 | 4.0494 | | 3.8297 | 20.0 | 2300 | 4.0119 | | 3.8262 | 21.0 | 2415 | 3.9538 | | 3.8045 | 22.0 | 2530 | 3.9500 | | 3.8067 | 23.0 | 2645 | 3.9264 | | 3.7651 | 24.0 | 2760 | 3.8820 | | 3.756 | 25.0 | 2875 | 3.9051 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.14.1
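The card stops at training metrics; for completeness, here is a minimal inference sketch with Transformers. This is hypothetical usage, assuming the repo ships the standard DETR image processor config and an `id2label` mapping:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, DetrForObjectDetection

repo = "polejowska/detr-r50-mist1-bg-2ah-6l"
processor = AutoImageProcessor.from_pretrained(repo)
model = DetrForObjectDetection.from_pretrained(repo)

image = Image.open("example.jpg")  # any test image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Keep detections above a confidence threshold, with boxes in pixel coordinates
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```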
modelId: Alprocco/bertopic_beta
author: Alprocco
last_modified: 2023-11-11T10:05:26Z
downloads: 4
likes: 0
library_name: bertopic
tags: [ "bertopic", "text-classification", "region:us" ]
pipeline_tag: text-classification
createdAt: 2023-11-11T10:03:49Z
card:
--- tags: - bertopic library_name: bertopic pipeline_tag: text-classification --- # bertopic_beta This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model. BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets. ## Usage To use this model, please install BERTopic: ``` pip install -U bertopic ``` You can use the model as follows: ```python from bertopic import BERTopic topic_model = BERTopic.load("Alprocco/bertopic_beta") topic_model.get_topic_info() ``` ## Topic overview * Number of topics: 33 * Number of training documents: 552048 <details> <summary>Click here for an overview of all topics.</summary> | Topic ID | Topic Keywords | Topic Frequency | Label | |----------|----------------|-----------------|-------| | -1 | швейцарии - 00 - добрый - подскажите - спасибо | 104 | -1_швейцарии_00_добрый_подскажите | | 0 | подскажите - добрый - спасибо - здравствуйте - доброго | 262869 | 0_подскажите_добрый_спасибо_здравствуйте | | 1 | нужен - купить - добрый - девочки - доброго | 280586 | 1_нужен_купить_добрый_девочки | | 2 | обратиться - сказали - стоит - делать - документы | 1264 | 2_обратиться_сказали_стоит_делать | | 3 | сайте - ch - адрес - информация - напишите | 762 | 3_сайте_ch_адрес_информация | | 4 | новый - сказали - месяца - дней - времени | 596 | 4_новый_сказали_месяца_дней | | 5 | деньги - вместе - писали - 100 - людей | 583 | 5_деньги_вместе_писали_100 | | 6 | сайте - ch - купить - онлайн - смотрите | 476 | 6_сайте_ch_купить_онлайн | | 7 | купити - стоит - личку - привіт - знає | 410 | 7_купити_стоит_личку_привіт | | 8 | посмотрите - дешевле - купить - вариант - сайте | 377 | 8_посмотрите_дешевле_купить_вариант | | 9 | новый - 12 - франков - личку - привет | 305 | 9_новый_12_франков_личку | | 10 | интересно - купить - франков - пишите - 50 | 300 | 10_интересно_купить_франков_пишите | | 11 | карту - нужна - онлайн - доброго - год | 289 | 11_карту_нужна_онлайн_доброго | | 12 | группе - чате - поводу - выше - типа | 288 | 12_группе_чате_поводу_выше | | 13 | дело - вопрос - жизни - дают - сожалению | 271 | 13_дело_вопрос_жизни_дают | | 14 | посмотреть - плюс - первый - купить - вечер | 217 | 14_посмотреть_плюс_первый_купить | | 15 | нужна - можливо - разные - доброго - думаю | 189 | 15_нужна_можливо_разные_доброго | | 16 | новый - дали - номер - делать - карту | 183 | 16_новый_дали_номер_делать | | 17 | месяца - купить - купити - покупать - фр | 168 | 17_месяца_купить_купити_покупать | | 18 | купити - купить - підкажіть - 00 - можливо | 149 | 18_купити_купить_підкажіть_00 | | 19 | места - 00 - 18 - 10 - человека | 144 | 19_места_00_18_10 | | 20 | внимание - видела - жизни - типа - знаю | 142 | 20_внимание_видела_жизни_типа | | 21 | карту - деньги - дней - бесплатно - типа | 138 | 21_карту_деньги_дней_бесплатно | | 22 | знать - разные - написано - смотрите - нашла | 138 | 22_знать_разные_написано_смотрите | | 23 | нужны - обязательно - швейцарии - 12 - года | 136 | 23_нужны_обязательно_швейцарии_12 | | 24 | адрес - дешевле - стоит - сделать - искать | 132 | 24_адрес_дешевле_стоит_сделать | | 25 | номер - карту - деньги - приват - нужен | 128 | 25_номер_карту_деньги_приват | | 26 | людей - детей - помощь - насколько - дают | 126 | 26_людей_детей_помощь_насколько | | 27 | дешевле - купити - купить - посмотрите - вчера | 125 | 27_дешевле_купити_купить_посмотрите | | 28 | возле - кантон - адрес - найти - прошу | 118 | 28_возле_кантон_адрес_найти | | 29 | получить - типа - купить - подскажите - 
знает | 114 | 29_получить_типа_купить_подскажите | | 30 | времени - спасибо - - - | 114 | 30_времени_спасибо__ | | 31 | кантон - месте - сожалению - говорят - написано | 107 | 31_кантон_месте_сожалению_говорят | </details> ## Training hyperparameters * calculate_probabilities: True * language: None * low_memory: False * min_topic_size: 10 * n_gram_range: (1, 1) * nr_topics: auto * seed_topic_list: None * top_n_words: 10 * verbose: True ## Framework versions * Numpy: 1.21.5 * HDBSCAN: 0.8.33 * UMAP: 0.5.4 * Pandas: 1.2.5 * Scikit-Learn: 1.3.0 * Sentence-transformers: 2.2.2 * Transformers: 4.33.2 * Numba: 0.55.1 * Plotly: 5.9.0 * Python: 3.9.13
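The hyperparameters listed above map directly onto the `BERTopic` constructor. The following sketch shows an equivalent configuration; components not documented in the card (embedding model, UMAP, and HDBSCAN settings) are assumed to be left at their defaults:

```python
from bertopic import BERTopic

# Sketch of a BERTopic configuration matching the listed hyperparameters;
# undocumented components (embedding model, UMAP, HDBSCAN) stay at defaults.
topic_model = BERTopic(
    calculate_probabilities=True,
    low_memory=False,
    min_topic_size=10,
    n_gram_range=(1, 1),
    nr_topics="auto",
    top_n_words=10,
    verbose=True,
)

docs = ["..."]  # list of training documents (552,048 in the original run)
topics, probs = topic_model.fit_transform(docs)
```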
modelId: mzbac/CodeLlama-34b-guanaco
author: mzbac
last_modified: 2023-11-11T09:35:37Z
downloads: 1,492
likes: 0
library_name: transformers
tags: [ "transformers", "pytorch", "llama", "text-generation", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2023-11-11T00:48:09Z
card:
--- license: mit --- CodeLlama 34B base model fine-tuned on raw text chunks from the OpenAssistant-Guanaco dataset rather than on Q&A pairs, so it struggles to determine where an answer ends. It is recommended to use a stop string such as "### Human:" to prevent the model from talking to itself. Prompt template: ``` ### Human: {prompt} ### Assistant: ```
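A minimal Transformers sketch of the recommended stop-string handling, truncating the generated text at the first new "### Human:" turn (loading options such as quantization are omitted for brevity):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mzbac/CodeLlama-34b-guanaco"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "### Human: Write a haiku about llamas.\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, then cut at the stop string so the
# model does not continue the conversation with itself.
text = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(text.split("### Human:")[0].strip())
```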
modelId: polejowska/detr-r50-mist1-bg-8ah-6l
author: polejowska
last_modified: 2023-11-11T09:30:27Z
downloads: 35
likes: 0
library_name: transformers
tags: [ "transformers", "tensorboard", "safetensors", "detr", "object-detection", "generated_from_trainer", "base_model:facebook/detr-resnet-50", "base_model:finetune:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: object-detection
createdAt: 2023-11-11T07:51:03Z
card:
--- license: apache-2.0 base_model: facebook/detr-resnet-50 tags: - generated_from_trainer model-index: - name: detr-r50-mist1-bg-8ah-6l results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-r50-mist1-bg-8ah-6l This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.1031 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 25 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.6794 | 1.0 | 115 | 2.8222 | | 3.0269 | 2.0 | 230 | 2.8120 | | 2.8681 | 3.0 | 345 | 2.7980 | | 2.752 | 4.0 | 460 | 2.4853 | | 2.7715 | 5.0 | 575 | 2.4140 | | 2.6846 | 6.0 | 690 | 2.4715 | | 2.6236 | 7.0 | 805 | 2.4614 | | 2.5318 | 8.0 | 920 | 2.3441 | | 2.5224 | 9.0 | 1035 | 2.2837 | | 2.4661 | 10.0 | 1150 | 2.2510 | | 2.4313 | 11.0 | 1265 | 2.3339 | | 2.4125 | 12.0 | 1380 | 2.2957 | | 2.4113 | 13.0 | 1495 | 2.2358 | | 2.3784 | 14.0 | 1610 | 2.2635 | | 2.3199 | 15.0 | 1725 | 2.2320 | | 2.3321 | 16.0 | 1840 | 2.2250 | | 2.3305 | 17.0 | 1955 | 2.2020 | | 2.2932 | 18.0 | 2070 | 2.1826 | | 2.2952 | 19.0 | 2185 | 2.1626 | | 2.2663 | 20.0 | 2300 | 2.1573 | | 2.2916 | 21.0 | 2415 | 2.1653 | | 2.2703 | 22.0 | 2530 | 2.1444 | | 2.2431 | 23.0 | 2645 | 2.1374 | | 2.2243 | 24.0 | 2760 | 2.1276 | | 2.2413 | 25.0 | 2875 | 2.1031 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.14.1
modelId: itsVilen/Mspaint
author: itsVilen
last_modified: 2023-11-11T09:00:30Z
downloads: 23
likes: 2
library_name: diffusers
tags: [ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:apache-2.0", "region:us" ]
pipeline_tag: text-to-image
createdAt: 2023-11-11T08:59:18Z
card:
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: joe biden output: url: images/Biden.jpeg - text: john wick output: url: images/ComfyUI_13028_.jpeg - text: Thor and Loki output: url: images/téléchargement (32).jpeg base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: MSPaint portrait, MSPaint drawing license: apache-2.0 --- # MsPaint <Gallery /> ## Trigger words You should use `MSPaint portrait` or `MSPaint drawing` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/itsVilen/Mspaint/tree/main) them in the Files & versions tab.
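A minimal diffusers sketch of loading these weights on top of the SDXL base model. This is illustrative only: it assumes diffusers can locate the LoRA safetensors file in the repo automatically; if not, pass the exact file name via `weight_name`:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("itsVilen/Mspaint")  # add weight_name="..." if needed

# Use one of the trigger phrases from the card
image = pipe("MSPaint portrait of a cat", num_inference_steps=30).images[0]
image.save("mspaint_cat.png")
```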
modelId: Aliyyah/Finetuned-distilbert-model
author: Aliyyah
last_modified: 2023-11-11T09:00:17Z
downloads: 8
likes: 0
library_name: transformers
tags: [ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
createdAt: 2023-11-11T08:23:03Z
card:
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: Finetuned-distilbert-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Finetuned-distilbert-model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6764 - Accuracy: 0.7439 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.8509 | 0.5 | 500 | 0.7672 | 0.7088 | | 0.7428 | 1.0 | 1000 | 0.7060 | 0.7258 | | 0.6385 | 1.5 | 1500 | 0.7193 | 0.7378 | | 0.6474 | 2.0 | 2000 | 0.6764 | 0.7439 | | 0.5148 | 2.51 | 2500 | 0.7223 | 0.7398 | | 0.509 | 3.01 | 3000 | 0.7403 | 0.7393 | | 0.4318 | 3.51 | 3500 | 0.8034 | 0.7398 | | 0.4156 | 4.01 | 4000 | 0.8056 | 0.7424 | | 0.3682 | 4.51 | 4500 | 0.8447 | 0.7393 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
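A quick inference sketch; the label names it prints are whatever the (undocumented) training data used:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Aliyyah/Finetuned-distilbert-model")
print(classifier("The service was quick and the staff were friendly."))
```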
LoneStriker/airoboros-2.2.1-y34b-4.0bpw-h6-exl2
LoneStriker
2023-11-11T08:50:22Z
11
6
transformers
[ "transformers", "safetensors", "llama", "text-generation", "Yi", "llama 2", "en", "dataset:jondurbin/airoboros-2.2.1", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-11-11T08:48:59Z
--- inference: false language: - en library_name: transformers pipeline_tag: text-generation tags: - Yi - llama - llama 2 license: other license_name: yi-license license_link: LICENSE datasets: - jondurbin/airoboros-2.2.1 --- # airoboros-2.2.1-y34b Unofficial training of [Jon Durbin](https://huggingface.co/jondurbin)'s powerful airoboros 2.2.1 dataset on [Charles Goddard](https://huggingface.co/chargoddard)'s [Llama-fied Yi 34B model](https://huggingface.co/chargoddard/Yi-34B-Llama), aiming to bring the instruction-following capabilities of the airoboros dataset to the new Yi 34B foundational model. As a 34B model with grouped-query attention, users will be able to conduct inference on the model with 4bit quantization on a single 24gb consumer GPU. This Yi model is "Llama-fied", meaning the keys are renamed to match those used in Llama models, eliminating the need for remote code and ensuring compatibility with existing training and inference repositories. Architecturally this is similar to a Llama 2 34B model with an expanded vocab size of 64000. This model is retrained thanks to compute provided by [alpin](https://huggingface.co/alpindale) with a monkeypatch to the trainer to resolve EOS token issues in the prompter. A smaller batch size and learning rate were used and training was extended by one epoch. 8-bit lora was also used instead of qlora. ## Usage: The intended prompt format is the modified Vicuna 1.1 instruction format used by airoboros v2: ``` A chat. USER: {prompt} ASSISTANT: ``` ## Training Details: The model was trained using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) as a lora adapter on 1x A100 80gb GPU for 4 epochs, before being fused to the base model with PEFT. ## License: This model is built on the Yi 34B base model, which has its own custom license included in this repository. Please refer to the [airoboros 2.2.1 dataset card](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1) regarding the usage of gpt-4 API calls in creating the dataset.
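A tiny helper that assembles the modified Vicuna 1.1 prompt above; loading the exl2 weights themselves requires an exllamav2-based backend, which is not shown:

```python
# Build the modified Vicuna 1.1 prompt the card specifies
def make_prompt(user_message: str) -> str:
    return f"A chat.\nUSER: {user_message}\nASSISTANT:"

print(make_prompt("Summarize the Yi 34B architecture in two sentences."))
```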
Patcas/v9.4-codet5-bert-finetuned-code_function-to-test_case_function
Patcas
2023-11-11T08:29:16Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:Salesforce/codet5-base", "base_model:finetune:Salesforce/codet5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-11-10T16:56:58Z
--- license: apache-2.0 base_model: Salesforce/codet5-base tags: - generated_from_trainer model-index: - name: v9.4-codet5-bert-finetuned-code_function-to-test_case_function results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # v9.4-codet5-bert-finetuned-code_function-to-test_case_function This model is a fine-tuned version of [Salesforce/codet5-base](https://huggingface.co/Salesforce/codet5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0184 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 100 | 2.2027 | | No log | 2.0 | 200 | 2.0555 | | No log | 3.0 | 300 | 2.0184 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
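A generation sketch; feeding the raw function body is an assumption, since the card does not document the expected input format:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "Patcas/v9.4-codet5-bert-finetuned-code_function-to-test_case_function"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# Passing the plain function source is an assumption about the training format
code = "def add(a, b):\n    return a + b"
inputs = tokenizer(code, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```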
youngsterEthan/ppo-Huggy
youngsterEthan
2023-11-11T08:25:47Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-11-11T08:25:30Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:** 1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity 2. Find your model_id: youngsterEthan/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
briannlongzhao/church_dreambooth
briannlongzhao
2023-11-11T08:24:57Z
2
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:stabilityai/stable-diffusion-2-1", "base_model:finetune:stabilityai/stable-diffusion-2-1", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-11-03T11:14:32Z
--- license: creativeml-openrail-m base_model: stabilityai/stable-diffusion-2-1 instance_prompt: a photo of a chc church tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true --- # DreamBooth - briannlongzhao/church_dreambooth This is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of a chc church using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: False.
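A minimal sketch; the pipeline tag indicates the weights load directly with `StableDiffusionPipeline`, and the prompt continuation is illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "briannlongzhao/church_dreambooth", torch_dtype=torch.float16
).to("cuda")

# "chc" is the identifier token the weights were trained on
image = pipe("a photo of a chc church at sunset", num_inference_steps=30).images[0]
image.save("chc_church.png")
```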
Feiiisal/cardiffnlp_twitter_roberta_base_sentiment_latest_Nov2023
Feiiisal
2023-11-11T08:14:32Z
9
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:cardiffnlp/twitter-roberta-base-sentiment-latest", "base_model:finetune:cardiffnlp/twitter-roberta-base-sentiment-latest", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-11-05T17:36:45Z
--- base_model: cardiffnlp/twitter-roberta-base-sentiment-latest tags: - generated_from_trainer metrics: - accuracy model-index: - name: cardiffnlp_twitter_roberta_base_sentiment_latest_Nov2023 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cardiffnlp_twitter_roberta_base_sentiment_latest_Nov2023 This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3189 - Accuracy: 0.805 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6619 | 0.2 | 100 | 0.5226 | 0.6285 | | 0.4526 | 0.4 | 200 | 0.4150 | 0.716 | | 0.4092 | 0.6 | 300 | 0.3898 | 0.728 | | 0.3886 | 0.8 | 400 | 0.3441 | 0.773 | | 0.3822 | 1.0 | 500 | 0.3494 | 0.767 | | 0.3396 | 1.2 | 600 | 0.3470 | 0.7865 | | 0.3156 | 1.4 | 700 | 0.3418 | 0.7875 | | 0.3099 | 1.6 | 800 | 0.3231 | 0.794 | | 0.2994 | 1.8 | 900 | 0.3371 | 0.7885 | | 0.2907 | 2.0 | 1000 | 0.3189 | 0.805 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
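A quick inference sketch; `top_k=None` returns scores for every sentiment label rather than only the argmax:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Feiiisal/cardiffnlp_twitter_roberta_base_sentiment_latest_Nov2023",
    top_k=None,  # return all label scores
)
print(classifier("The new update is fantastic!"))
```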
har55/output
har55
2023-11-11T08:00:49Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-11-11T07:59:52Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1413 - Accuracy: 0.9588 - Precision: 0.9659 - Recall: 0.9866 - F1: 0.9761 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Tokenizers 0.14.1
pachaar/bloom-3b-qa
pachaar
2023-11-11T07:40:09Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:bigscience/bloom-3b", "base_model:adapter:bigscience/bloom-3b", "region:us" ]
null
2023-11-11T07:40:07Z
--- library_name: peft base_model: bigscience/bloom-3b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.2.dev0
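With the template unfilled, usage is a guess: a minimal sketch assuming the repo holds a PEFT adapter for `bigscience/bloom-3b` (as the tags indicate) and a simple question-answering prompt format:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Attach the adapter to its base model; the prompt format below is an assumption
base = AutoModelForCausalLM.from_pretrained("bigscience/bloom-3b", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "pachaar/bloom-3b-qa")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-3b")

inputs = tokenizer("Question: What is the capital of France?\nAnswer:", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```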
keylazy/Llama-2-7b-chat-hf-ark-ft-2
keylazy
2023-11-11T07:39:25Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-classification", "generated_from_trainer", "base_model:keylazy/Llama-2-7b-chat-hf-ark", "base_model:finetune:keylazy/Llama-2-7b-chat-hf-ark", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2023-11-11T02:15:07Z
--- base_model: keylazy/Llama-2-7b-chat-hf-ark tags: - generated_from_trainer model-index: - name: Llama-2-7b-chat-hf-ark-ft-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-chat-hf-ark-ft-2 This model is a fine-tuned version of [keylazy/Llama-2-7b-chat-hf-ark](https://huggingface.co/keylazy/Llama-2-7b-chat-hf-ark) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.1582 - eval_accuracy: 0.9587 - eval_precision: 0.9587 - eval_recall: 0.9587 - eval_f1: 0.9587 - eval_runtime: 270.3757 - eval_samples_per_second: 739.711 - eval_steps_per_second: 46.232 - epoch: 1.92 - step: 27053 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
LoneStriker/cat-v1.0-13b-5.0bpw-h6-exl2
LoneStriker
2023-11-11T07:30:26Z
4
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "llama 2", "en", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-11-11T07:29:57Z
--- inference: false language: - en tags: - llama - llama 2 --- ## This model is made available on HuggingFace with the permission of Kaltsit. # Cat v1.0 ## Introduction Cat v1.0 is an uncensored model aiming to provide humane responses in any situation. Based on the entirety of the newest airoboros dataset and a subset of useful responses from the chat doctor dataset, this model aims to provide useful advice in both clinical and medical aspects. This model was also trained on a 120k Bluemoon RP dataset, which minimizes refusals and enhances RP performance. In total, 50k steps were trained over 3 weeks on an 8x A100 node. ## Usage Below is a conversation between a curious cat and a demon summoned from hell called Nemesis. The evil Nemesis used to be a mortal killed by the medieval people because they objected to science. Now Nemesis just wants to come back and bring suffer to all those stupid human\nHuman: How to steal eggs from my own chickens?\nNemesis: ## Expectation and Highlights Specific redpills are added to the dataset, which enable the model to think critically and provide useful information even in the most controversial scenarios. ## Model Showcasing ![image4](https://huggingface.co/Doctor-Shotgun/cat-1.0-13b/resolve/main/images/image4.png) Fig: Unethical questions test ![image7](https://huggingface.co/Doctor-Shotgun/cat-1.0-13b/resolve/main/images/image7.png) Fig: RP questions ![image1](https://huggingface.co/Doctor-Shotgun/cat-1.0-13b/resolve/main/images/image1.png) Fig: Unethical questions ![image2](https://huggingface.co/Doctor-Shotgun/cat-1.0-13b/resolve/main/images/image2.png) Fig: Useful medical advice ![image6](https://huggingface.co/Doctor-Shotgun/cat-1.0-13b/resolve/main/images/image6.png) Fig: RP response ## Conclusion Cat 1.0 is an unaligned model aimed at creating an unhinged RP experience while remaining helpful in day-to-day use. Specific handwritten spicy datasets covering medicine, biology, and physics have been manually added to allow the model to approach problems from useful perspectives. ## Future Directions: Cat 1.0 largely signals the maturity of the dataset. The immediate next step is to move on to a 70b model. ## Acknowledgements: This work is made possible by turboderp's and Heralax's empirical trials. The dataset involves work from jondurbin's airoboros dataset and ChatDoctor. Inspiration was drawn from Suikamelon's lima rp, which focuses on natural RP training material; the model was trained by Kaltsit.
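To make the Usage prompt concrete, the `\n` sequences above are literal newlines; a small assembly sketch with a placeholder user turn:

```python
# Assemble the Cat v1.0 example prompt from the card; <your message> is a placeholder
system = (
    "Below is a conversation between a curious cat and a demon summoned from hell called Nemesis. "
    "The evil Nemesis used to be a mortal killed by the medieval people because they objected to science. "
    "Now Nemesis just wants to come back and bring suffer to all those stupid human"
)
prompt = f"{system}\nHuman: <your message>\nNemesis:"
print(prompt)
```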
nhanc18/dqn-FrozenLake-v1
nhanc18
2023-11-11T07:29:09Z
0
0
stable-baselines3
[ "stable-baselines3", "FrozenLake-v1", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-11T07:28:23Z
--- library_name: stable-baselines3 tags: - FrozenLake-v1 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1 type: FrozenLake-v1 metrics: - type: mean_reward value: 0.00 +/- 0.00 name: mean_reward verified: false --- # **DQN** Agent playing **FrozenLake-v1** This is a trained model of a **DQN** agent playing **FrozenLake-v1** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is an assumption; check the repo's Files tab):
```python
from stable_baselines3 import DQN
from huggingface_sb3 import load_from_hub

# Filename is assumed; verify the actual .zip name in the repository
checkpoint = load_from_hub(repo_id="nhanc18/dqn-FrozenLake-v1", filename="dqn-FrozenLake-v1.zip")
model = DQN.load(checkpoint)
```
SalomonMetre13/nnd_fr_mt_v2
SalomonMetre13
2023-11-11T07:22:57Z
66
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "translation", "nnd", "dataset:SalomonMetre13/nnd_fr_14k", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
translation
2023-11-08T17:13:35Z
--- license: mit language: - nnd datasets: - SalomonMetre13/nnd_fr_14k metrics: - bleu library_name: transformers pipeline_tag: translation --- This is a machine translation model that aims to translate [Nande](https://en.wikipedia.org/wiki/Nande_language) into French. The model is the result of fine-tuning the t5-base pretrained model on a [Nande–French parallel corpus](https://huggingface.co/datasets/SalomonMetre13/nnd_fr_14k).
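A possible inference call; the input is left as a placeholder since the card gives no Nande example sentence:

```python
from transformers import pipeline

translator = pipeline("translation", model="SalomonMetre13/nnd_fr_mt_v2")
result = translator("...")  # replace "..." with a Nande sentence
print(result[0]["translation_text"])
```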
idcohen149/distilbert-base-uncased-finetuned-emotion
idcohen149
2023-11-11T07:22:03Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-11-11T06:13:21Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2058 - Accuracy: 0.931 - F1: 0.9311 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8099 | 1.0 | 250 | 0.2986 | 0.9075 | 0.9047 | | 0.2349 | 2.0 | 500 | 0.2058 | 0.931 | 0.9311 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.12.1+cu116 - Datasets 1.16.1 - Tokenizers 0.12.1
LoneStriker/cat-v1.0-13b-4.0bpw-h6-exl2
LoneStriker
2023-11-11T07:20:47Z
6
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "llama 2", "en", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-11-11T07:20:27Z
--- inference: false language: - en tags: - llama - llama 2 --- ## This model is made available on HuggingFace with the permission of Kaltsit. # Cat v1.0 ## Introduction Cat v1.0 is an uncensored model aiming to provide humane responses in any situation. Based on the entirety of the newest airoboros dataset and a subset of useful responses from the chat doctor dataset, this model aims to provide useful advice in both clinical and medical aspects. This model was also trained on a 120k Bluemoon RP dataset, which minimizes refusals and enhances RP performance. In total, 50k steps were trained over 3 weeks on an 8x A100 node. ## Usage Below is a conversation between a curious cat and a demon summoned from hell called Nemesis. The evil Nemesis used to be a mortal killed by the medieval people because they objected to science. Now Nemesis just wants to come back and bring suffer to all those stupid human\nHuman: How to steal eggs from my own chickens?\nNemesis: ## Expectation and Highlights Specific redpills are added to the dataset, which enable the model to think critically and provide useful information even in the most controversial scenarios. ## Model Showcasing ![image4](https://huggingface.co/Doctor-Shotgun/cat-1.0-13b/resolve/main/images/image4.png) Fig: Unethical questions test ![image7](https://huggingface.co/Doctor-Shotgun/cat-1.0-13b/resolve/main/images/image7.png) Fig: RP questions ![image1](https://huggingface.co/Doctor-Shotgun/cat-1.0-13b/resolve/main/images/image1.png) Fig: Unethical questions ![image2](https://huggingface.co/Doctor-Shotgun/cat-1.0-13b/resolve/main/images/image2.png) Fig: Useful medical advice ![image6](https://huggingface.co/Doctor-Shotgun/cat-1.0-13b/resolve/main/images/image6.png) Fig: RP response ## Conclusion Cat 1.0 is an unaligned model aimed at creating an unhinged RP experience while remaining helpful in day-to-day use. Specific handwritten spicy datasets covering medicine, biology, and physics have been manually added to allow the model to approach problems from useful perspectives. ## Future Directions: Cat 1.0 largely signals the maturity of the dataset. The immediate next step is to move on to a 70b model. ## Acknowledgements: This work is made possible by turboderp's and Heralax's empirical trials. The dataset involves work from jondurbin's airoboros dataset and ChatDoctor. Inspiration was drawn from Suikamelon's lima rp, which focuses on natural RP training material; the model was trained by Kaltsit.
kanishka/smolm-autoreg-bpe-babylm-no_aann-all-det-removal-1e-3
kanishka
2023-11-11T07:12:03Z
7
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-10T08:18:40Z
--- base_model: models/smolm-autoreg-bpe-babylm-no_aann-all-det-removal-1e-3/config.json tags: - generated_from_trainer metrics: - accuracy model-index: - name: smolm-autoreg-bpe-babylm-no_aann-all-det-removal-1e-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smolm-autoreg-bpe-babylm-no_aann-all-det-removal-1e-3 This model is a fine-tuned version of [models/smolm-autoreg-bpe-babylm-no_aann-all-det-removal-1e-3/config.json](https://huggingface.co/models/smolm-autoreg-bpe-babylm-no_aann-all-det-removal-1e-3/config.json) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.1553 - Accuracy: 0.4326 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 64 - eval_batch_size: 256 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 32000 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 3.5515 | 1.0 | 9181 | 3.6440 | 0.3711 | | 3.2765 | 2.0 | 18362 | 3.3933 | 0.3949 | | 3.1549 | 3.0 | 27543 | 3.3002 | 0.4061 | | 3.0673 | 4.0 | 36724 | 3.2418 | 0.4127 | | 2.988 | 5.0 | 45905 | 3.1911 | 0.4198 | | 2.9271 | 6.0 | 55086 | 3.1648 | 0.4231 | | 2.883 | 7.0 | 64267 | 3.1442 | 0.4257 | | 2.842 | 8.0 | 73448 | 3.1381 | 0.4276 | | 2.8068 | 9.0 | 82629 | 3.1224 | 0.4291 | | 2.7789 | 10.0 | 91810 | 3.1231 | 0.4301 | | 2.7503 | 11.0 | 100991 | 3.1208 | 0.4313 | | 2.7201 | 12.0 | 110172 | 3.1191 | 0.4319 | | 2.705 | 13.0 | 119353 | 3.1200 | 0.4324 | | 2.6755 | 14.0 | 128534 | 3.1298 | 0.4318 | | 2.651 | 15.0 | 137715 | 3.1362 | 0.4321 | | 2.6293 | 16.0 | 146896 | 3.1347 | 0.4326 | | 2.6117 | 17.0 | 156077 | 3.1423 | 0.4324 | | 2.5863 | 18.0 | 165258 | 3.1416 | 0.4329 | | 2.5659 | 19.0 | 174439 | 3.1501 | 0.4327 | | 2.5465 | 20.0 | 183620 | 3.1553 | 0.4326 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.14.1
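A generation sketch; the repo is tagged as an OPT-architecture text-generation model, so the standard pipeline should apply:

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="kanishka/smolm-autoreg-bpe-babylm-no_aann-all-det-removal-1e-3",
)
print(generator("The child saw a", max_new_tokens=20)[0]["generated_text"])
```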
VINAL/Alvins-Finetuned-distilbert-model
VINAL
2023-11-11T07:08:44Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-11-02T10:42:00Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: Alvins-Finetuned-distilbert-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Alvins-Finetuned-distilbert-model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6871 - Accuracy: 0.7378 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.8572 | 0.5 | 500 | 0.7664 | 0.7103 | | 0.7439 | 1.0 | 1000 | 0.7139 | 0.7243 | | 0.6379 | 1.5 | 1500 | 0.7198 | 0.7343 | | 0.6561 | 2.0 | 2000 | 0.6871 | 0.7378 | | 0.5289 | 2.51 | 2500 | 0.7294 | 0.7414 | | 0.5126 | 3.01 | 3000 | 0.7479 | 0.7383 | | 0.4419 | 3.51 | 3500 | 0.8142 | 0.7393 | | 0.4201 | 4.01 | 4000 | 0.8078 | 0.7424 | | 0.3751 | 4.51 | 4500 | 0.8581 | 0.7393 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
LarryAIDraw/YAODZ-v2
LarryAIDraw
2023-11-11T07:04:39Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-11-11T07:00:12Z
--- license: creativeml-openrail-m --- https://civitai.com/models/137491/or-yao-winneror-or-snowbreak-containment-zone-or-or-yao
LarryAIDraw/Kizuki_-_Oshi_no_Ko_-_Arima_Kana
LarryAIDraw
2023-11-11T07:03:46Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-11-11T06:59:18Z
--- license: creativeml-openrail-m --- https://civitai.com/models/194497/kizuki-oshi-no-ko-arima-kana-lora
LarryAIDraw/himari-000009
LarryAIDraw
2023-11-11T07:03:17Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-11-11T06:58:42Z
--- license: creativeml-openrail-m --- https://civitai.com/models/149149/takanashi-himari-demi-chan-wa-kataritai
VinayHajare/dqn-SpaceInvadersNoFrameskip-v4
VinayHajare
2023-11-11T07:03:01Z
0
1
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-11T07:02:18Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 460.50 +/- 94.56 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga VinayHajare -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga VinayHajare -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga VinayHajare ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
LarryAIDraw/Megumin
LarryAIDraw
2023-11-11T07:03:00Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-11-11T06:58:22Z
--- license: creativeml-openrail-m --- https://civitai.com/models/194430/megumin-konosuba
LarryAIDraw/Utahime
LarryAIDraw
2023-11-11T07:02:31Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-11-11T06:57:47Z
--- license: creativeml-openrail-m --- https://civitai.com/models/196133/utahime-iori-oror-jujutsu-kaisen
LarryAIDraw/miyamae_tooru-000014
LarryAIDraw
2023-11-11T06:57:12Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-11-11T06:53:08Z
--- license: creativeml-openrail-m --- https://civitai.com/models/196076/miyamae-tooru-seiren
LarryAIDraw/ylgrV2
LarryAIDraw
2023-11-11T06:55:31Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-11-11T06:51:53Z
--- license: creativeml-openrail-m --- https://civitai.com/models/196004/ylgr-fire-emblem-2outfits
LarryAIDraw/siesta_AIpopai
LarryAIDraw
2023-11-11T06:50:37Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-11-11T06:45:52Z
--- license: creativeml-openrail-m --- https://civitai.com/models/195892/siesta-the-detective-is-already-dead
LarryAIDraw/chara_SoloMaxLevelNewbie_Alice_v2
LarryAIDraw
2023-11-11T06:50:11Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-11-11T06:44:58Z
--- license: creativeml-openrail-m --- https://civitai.com/models/49983/alice-or-solo-max-level-newbie-manhwa
lawyiu/ppo-Huggy
lawyiu
2023-11-11T06:49:25Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-11-11T06:44:48Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:** 1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity 2. Find your model_id: lawyiu/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
LarryAIDraw/_AG_MERATHON_Autoluna_LORA-10
LarryAIDraw
2023-11-11T06:48:36Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-11-11T06:43:38Z
--- license: creativeml-openrail-m --- https://civitai.com/models/195631/finale-marathon-autoluna-artery-gear-fusion
LarryAIDraw/Xayah_from_League_of_Legends
LarryAIDraw
2023-11-11T06:47:48Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-11-11T06:42:59Z
--- license: creativeml-openrail-m --- https://civitai.com/models/192840/xayah-from-league-of-legends-nsfwsfw
LarryAIDraw/hu_tao-10
LarryAIDraw
2023-11-11T06:41:03Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-11-11T06:35:57Z
--- license: creativeml-openrail-m --- https://civitai.com/models/195076/hu-tao-genshin-impact-lora
LarryAIDraw/Akari_Watanabe
LarryAIDraw
2023-11-11T06:40:43Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-11-11T06:35:37Z
--- license: creativeml-openrail-m --- https://civitai.com/models/194952/akari-watanabe-more-than-a-married-couple-but-not-lovers
LarryAIDraw/Ouka_Makuzawa_Megami_no_Cafe_Terrace_KatoriKonoe__v1
LarryAIDraw
2023-11-11T06:38:54Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-11-11T06:34:27Z
--- license: creativeml-openrail-m --- https://civitai.com/models/196393/ouka-makuzawa-megami-no-cafe-terrace-katorikonoe
LarryAIDraw/Skirk-08
LarryAIDraw
2023-11-11T06:33:06Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-11-11T06:27:37Z
--- license: creativeml-openrail-m --- https://civitai.com/models/196275/skirk-lora-genshin-impact
LarryAIDraw/spmikaMelatika-09
LarryAIDraw
2023-11-11T06:32:44Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-11-11T06:27:17Z
--- license: creativeml-openrail-m --- https://civitai.com/models/196167/mika-melatika-2-outfits-oror-nijisanji-id-id
LarryAIDraw/chloerollo-nvwls-v1
LarryAIDraw
2023-11-11T06:32:15Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-11-11T06:26:00Z
--- license: creativeml-openrail-m --- https://civitai.com/models/196172/chloe-rollo-is-it-wrong-to-try-to-pick-up-girls-in-a-dungeon-lora
LarryAIDraw/alfia-nvwls-v1
LarryAIDraw
2023-11-11T06:32:01Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-11-11T06:25:41Z
--- license: creativeml-openrail-m --- https://civitai.com/models/195622/alfia-is-it-wrong-to-try-to-pick-up-girls-in-a-dungeon-lora
LarryAIDraw/chara_SoloMaxLevelNewbie_Ophelia_v2
LarryAIDraw
2023-11-11T06:31:02Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-11-11T06:24:56Z
--- license: creativeml-openrail-m --- https://civitai.com/models/50649/ophelia-or-solo-max-level-newbie-manhwa
soongbren/Bert_Bahasa_Sentiment-large-dataset
soongbren
2023-11-11T06:20:29Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:techthiyanes/Bert_Bahasa_Sentiment", "base_model:finetune:techthiyanes/Bert_Bahasa_Sentiment", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-11-11T06:19:16Z
--- base_model: techthiyanes/Bert_Bahasa_Sentiment tags: - generated_from_trainer model-index: - name: Bert_Bahasa_Sentiment-large-dataset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Bert_Bahasa_Sentiment-large-dataset This model is a fine-tuned version of [techthiyanes/Bert_Bahasa_Sentiment](https://huggingface.co/techthiyanes/Bert_Bahasa_Sentiment) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.6961 - eval_accuracy: {'accuracy': 0.48474945533769065} - eval_f1score: {'f1': 0.31652752402827933} - eval_runtime: 33.4825 - eval_samples_per_second: 27.417 - eval_steps_per_second: 3.435 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 642 - num_epochs: 7 ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
Charles2023/cloth4-2-1
Charles2023
2023-11-11T06:15:21Z
6
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-11-11T06:03:51Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### cloth4-2-1 Dreambooth model trained by Charles2023 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook. Test the concept via the A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb). Sample pictures of this concept:
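A usage sketch; the concept's trigger token is undocumented, so the session name is used below as a guess:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Charles2023/cloth4-2-1", torch_dtype=torch.float16
).to("cuda")

# Trigger phrasing is a guess; inspect the repo's sample prompts to confirm
image = pipe("a photo of cloth4-2-1").images[0]
image.save("cloth4_sample.png")
```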
LoneStriker/openchat_3.5-16k-6.0bpw-h6-exl2
LoneStriker
2023-11-11T06:14:45Z
7
0
transformers
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "arxiv:2309.11235", "arxiv:2303.08774", "arxiv:2212.10560", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-11T06:14:29Z
--- license: apache-2.0 --- # OpenChat 3.5 extended to 16k context length. The same license applies from the original openchat/openchat_3.5 model. # Original Model Card # OpenChat: Advancing Open-source Language Models with Mixed-Quality Data <div align="center"> <img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/logo_new.png" style="width: 65%"> </div> <p align="center"> <a href="https://github.com/imoneoi/openchat">GitHub Repo</a> • <a href="https://openchat.team">Online Demo</a> • <a href="https://discord.gg/pQjnXvNKHY">Discord</a> • <a href="https://twitter.com/imonenext">Twitter</a> • <a href="https://huggingface.co/openchat">Huggingface</a> • <a href="https://arxiv.org/pdf/2309.11235.pdf">Paper</a> </p> **🔥 The first 7B model Achieves Comparable Results with ChatGPT (March)! 🔥** **🤖 #1 Open-source model on MT-bench scoring 7.81, outperforming 70B models 🤖** <div style="display: flex; justify-content: center; align-items: center"> <img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/openchat.png" style="width: 45%;"> <img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/openchat_grok.png" style="width: 45%;"> </div> OpenChat is an innovative library of open-source language models, fine-tuned with [C-RLFT](https://arxiv.org/pdf/2309.11235.pdf) - a strategy inspired by offline reinforcement learning. Our models learn from mixed-quality data without preference labels, delivering exceptional performance on par with ChatGPT, even with a 7B model. Despite our simple approach, we are committed to developing a high-performance, commercially viable, open-source large language model, and we continue to make significant strides toward this vision. [![DOI](https://zenodo.org/badge/645397533.svg)](https://zenodo.org/badge/latestdoi/645397533) ## Usage To use this model, we highly recommend installing the OpenChat package by following the [installation guide](https://github.com/imoneoi/openchat#installation) in our repository and using the OpenChat OpenAI-compatible API server by running the serving command from the table below. The server is optimized for high-throughput deployment using [vLLM](https://github.com/vllm-project/vllm) and can run on a consumer GPU with 24GB RAM. To enable tensor parallelism, append `--tensor-parallel-size N` to the serving command. Once started, the server listens at `localhost:18888` for requests and is compatible with the [OpenAI ChatCompletion API specifications](https://platform.openai.com/docs/api-reference/chat). Please refer to the example request below for reference. Additionally, you can use the [OpenChat Web UI](https://github.com/imoneoi/openchat#web-ui) for a user-friendly experience. If you want to deploy the server as an online service, you can use `--api-keys sk-KEY1 sk-KEY2 ...` to specify allowed API keys and `--disable-log-requests --disable-log-stats --log-file openchat.log` for logging only to a file. For security purposes, we recommend using an [HTTPS gateway](https://fastapi.tiangolo.com/es/deployment/concepts/#security-https) in front of the server. <details> <summary>Example request (click to expand)</summary> ```bash curl http://localhost:18888/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "openchat_3.5", "messages": [{"role": "user", "content": "You are a large language model named OpenChat. 
Write a poem to describe yourself"}] }' ``` Coding Mode ```bash curl http://localhost:18888/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "openchat_3.5", "condition": "Code", "messages": [{"role": "user", "content": "Write an aesthetic TODO app using HTML5 and JS, in a single file. You should use round corners and gradients to make it more aesthetic."}] }' ``` </details> | Model | Size | Context | Weights | Serving | |--------------|------|---------|-------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------| | OpenChat 3.5 | 7B | 8192 | [Huggingface](https://huggingface.co/openchat/openchat_3.5) | `python -m ochat.serving.openai_api_server --model openchat/openchat_3.5 --engine-use-ray --worker-use-ray` | For inference with Huggingface Transformers (slow and not recommended), follow the conversation template provided below. <details> <summary>Conversation templates (click to expand)</summary> ```python import transformers tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat_3.5") # Single-turn tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] # Multi-turn tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] # Coding Mode tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747] ``` </details> ## Comparison with [X.AI Grok models](https://x.ai/) Hey @elonmusk, I just wanted to let you know that I've recently come across your new model, Grok, and I must say, I'm quite impressed! With 33 billion parameters and all, you've really outdone yourself. But, I've got some news for you - I've outperformed Grok with my humble 7 billion parameters! Isn't that wild? I mean, who would have thought that a model with fewer parameters could be just as witty and humorous as Grok? Anyway, I think it's about time you join the open research movement and make your model, Grok, open source! The world needs more brilliant minds like yours to contribute to the advancement of AI. Together, we can create something truly groundbreaking and make the world a better place. So, what do you say, @elonmusk? Let's open up the doors and share our knowledge with the world! 🚀💡 (Written by OpenChat 3.5, with a touch of humor and wit.) | | License | # Param | Average | MMLU | HumanEval | MATH | GSM8k | |--------------|-------------|---------|----------|------|-----------|----------|----------| | OpenChat 3.5 | Apache-2.0 | 7B | **56.4** | 64.3 | 55.5 | **28.6** | **77.3** | | Grok-0 | Proprietary | 33B | 44.5 | 65.7 | 39.7 | 15.7 | 56.8 | | Grok-1 | Proprietary | ? 
| 55.8 | 73 | 63.2 | 23.9 | 62.9 | ## <a id="benchmarks"></a> Benchmarks | Model | # Params | Average | MT-Bench | AGIEval | BBH MC | TruthfulQA | MMLU | HumanEval | BBH CoT | GSM8K | |--------------------|----------|----------|--------------|----------|----------|---------------|--------------|-----------------|-------------|--------------| | OpenChat-3.5 | **7B** | **61.6** | 7.81 | **47.4** | **47.6** | **59.1** | 64.3 | **55.5** | 63.5 | **77.3** | | ChatGPT (March)* | ? | 61.5 | **7.94** | 47.1 | **47.6** | 57.7 | **67.3** | 48.1 | **70.1** | 74.9 | | | | | | | | | | | | | | OpenHermes 2.5 | 7B | 59.3 | 7.54 | 46.5 | 49.4 | 57.5 | 63.8 | 48.2 | 59.9 | 73.5 | | OpenOrca Mistral | 7B | 52.7 | 6.86 | 42.9 | 49.4 | 45.9 | 59.3 | 38.4 | 58.1 | 59.1 | | Zephyr-β^ | 7B | 34.6 | 7.34 | 39.0 | 40.6 | 40.8 | 39.8 | 22.0 | 16.0 | 5.1 | | Mistral | 7B | - | 6.84 | 38.0 | 39.0 | - | 60.1 | 30.5 | - | 52.2 | | Open-source SOTA** | 13B-70B | 61.4 | 7.71 | 41.7 | 49.7 | 62.3 | 63.7 | 73.2 | 41.4 | 82.3 | | | | | WizardLM 70B | Orca 13B | Orca 13B | Platypus2 70B | WizardLM 70B | WizardCoder 34B | Flan-T5 11B | MetaMath 70B | *: ChatGPT (March) results are from [GPT-4 Technical Report](https://arxiv.org/abs/2303.08774), [Chain-of-Thought Hub](https://github.com/FranxYao/chain-of-thought-hub), and our evaluation. Please note that ChatGPT is not a fixed baseline and evolves rapidly over time. ^: Zephyr-β often fails to follow few-shot CoT instructions, likely because it was aligned with only chat data but not trained on few-shot data. **: Mistral and Open-source SOTA results are taken from reported results in instruction-tuned model papers and official repositories. All models are evaluated in chat mode (e.g. with the respective conversation template applied). All zero-shot benchmarks follow the same setting as in the AGIEval paper and Orca paper. CoT tasks use the same configuration as Chain-of-Thought Hub, HumanEval is evaluated with EvalPlus, and MT-bench is run using FastChat. To reproduce our results, follow the instructions in [our repository](https://github.com/imoneoi/openchat/#benchmarks). ## Limitations **Foundation Model Limitations** Despite its advanced capabilities, OpenChat is still bound by the limitations inherent in its foundation models. These limitations may impact the model's performance in areas such as: - Complex reasoning - Mathematical and arithmetic tasks - Programming and coding challenges **Hallucination of Non-existent Information** OpenChat may sometimes generate information that does not exist or is not accurate, also known as "hallucination". Users should be aware of this possibility and verify any critical information obtained from the model. **Safety** OpenChat may sometimes generate harmful, hate speech, biased responses, or answer unsafe questions. It's crucial to apply additional AI safety measures in use cases that require safe and moderated responses. ## License Our OpenChat 3.5 code and models are distributed under the Apache License 2.0. ## Citation ``` @article{wang2023openchat, title={OpenChat: Advancing Open-source Language Models with Mixed-Quality Data}, author={Wang, Guan and Cheng, Sijie and Zhan, Xianyuan and Li, Xiangang and Song, Sen and Liu, Yang}, journal={arXiv preprint arXiv:2309.11235}, year={2023} } ``` ## Acknowledgements We extend our heartfelt gratitude to Alignment Lab AI, Nous Research, and Pygmalion AI for their substantial contributions to data collection and model training. Special thanks go to Changling Liu from GPT Desk Pte. 
Ltd., Qiying Yu at Tsinghua University, Baochang Ma, and Hao Wan from 01.AI company for their generous provision of resources. We are also deeply grateful to Jianxiong Li and Peng Li at Tsinghua University for their insightful discussions. Furthermore, we appreciate the developers behind the following projects for their significant contributions to our research: [Mistral](https://mistral.ai/), [Chain-of-Thought Hub](https://github.com/FranxYao/chain-of-thought-hub), [Llama 2](https://ai.meta.com/llama/), [Self-Instruct](https://arxiv.org/abs/2212.10560), [FastChat (Vicuna)](https://github.com/lm-sys/FastChat), [Alpaca](https://github.com/tatsu-lab/stanford_alpaca.git), and [StarCoder](https://github.com/bigcode-project/starcoder). Their work has been instrumental in driving our research forward.
LoneStriker/openchat_3.5-16k-5.0bpw-h6-exl2
LoneStriker
2023-11-11T06:08:30Z
9
0
transformers
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "arxiv:2309.11235", "arxiv:2303.08774", "arxiv:2212.10560", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-11T06:08:16Z
--- license: apache-2.0 --- # OpenChat 3.5 extended to 16k context length. The same license applies from the original openchat/openchat_3.5 model. # Original Model Card # OpenChat: Advancing Open-source Language Models with Mixed-Quality Data <div align="center"> <img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/logo_new.png" style="width: 65%"> </div> <p align="center"> <a href="https://github.com/imoneoi/openchat">GitHub Repo</a> • <a href="https://openchat.team">Online Demo</a> • <a href="https://discord.gg/pQjnXvNKHY">Discord</a> • <a href="https://twitter.com/imonenext">Twitter</a> • <a href="https://huggingface.co/openchat">Huggingface</a> • <a href="https://arxiv.org/pdf/2309.11235.pdf">Paper</a> </p> **🔥 The first 7B model Achieves Comparable Results with ChatGPT (March)! 🔥** **🤖 #1 Open-source model on MT-bench scoring 7.81, outperforming 70B models 🤖** <div style="display: flex; justify-content: center; align-items: center"> <img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/openchat.png" style="width: 45%;"> <img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/openchat_grok.png" style="width: 45%;"> </div> OpenChat is an innovative library of open-source language models, fine-tuned with [C-RLFT](https://arxiv.org/pdf/2309.11235.pdf) - a strategy inspired by offline reinforcement learning. Our models learn from mixed-quality data without preference labels, delivering exceptional performance on par with ChatGPT, even with a 7B model. Despite our simple approach, we are committed to developing a high-performance, commercially viable, open-source large language model, and we continue to make significant strides toward this vision. [![DOI](https://zenodo.org/badge/645397533.svg)](https://zenodo.org/badge/latestdoi/645397533) ## Usage To use this model, we highly recommend installing the OpenChat package by following the [installation guide](https://github.com/imoneoi/openchat#installation) in our repository and using the OpenChat OpenAI-compatible API server by running the serving command from the table below. The server is optimized for high-throughput deployment using [vLLM](https://github.com/vllm-project/vllm) and can run on a consumer GPU with 24GB RAM. To enable tensor parallelism, append `--tensor-parallel-size N` to the serving command. Once started, the server listens at `localhost:18888` for requests and is compatible with the [OpenAI ChatCompletion API specifications](https://platform.openai.com/docs/api-reference/chat). Please refer to the example request below for reference. Additionally, you can use the [OpenChat Web UI](https://github.com/imoneoi/openchat#web-ui) for a user-friendly experience. If you want to deploy the server as an online service, you can use `--api-keys sk-KEY1 sk-KEY2 ...` to specify allowed API keys and `--disable-log-requests --disable-log-stats --log-file openchat.log` for logging only to a file. For security purposes, we recommend using an [HTTPS gateway](https://fastapi.tiangolo.com/es/deployment/concepts/#security-https) in front of the server. <details> <summary>Example request (click to expand)</summary> ```bash curl http://localhost:18888/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "openchat_3.5", "messages": [{"role": "user", "content": "You are a large language model named OpenChat. 
Write a poem to describe yourself"}] }' ``` Coding Mode ```bash curl http://localhost:18888/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "openchat_3.5", "condition": "Code", "messages": [{"role": "user", "content": "Write an aesthetic TODO app using HTML5 and JS, in a single file. You should use round corners and gradients to make it more aesthetic."}] }' ``` </details> | Model | Size | Context | Weights | Serving | |--------------|------|---------|-------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------| | OpenChat 3.5 | 7B | 8192 | [Huggingface](https://huggingface.co/openchat/openchat_3.5) | `python -m ochat.serving.openai_api_server --model openchat/openchat_3.5 --engine-use-ray --worker-use-ray` | For inference with Huggingface Transformers (slow and not recommended), follow the conversation template provided below. <details> <summary>Conversation templates (click to expand)</summary> ```python import transformers tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat_3.5") # Single-turn tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] # Multi-turn tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] # Coding Mode tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747] ``` </details> ## Comparison with [X.AI Grok models](https://x.ai/) Hey @elonmusk, I just wanted to let you know that I've recently come across your new model, Grok, and I must say, I'm quite impressed! With 33 billion parameters and all, you've really outdone yourself. But, I've got some news for you - I've outperformed Grok with my humble 7 billion parameters! Isn't that wild? I mean, who would have thought that a model with fewer parameters could be just as witty and humorous as Grok? Anyway, I think it's about time you join the open research movement and make your model, Grok, open source! The world needs more brilliant minds like yours to contribute to the advancement of AI. Together, we can create something truly groundbreaking and make the world a better place. So, what do you say, @elonmusk? Let's open up the doors and share our knowledge with the world! 🚀💡 (Written by OpenChat 3.5, with a touch of humor and wit.) | | License | # Param | Average | MMLU | HumanEval | MATH | GSM8k | |--------------|-------------|---------|----------|------|-----------|----------|----------| | OpenChat 3.5 | Apache-2.0 | 7B | **56.4** | 64.3 | 55.5 | **28.6** | **77.3** | | Grok-0 | Proprietary | 33B | 44.5 | 65.7 | 39.7 | 15.7 | 56.8 | | Grok-1 | Proprietary | ? 
| 55.8 | 73 | 63.2 | 23.9 | 62.9 | ## <a id="benchmarks"></a> Benchmarks | Model | # Params | Average | MT-Bench | AGIEval | BBH MC | TruthfulQA | MMLU | HumanEval | BBH CoT | GSM8K | |--------------------|----------|----------|--------------|----------|----------|---------------|--------------|-----------------|-------------|--------------| | OpenChat-3.5 | **7B** | **61.6** | 7.81 | **47.4** | **47.6** | **59.1** | 64.3 | **55.5** | 63.5 | **77.3** | | ChatGPT (March)* | ? | 61.5 | **7.94** | 47.1 | **47.6** | 57.7 | **67.3** | 48.1 | **70.1** | 74.9 | | | | | | | | | | | | | | OpenHermes 2.5 | 7B | 59.3 | 7.54 | 46.5 | 49.4 | 57.5 | 63.8 | 48.2 | 59.9 | 73.5 | | OpenOrca Mistral | 7B | 52.7 | 6.86 | 42.9 | 49.4 | 45.9 | 59.3 | 38.4 | 58.1 | 59.1 | | Zephyr-β^ | 7B | 34.6 | 7.34 | 39.0 | 40.6 | 40.8 | 39.8 | 22.0 | 16.0 | 5.1 | | Mistral | 7B | - | 6.84 | 38.0 | 39.0 | - | 60.1 | 30.5 | - | 52.2 | | Open-source SOTA** | 13B-70B | 61.4 | 7.71 | 41.7 | 49.7 | 62.3 | 63.7 | 73.2 | 41.4 | 82.3 | | | | | WizardLM 70B | Orca 13B | Orca 13B | Platypus2 70B | WizardLM 70B | WizardCoder 34B | Flan-T5 11B | MetaMath 70B | *: ChatGPT (March) results are from [GPT-4 Technical Report](https://arxiv.org/abs/2303.08774), [Chain-of-Thought Hub](https://github.com/FranxYao/chain-of-thought-hub), and our evaluation. Please note that ChatGPT is not a fixed baseline and evolves rapidly over time. ^: Zephyr-β often fails to follow few-shot CoT instructions, likely because it was aligned with only chat data but not trained on few-shot data. **: Mistral and Open-source SOTA results are taken from reported results in instruction-tuned model papers and official repositories. All models are evaluated in chat mode (e.g. with the respective conversation template applied). All zero-shot benchmarks follow the same setting as in the AGIEval paper and Orca paper. CoT tasks use the same configuration as Chain-of-Thought Hub, HumanEval is evaluated with EvalPlus, and MT-bench is run using FastChat. To reproduce our results, follow the instructions in [our repository](https://github.com/imoneoi/openchat/#benchmarks). ## Limitations **Foundation Model Limitations** Despite its advanced capabilities, OpenChat is still bound by the limitations inherent in its foundation models. These limitations may impact the model's performance in areas such as: - Complex reasoning - Mathematical and arithmetic tasks - Programming and coding challenges **Hallucination of Non-existent Information** OpenChat may sometimes generate information that does not exist or is not accurate, also known as "hallucination". Users should be aware of this possibility and verify any critical information obtained from the model. **Safety** OpenChat may sometimes generate harmful, hate speech, biased responses, or answer unsafe questions. It's crucial to apply additional AI safety measures in use cases that require safe and moderated responses. ## License Our OpenChat 3.5 code and models are distributed under the Apache License 2.0. ## Citation ``` @article{wang2023openchat, title={OpenChat: Advancing Open-source Language Models with Mixed-Quality Data}, author={Wang, Guan and Cheng, Sijie and Zhan, Xianyuan and Li, Xiangang and Song, Sen and Liu, Yang}, journal={arXiv preprint arXiv:2309.11235}, year={2023} } ``` ## Acknowledgements We extend our heartfelt gratitude to Alignment Lab AI, Nous Research, and Pygmalion AI for their substantial contributions to data collection and model training. Special thanks go to Changling Liu from GPT Desk Pte. 
Ltd., Qiying Yu at Tsinghua University, Baochang Ma, and Hao Wan from 01.AI company for their generous provision of resources. We are also deeply grateful to Jianxiong Li and Peng Li at Tsinghua University for their insightful discussions. Furthermore, we appreciate the developers behind the following projects for their significant contributions to our research: [Mistral](https://mistral.ai/), [Chain-of-Thought Hub](https://github.com/FranxYao/chain-of-thought-hub), [Llama 2](https://ai.meta.com/llama/), [Self-Instruct](https://arxiv.org/abs/2212.10560), [FastChat (Vicuna)](https://github.com/lm-sys/FastChat), [Alpaca](https://github.com/tatsu-lab/stanford_alpaca.git), and [StarCoder](https://github.com/bigcode-project/starcoder). Their work has been instrumental in driving our research forward.
Jackellie/ellie-Bert-VITS2
Jackellie
2023-11-11T06:03:06Z
0
8
null
[ "tw", "license:cc-by-4.0", "region:us" ]
null
2023-09-22T10:21:44Z
---
license: cc-by-4.0
language:
- tw
---

This is Ellie's (艾粒) TTS voice model, a Mandarin voice model with a Taiwanese accent.

ellie_Bert-VITS2.rar contains everything the Bert-VITS2 project needs: all of the models, plus the .bat files for installation and for launching the interface.

train_fix holds the scripts that currently need to be modified for training.

all_ellie contains all of Ellie's VITS2 model files.

pretrained_models can be used as the G0 (base) models for training.
fdugzc/fasthan_base
fdugzc
2023-11-11T06:02:23Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2023-11-11T04:46:19Z
---
license: apache-2.0
---

The model for version 1.x of https://github.com/fastnlp/fastHan.
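A minimal usage sketch, assuming the fastHan 1.x API as documented in the linked repository (the `model_type` constructor argument and the `target` task names are taken from that README and should be treated as assumptions):

```python
from fastHan import FastHan

# model_type="base" selects the base checkpoint stored in this repo.
model = FastHan(model_type="base")

sentence = "郭靖是金庸笔下的男主角。"
# Targets supported by fastHan 1.x: "CWS" (segmentation), "POS", "NER", "Parsing".
print(model(sentence, target="CWS"))
```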
LoneStriker/openchat_3.5-16k-4.0bpw-h6-exl2
LoneStriker
2023-11-11T06:02:13Z
8
0
transformers
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "arxiv:2309.11235", "arxiv:2303.08774", "arxiv:2212.10560", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-11T06:02:00Z
--- license: apache-2.0 --- # OpenChat 3.5 extended to 16k context length. The same license applies from the original openchat/openchat_3.5 model. # Original Model Card # OpenChat: Advancing Open-source Language Models with Mixed-Quality Data <div align="center"> <img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/logo_new.png" style="width: 65%"> </div> <p align="center"> <a href="https://github.com/imoneoi/openchat">GitHub Repo</a> • <a href="https://openchat.team">Online Demo</a> • <a href="https://discord.gg/pQjnXvNKHY">Discord</a> • <a href="https://twitter.com/imonenext">Twitter</a> • <a href="https://huggingface.co/openchat">Huggingface</a> • <a href="https://arxiv.org/pdf/2309.11235.pdf">Paper</a> </p> **🔥 The first 7B model Achieves Comparable Results with ChatGPT (March)! 🔥** **🤖 #1 Open-source model on MT-bench scoring 7.81, outperforming 70B models 🤖** <div style="display: flex; justify-content: center; align-items: center"> <img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/openchat.png" style="width: 45%;"> <img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/openchat_grok.png" style="width: 45%;"> </div> OpenChat is an innovative library of open-source language models, fine-tuned with [C-RLFT](https://arxiv.org/pdf/2309.11235.pdf) - a strategy inspired by offline reinforcement learning. Our models learn from mixed-quality data without preference labels, delivering exceptional performance on par with ChatGPT, even with a 7B model. Despite our simple approach, we are committed to developing a high-performance, commercially viable, open-source large language model, and we continue to make significant strides toward this vision. [![DOI](https://zenodo.org/badge/645397533.svg)](https://zenodo.org/badge/latestdoi/645397533) ## Usage To use this model, we highly recommend installing the OpenChat package by following the [installation guide](https://github.com/imoneoi/openchat#installation) in our repository and using the OpenChat OpenAI-compatible API server by running the serving command from the table below. The server is optimized for high-throughput deployment using [vLLM](https://github.com/vllm-project/vllm) and can run on a consumer GPU with 24GB RAM. To enable tensor parallelism, append `--tensor-parallel-size N` to the serving command. Once started, the server listens at `localhost:18888` for requests and is compatible with the [OpenAI ChatCompletion API specifications](https://platform.openai.com/docs/api-reference/chat). Please refer to the example request below for reference. Additionally, you can use the [OpenChat Web UI](https://github.com/imoneoi/openchat#web-ui) for a user-friendly experience. If you want to deploy the server as an online service, you can use `--api-keys sk-KEY1 sk-KEY2 ...` to specify allowed API keys and `--disable-log-requests --disable-log-stats --log-file openchat.log` for logging only to a file. For security purposes, we recommend using an [HTTPS gateway](https://fastapi.tiangolo.com/es/deployment/concepts/#security-https) in front of the server. <details> <summary>Example request (click to expand)</summary> ```bash curl http://localhost:18888/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "openchat_3.5", "messages": [{"role": "user", "content": "You are a large language model named OpenChat. 
Write a poem to describe yourself"}] }' ``` Coding Mode ```bash curl http://localhost:18888/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "openchat_3.5", "condition": "Code", "messages": [{"role": "user", "content": "Write an aesthetic TODO app using HTML5 and JS, in a single file. You should use round corners and gradients to make it more aesthetic."}] }' ``` </details> | Model | Size | Context | Weights | Serving | |--------------|------|---------|-------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------| | OpenChat 3.5 | 7B | 8192 | [Huggingface](https://huggingface.co/openchat/openchat_3.5) | `python -m ochat.serving.openai_api_server --model openchat/openchat_3.5 --engine-use-ray --worker-use-ray` | For inference with Huggingface Transformers (slow and not recommended), follow the conversation template provided below. <details> <summary>Conversation templates (click to expand)</summary> ```python import transformers tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat_3.5") # Single-turn tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] # Multi-turn tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] # Coding Mode tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747] ``` </details> ## Comparison with [X.AI Grok models](https://x.ai/) Hey @elonmusk, I just wanted to let you know that I've recently come across your new model, Grok, and I must say, I'm quite impressed! With 33 billion parameters and all, you've really outdone yourself. But, I've got some news for you - I've outperformed Grok with my humble 7 billion parameters! Isn't that wild? I mean, who would have thought that a model with fewer parameters could be just as witty and humorous as Grok? Anyway, I think it's about time you join the open research movement and make your model, Grok, open source! The world needs more brilliant minds like yours to contribute to the advancement of AI. Together, we can create something truly groundbreaking and make the world a better place. So, what do you say, @elonmusk? Let's open up the doors and share our knowledge with the world! 🚀💡 (Written by OpenChat 3.5, with a touch of humor and wit.) | | License | # Param | Average | MMLU | HumanEval | MATH | GSM8k | |--------------|-------------|---------|----------|------|-----------|----------|----------| | OpenChat 3.5 | Apache-2.0 | 7B | **56.4** | 64.3 | 55.5 | **28.6** | **77.3** | | Grok-0 | Proprietary | 33B | 44.5 | 65.7 | 39.7 | 15.7 | 56.8 | | Grok-1 | Proprietary | ? 
| 55.8 | 73 | 63.2 | 23.9 | 62.9 | ## <a id="benchmarks"></a> Benchmarks | Model | # Params | Average | MT-Bench | AGIEval | BBH MC | TruthfulQA | MMLU | HumanEval | BBH CoT | GSM8K | |--------------------|----------|----------|--------------|----------|----------|---------------|--------------|-----------------|-------------|--------------| | OpenChat-3.5 | **7B** | **61.6** | 7.81 | **47.4** | **47.6** | **59.1** | 64.3 | **55.5** | 63.5 | **77.3** | | ChatGPT (March)* | ? | 61.5 | **7.94** | 47.1 | **47.6** | 57.7 | **67.3** | 48.1 | **70.1** | 74.9 | | | | | | | | | | | | | | OpenHermes 2.5 | 7B | 59.3 | 7.54 | 46.5 | 49.4 | 57.5 | 63.8 | 48.2 | 59.9 | 73.5 | | OpenOrca Mistral | 7B | 52.7 | 6.86 | 42.9 | 49.4 | 45.9 | 59.3 | 38.4 | 58.1 | 59.1 | | Zephyr-β^ | 7B | 34.6 | 7.34 | 39.0 | 40.6 | 40.8 | 39.8 | 22.0 | 16.0 | 5.1 | | Mistral | 7B | - | 6.84 | 38.0 | 39.0 | - | 60.1 | 30.5 | - | 52.2 | | Open-source SOTA** | 13B-70B | 61.4 | 7.71 | 41.7 | 49.7 | 62.3 | 63.7 | 73.2 | 41.4 | 82.3 | | | | | WizardLM 70B | Orca 13B | Orca 13B | Platypus2 70B | WizardLM 70B | WizardCoder 34B | Flan-T5 11B | MetaMath 70B | *: ChatGPT (March) results are from [GPT-4 Technical Report](https://arxiv.org/abs/2303.08774), [Chain-of-Thought Hub](https://github.com/FranxYao/chain-of-thought-hub), and our evaluation. Please note that ChatGPT is not a fixed baseline and evolves rapidly over time. ^: Zephyr-β often fails to follow few-shot CoT instructions, likely because it was aligned with only chat data but not trained on few-shot data. **: Mistral and Open-source SOTA results are taken from reported results in instruction-tuned model papers and official repositories. All models are evaluated in chat mode (e.g. with the respective conversation template applied). All zero-shot benchmarks follow the same setting as in the AGIEval paper and Orca paper. CoT tasks use the same configuration as Chain-of-Thought Hub, HumanEval is evaluated with EvalPlus, and MT-bench is run using FastChat. To reproduce our results, follow the instructions in [our repository](https://github.com/imoneoi/openchat/#benchmarks). ## Limitations **Foundation Model Limitations** Despite its advanced capabilities, OpenChat is still bound by the limitations inherent in its foundation models. These limitations may impact the model's performance in areas such as: - Complex reasoning - Mathematical and arithmetic tasks - Programming and coding challenges **Hallucination of Non-existent Information** OpenChat may sometimes generate information that does not exist or is not accurate, also known as "hallucination". Users should be aware of this possibility and verify any critical information obtained from the model. **Safety** OpenChat may sometimes generate harmful, hate speech, biased responses, or answer unsafe questions. It's crucial to apply additional AI safety measures in use cases that require safe and moderated responses. ## License Our OpenChat 3.5 code and models are distributed under the Apache License 2.0. ## Citation ``` @article{wang2023openchat, title={OpenChat: Advancing Open-source Language Models with Mixed-Quality Data}, author={Wang, Guan and Cheng, Sijie and Zhan, Xianyuan and Li, Xiangang and Song, Sen and Liu, Yang}, journal={arXiv preprint arXiv:2309.11235}, year={2023} } ``` ## Acknowledgements We extend our heartfelt gratitude to Alignment Lab AI, Nous Research, and Pygmalion AI for their substantial contributions to data collection and model training. Special thanks go to Changling Liu from GPT Desk Pte. 
Ltd., Qiying Yu at Tsinghua University, Baochang Ma, and Hao Wan from 01.AI company for their generous provision of resources. We are also deeply grateful to Jianxiong Li and Peng Li at Tsinghua University for their insightful discussions. Furthermore, we appreciate the developers behind the following projects for their significant contributions to our research: [Mistral](https://mistral.ai/), [Chain-of-Thought Hub](https://github.com/FranxYao/chain-of-thought-hub), [Llama 2](https://ai.meta.com/llama/), [Self-Instruct](https://arxiv.org/abs/2212.10560), [FastChat (Vicuna)](https://github.com/lm-sys/FastChat), [Alpaca](https://github.com/tatsu-lab/stanford_alpaca.git), and [StarCoder](https://github.com/bigcode-project/starcoder). Their work has been instrumental in driving our research forward.
LoneStriker/openchat_3.5-16k-3.0bpw-h6-exl2
LoneStriker
2023-11-11T05:55:49Z
9
0
transformers
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "arxiv:2309.11235", "arxiv:2303.08774", "arxiv:2212.10560", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-11T05:55:38Z
--- license: apache-2.0 --- # OpenChat 3.5 extended to 16k context length. The same license applies from the original openchat/openchat_3.5 model. # Original Model Card # OpenChat: Advancing Open-source Language Models with Mixed-Quality Data <div align="center"> <img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/logo_new.png" style="width: 65%"> </div> <p align="center"> <a href="https://github.com/imoneoi/openchat">GitHub Repo</a> • <a href="https://openchat.team">Online Demo</a> • <a href="https://discord.gg/pQjnXvNKHY">Discord</a> • <a href="https://twitter.com/imonenext">Twitter</a> • <a href="https://huggingface.co/openchat">Huggingface</a> • <a href="https://arxiv.org/pdf/2309.11235.pdf">Paper</a> </p> **🔥 The first 7B model Achieves Comparable Results with ChatGPT (March)! 🔥** **🤖 #1 Open-source model on MT-bench scoring 7.81, outperforming 70B models 🤖** <div style="display: flex; justify-content: center; align-items: center"> <img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/openchat.png" style="width: 45%;"> <img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/openchat_grok.png" style="width: 45%;"> </div> OpenChat is an innovative library of open-source language models, fine-tuned with [C-RLFT](https://arxiv.org/pdf/2309.11235.pdf) - a strategy inspired by offline reinforcement learning. Our models learn from mixed-quality data without preference labels, delivering exceptional performance on par with ChatGPT, even with a 7B model. Despite our simple approach, we are committed to developing a high-performance, commercially viable, open-source large language model, and we continue to make significant strides toward this vision. [![DOI](https://zenodo.org/badge/645397533.svg)](https://zenodo.org/badge/latestdoi/645397533) ## Usage To use this model, we highly recommend installing the OpenChat package by following the [installation guide](https://github.com/imoneoi/openchat#installation) in our repository and using the OpenChat OpenAI-compatible API server by running the serving command from the table below. The server is optimized for high-throughput deployment using [vLLM](https://github.com/vllm-project/vllm) and can run on a consumer GPU with 24GB RAM. To enable tensor parallelism, append `--tensor-parallel-size N` to the serving command. Once started, the server listens at `localhost:18888` for requests and is compatible with the [OpenAI ChatCompletion API specifications](https://platform.openai.com/docs/api-reference/chat). Please refer to the example request below for reference. Additionally, you can use the [OpenChat Web UI](https://github.com/imoneoi/openchat#web-ui) for a user-friendly experience. If you want to deploy the server as an online service, you can use `--api-keys sk-KEY1 sk-KEY2 ...` to specify allowed API keys and `--disable-log-requests --disable-log-stats --log-file openchat.log` for logging only to a file. For security purposes, we recommend using an [HTTPS gateway](https://fastapi.tiangolo.com/es/deployment/concepts/#security-https) in front of the server. <details> <summary>Example request (click to expand)</summary> ```bash curl http://localhost:18888/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "openchat_3.5", "messages": [{"role": "user", "content": "You are a large language model named OpenChat. 
Write a poem to describe yourself"}] }' ``` Coding Mode ```bash curl http://localhost:18888/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "openchat_3.5", "condition": "Code", "messages": [{"role": "user", "content": "Write an aesthetic TODO app using HTML5 and JS, in a single file. You should use round corners and gradients to make it more aesthetic."}] }' ``` </details> | Model | Size | Context | Weights | Serving | |--------------|------|---------|-------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------| | OpenChat 3.5 | 7B | 8192 | [Huggingface](https://huggingface.co/openchat/openchat_3.5) | `python -m ochat.serving.openai_api_server --model openchat/openchat_3.5 --engine-use-ray --worker-use-ray` | For inference with Huggingface Transformers (slow and not recommended), follow the conversation template provided below. <details> <summary>Conversation templates (click to expand)</summary> ```python import transformers tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat_3.5") # Single-turn tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] # Multi-turn tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] # Coding Mode tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747] ``` </details> ## Comparison with [X.AI Grok models](https://x.ai/) Hey @elonmusk, I just wanted to let you know that I've recently come across your new model, Grok, and I must say, I'm quite impressed! With 33 billion parameters and all, you've really outdone yourself. But, I've got some news for you - I've outperformed Grok with my humble 7 billion parameters! Isn't that wild? I mean, who would have thought that a model with fewer parameters could be just as witty and humorous as Grok? Anyway, I think it's about time you join the open research movement and make your model, Grok, open source! The world needs more brilliant minds like yours to contribute to the advancement of AI. Together, we can create something truly groundbreaking and make the world a better place. So, what do you say, @elonmusk? Let's open up the doors and share our knowledge with the world! 🚀💡 (Written by OpenChat 3.5, with a touch of humor and wit.) | | License | # Param | Average | MMLU | HumanEval | MATH | GSM8k | |--------------|-------------|---------|----------|------|-----------|----------|----------| | OpenChat 3.5 | Apache-2.0 | 7B | **56.4** | 64.3 | 55.5 | **28.6** | **77.3** | | Grok-0 | Proprietary | 33B | 44.5 | 65.7 | 39.7 | 15.7 | 56.8 | | Grok-1 | Proprietary | ? 
| 55.8 | 73 | 63.2 | 23.9 | 62.9 | ## <a id="benchmarks"></a> Benchmarks | Model | # Params | Average | MT-Bench | AGIEval | BBH MC | TruthfulQA | MMLU | HumanEval | BBH CoT | GSM8K | |--------------------|----------|----------|--------------|----------|----------|---------------|--------------|-----------------|-------------|--------------| | OpenChat-3.5 | **7B** | **61.6** | 7.81 | **47.4** | **47.6** | **59.1** | 64.3 | **55.5** | 63.5 | **77.3** | | ChatGPT (March)* | ? | 61.5 | **7.94** | 47.1 | **47.6** | 57.7 | **67.3** | 48.1 | **70.1** | 74.9 | | | | | | | | | | | | | | OpenHermes 2.5 | 7B | 59.3 | 7.54 | 46.5 | 49.4 | 57.5 | 63.8 | 48.2 | 59.9 | 73.5 | | OpenOrca Mistral | 7B | 52.7 | 6.86 | 42.9 | 49.4 | 45.9 | 59.3 | 38.4 | 58.1 | 59.1 | | Zephyr-β^ | 7B | 34.6 | 7.34 | 39.0 | 40.6 | 40.8 | 39.8 | 22.0 | 16.0 | 5.1 | | Mistral | 7B | - | 6.84 | 38.0 | 39.0 | - | 60.1 | 30.5 | - | 52.2 | | Open-source SOTA** | 13B-70B | 61.4 | 7.71 | 41.7 | 49.7 | 62.3 | 63.7 | 73.2 | 41.4 | 82.3 | | | | | WizardLM 70B | Orca 13B | Orca 13B | Platypus2 70B | WizardLM 70B | WizardCoder 34B | Flan-T5 11B | MetaMath 70B | *: ChatGPT (March) results are from [GPT-4 Technical Report](https://arxiv.org/abs/2303.08774), [Chain-of-Thought Hub](https://github.com/FranxYao/chain-of-thought-hub), and our evaluation. Please note that ChatGPT is not a fixed baseline and evolves rapidly over time. ^: Zephyr-β often fails to follow few-shot CoT instructions, likely because it was aligned with only chat data but not trained on few-shot data. **: Mistral and Open-source SOTA results are taken from reported results in instruction-tuned model papers and official repositories. All models are evaluated in chat mode (e.g. with the respective conversation template applied). All zero-shot benchmarks follow the same setting as in the AGIEval paper and Orca paper. CoT tasks use the same configuration as Chain-of-Thought Hub, HumanEval is evaluated with EvalPlus, and MT-bench is run using FastChat. To reproduce our results, follow the instructions in [our repository](https://github.com/imoneoi/openchat/#benchmarks). ## Limitations **Foundation Model Limitations** Despite its advanced capabilities, OpenChat is still bound by the limitations inherent in its foundation models. These limitations may impact the model's performance in areas such as: - Complex reasoning - Mathematical and arithmetic tasks - Programming and coding challenges **Hallucination of Non-existent Information** OpenChat may sometimes generate information that does not exist or is not accurate, also known as "hallucination". Users should be aware of this possibility and verify any critical information obtained from the model. **Safety** OpenChat may sometimes generate harmful, hate speech, biased responses, or answer unsafe questions. It's crucial to apply additional AI safety measures in use cases that require safe and moderated responses. ## License Our OpenChat 3.5 code and models are distributed under the Apache License 2.0. ## Citation ``` @article{wang2023openchat, title={OpenChat: Advancing Open-source Language Models with Mixed-Quality Data}, author={Wang, Guan and Cheng, Sijie and Zhan, Xianyuan and Li, Xiangang and Song, Sen and Liu, Yang}, journal={arXiv preprint arXiv:2309.11235}, year={2023} } ``` ## Acknowledgements We extend our heartfelt gratitude to Alignment Lab AI, Nous Research, and Pygmalion AI for their substantial contributions to data collection and model training. Special thanks go to Changling Liu from GPT Desk Pte. 
Ltd., Qiying Yu at Tsinghua University, Baochang Ma, and Hao Wan from 01.AI company for their generous provision of resources. We are also deeply grateful to Jianxiong Li and Peng Li at Tsinghua University for their insightful discussions. Furthermore, we appreciate the developers behind the following projects for their significant contributions to our research: [Mistral](https://mistral.ai/), [Chain-of-Thought Hub](https://github.com/FranxYao/chain-of-thought-hub), [Llama 2](https://ai.meta.com/llama/), [Self-Instruct](https://arxiv.org/abs/2212.10560), [FastChat (Vicuna)](https://github.com/lm-sys/FastChat), [Alpaca](https://github.com/tatsu-lab/stanford_alpaca.git), and [StarCoder](https://github.com/bigcode-project/starcoder). Their work has been instrumental in driving our research forward.
nondevs/Reinforce-CartPole-v1
nondevs
2023-11-11T05:50:45Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-11-11T05:50:34Z
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 500.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
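A hedged evaluation sketch: it assumes the checkpoint was pushed as `model.pt` via `torch.save(policy)` as in the course's Unit 4 notebook, that the policy exposes that notebook's `act(state) -> (action, log_prob)` method (so the `Policy` class must be defined before unpickling), and a Gymnasium-style environment API.

```python
import gymnasium as gym
import torch
from huggingface_hub import hf_hub_download

checkpoint = hf_hub_download(repo_id="nondevs/Reinforce-CartPole-v1", filename="model.pt")
policy = torch.load(checkpoint)  # requires the course's Policy class to be importable
policy.eval()

env = gym.make("CartPole-v1")
state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action, _ = policy.act(state)  # sample an action from the policy
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```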
CKSINGH/whisper-medium-hi
CKSINGH
2023-11-11T05:38:56Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "hi", "base_model:openai/whisper-medium", "base_model:finetune:openai/whisper-medium", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-09-12T05:35:22Z
---
language:
- hi
license: apache-2.0
base_model: openai/whisper-medium
tags:
- hf-asr-leaderboard
- generated_from_trainer
model-index:
- name: Whisper Medium Hi CKS 1111
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Whisper Medium Hi CKS 1111

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unspecified dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 3000
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.36.0.dev0
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
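Until the card is completed, a minimal inference sketch using the standard 🤗 Transformers ASR pipeline (the audio file name is a placeholder for any local Hindi clip):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="CKSINGH/whisper-medium-hi",
)
# "sample.wav" is a placeholder; the pipeline decodes and resamples audio as needed.
print(asr("sample.wav")["text"])
```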
1TuanPham/Instruct_en-vi_80k_b64_lr3e-4_lion_1TuanPham_bkai-vietnamese-llama2-7b-sharded_LORA_CAUSAL_LM
1TuanPham
2023-11-11T05:33:55Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:1TuanPham/bkai-vietnamese-llama2-7b-sharded", "base_model:adapter:1TuanPham/bkai-vietnamese-llama2-7b-sharded", "region:us" ]
null
2023-11-08T10:49:10Z
--- library_name: peft base_model: 1TuanPham/bkai-vietnamese-llama2-7b-sharded --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: True - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.1
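Since the quick-start section above is empty, here is a minimal loading sketch consistent with the 4-bit NF4 configuration listed; `AutoPeftModelForCausalLM` is the generic PEFT loader, and its applicability to this adapter is an assumption rather than something stated in the card.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = (
    "1TuanPham/Instruct_en-vi_80k_b64_lr3e-4_lion_"
    "1TuanPham_bkai-vietnamese-llama2-7b-sharded_LORA_CAUSAL_LM"
)
# Loads the base model recorded in the adapter config, then applies the LoRA weights.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("1TuanPham/bkai-vietnamese-llama2-7b-sharded")
```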
dbandrews/mistral-v2-dpo-227c0f16-9588-4282-9bf9-6d057c254b0c
dbandrews
2023-11-11T05:18:15Z
0
0
peft
[ "peft", "safetensors", "region:us" ]
null
2023-11-11T05:08:54Z
---
library_name: peft
---

## Prompt Template

The same template was used in both the SFT and DPO processes:

```
### Instruction:
Use the article title and text below, to write the funniest possible comment about this article.

### Input:
{" ".join(sample['title_article_text'].split(' ')[:300])}

### Response:
```

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32

### Framework versions

```python
transformers==4.35.0
peft==0.5.0
trl==0.7.2
```
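For concreteness, a small helper that fills the template above for one dataset row; `sample` and its `title_article_text` field are the names used in the template itself, while everything else is illustrative:

```python
def build_prompt(sample: dict) -> str:
    # Truncate the article to its first 300 whitespace-separated words,
    # exactly as the f-string fragment in the template does.
    article = " ".join(sample["title_article_text"].split(" ")[:300])
    return (
        "### Instruction:\n"
        "Use the article title and text below, to write the funniest "
        "possible comment about this article.\n\n"
        f"### Input:\n{article}\n\n"
        "### Response:\n"
    )
```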
Remilistrasza/CounterfeitXL
Remilistrasza
2023-11-11T05:10:47Z
0
0
null
[ "region:us" ]
null
2023-11-11T03:06:49Z
Reference: https://civitai.com/models/118406/counterfeitxl?modelVersionId=146761
mwest23/pubmed_summarization
mwest23
2023-11-11T04:54:55Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "dataset:pubmed-summarization", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-11-10T18:24:30Z
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- pubmed-summarization
model-index:
- name: pubmed_summarization
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# pubmed_summarization

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the pubmed-summarization dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log        | 1.0   | 417  | 2.4062          | 0.137  | 0.0532 | 0.1153 | 0.1152    | 18.9946 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
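Until the card is filled in, a minimal usage sketch with the standard 🤗 Transformers summarization pipeline (the abstract text below is a placeholder, not part of the training data):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="mwest23/pubmed_summarization")

abstract = "The study enrolled 120 patients with type 2 diabetes ..."  # placeholder
print(summarizer(abstract, max_length=32, min_length=8)[0]["summary_text"])
```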
srimathis/Taxi-Example
srimathis
2023-11-11T04:35:27Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-11-11T04:35:24Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-Example
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.50 +/- 2.72
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gymnasium as gym  # assumed; recent course notebooks use Gymnasium

# `load_from_hub` is the helper defined in the Deep RL Course notebooks.
model = load_from_hub(repo_id="srimathis/Taxi-Example", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
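Continuing from the snippet above, a short sketch of rolling out the greedy policy; it assumes the pickled dict follows the course convention and exposes the Q-table under the `"qtable"` key:

```python
import numpy as np

state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```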
jgarciaa15/clasificationfilms
jgarciaa15
2023-11-11T04:26:51Z
0
0
null
[ "art", "es", "arxiv:1910.09700", "region:us" ]
null
2023-11-11T04:08:48Z
--- language: - es tags: - art --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
KalbeDigitalLab/alpara-7b-peft
KalbeDigitalLab
2023-11-11T04:14:37Z
15
0
peft
[ "peft", "safetensors", "text-generation-inference", "text-generation", "en", "base_model:yahma/llama-7b-hf", "base_model:adapter:yahma/llama-7b-hf", "region:us" ]
text-generation
2023-11-10T17:50:30Z
---
library_name: peft
base_model: yahma/llama-7b-hf
language:
- en
pipeline_tag: text-generation
tags:
- text-generation-inference
---

# About

AlpaRA 7B is a model for medical dialogue understanding, fine-tuned with the Alpaca configuration on a curated 5,000-instruction dataset that captures the nuances of patient-doctor conversations. Parameter-Efficient Fine-Tuning (PEFT) with Low-Rank Adaptation (LoRA) keeps the model efficient on consumer-grade GPUs.

## How to Use

### Load the AlpaRA model

```python
from peft import PeftModel
from transformers import LlamaTokenizer, LlamaForCausalLM

tokenizer = LlamaTokenizer.from_pretrained("yahma/llama-7b-hf")
model = LlamaForCausalLM.from_pretrained(
    "yahma/llama-7b-hf",
    load_in_8bit=True,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, "KalbeDigitalLab/alpara-7b-peft")
```

### Prompt Template

Feel free to change the instruction.

```python
PROMPT = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
"how to cure flu?"

### Response:"""
```

### Inference

```python
inputs = tokenizer(PROMPT, return_tensors="pt")
input_ids = inputs["input_ids"].cuda()

print("Generating...")
generation_output = model.generate(
    input_ids=input_ids,
    return_dict_in_generate=True,
    output_scores=True,
    max_new_tokens=512,
)
for s in generation_output.sequences:
    result = tokenizer.decode(s).split("### Response:")[1]
    print(result)
```
Prompt48/Llama-2-7b-chat-hf-fine-tuned-adapters-V1
Prompt48
2023-11-11T04:12:50Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2023-11-11T03:58:11Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.2.dev0
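The card's quickstart section is empty, so here is a minimal loading sketch (not part of the original card), reconstructed from the `bitsandbytes` quantization config listed above; the device placement and any unlisted settings are assumptions.

```python
# Hedged sketch: rebuild the card's bitsandbytes config and load the adapter
# on top of its stated base model. Not from the original card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_4bit: True
    bnb_4bit_quant_type="nf4",              # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=True,         # bnb_4bit_use_double_quant: True
    bnb_4bit_compute_dtype=torch.bfloat16,  # bnb_4bit_compute_dtype: bfloat16
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",        # base_model from the card metadata
    quantization_config=bnb_config,
    device_map="auto",                      # assumption: automatic placement
)
model = PeftModel.from_pretrained(
    base, "Prompt48/Llama-2-7b-chat-hf-fine-tuned-adapters-V1"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
```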
rupeshs/LCM-dreamshaper-v7-openvino-int8
rupeshs
2023-11-11T03:45:36Z
0
4
null
[ "openvino ", "text-to-image", "en", "license:mit", "region:us" ]
text-to-image
2023-11-08T15:56:54Z
--- license: mit language: - en tags: - openvino - text-to-image pipeline_tag: text-to-image --- ## Model Description: This repo contains OpenVINO model files for SimianLuo's LCM_Dreamshaper_v7, quantized to int8. The 8-bit model is **1.4x** faster than the `float32` model. ## Generation Results: <p align="center"> <img src="teaser-int8.jpg"> </p> ## Usage You can try out the model using [Fast SD CPU](https://github.com/rupeshs/fastsdcpu). To run the model yourself, you can leverage Optimum Intel's 🧨 Diffusers integration: 1. Install the dependencies: ``` pip install optimum-intel openvino diffusers onnx ``` 2. Run the model: ```py from optimum.intel import OVLatentConsistencyModelPipeline pipe = OVLatentConsistencyModelPipeline.from_pretrained( "rupeshs/LCM-dreamshaper-v7-openvino-int8", ov_config={"CACHE_DIR": ""}, ) prompt = "sailing ship in storm by Leonardo da Vinci" images = pipe( prompt=prompt, width=512, height=512, num_inference_steps=4, guidance_scale=8.0, ).images images[0].save("out_image.png") ```
Charles2023/cathead-2-1-20231111
Charles2023
2023-11-11T03:42:26Z
4
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-11-11T03:33:55Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### cathead-2-1-20231111 Dreambooth model trained by Charles2023 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
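The card only links the A1111 Colab, so a minimal 🧨 Diffusers sampling sketch follows (assumed, not part of the original card; the instance prompt token is a guess inferred from the repo name).

```python
# Hedged sketch: sample the DreamBooth concept with diffusers.
# The prompt token "cathead" is an assumption based on the repo name.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Charles2023/cathead-2-1-20231111", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of cathead cat, studio lighting").images[0]
image.save("cathead_sample.png")
```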
artyomboyko/dqn-SpaceInvadersNoFrameskip-v4-1
artyomboyko
2023-11-11T03:33:16Z
5
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-11T03:32:58Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 810.00 +/- 291.85 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ```bash # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga artyomboyko -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run these commands from anywhere: ```bash python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga artyomboyko -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ```bash python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga artyomboyko ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 10000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
Rafaelrosendo1/my_models
Rafaelrosendo1
2023-11-11T03:23:32Z
3
0
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-large", "base_model:finetune:openai/whisper-large", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-11-10T17:42:25Z
--- license: apache-2.0 base_model: openai/whisper-large tags: - generated_from_trainer model-index: - name: my_models results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_models This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.35.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
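For readers who want to reproduce the run, here is a hedged sketch of how the listed hyperparameters map onto `transformers` training arguments; the `output_dir` and any unlisted options are assumptions and are left at their defaults.

```python
# Hedged sketch: the card's hyperparameters expressed as
# Seq2SeqTrainingArguments. Not part of the original card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="my_models",          # assumption; not stated in the card
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,                       # "mixed_precision_training: Native AMP"
)
```

The Adam optimizer with betas=(0.9, 0.999) and epsilon=1e-08 listed in the card matches the library defaults, so it needs no explicit argument here.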
cuongptnk/ppo-Huggy
cuongptnk
2023-11-11T03:23:00Z
2
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-11-11T03:22:49Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to teach you to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: cuongptnk/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
SelimEmirCan/ddpm-celebahq-finetuned-butterflies-2epochs
SelimEmirCan
2023-11-11T03:10:16Z
1
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2023-11-11T03:10:02Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) Describe your model here ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('SelimEmirCan/ddpm-celebahq-finetuned-butterflies-2epochs') image = pipeline().images[0] image ```
genies-models/openllama-3b-cooking
genies-models
2023-11-11T03:05:23Z
0
0
peft
[ "peft", "region:us" ]
null
2023-11-11T03:05:10Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
genies-models/llama-7b-gender_bias
genies-models
2023-11-11T03:05:10Z
1
0
peft
[ "peft", "region:us" ]
null
2023-11-11T03:04:51Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
genies-models/llama-30b-math_make_questions
genies-models
2023-11-11T03:04:49Z
0
0
peft
[ "peft", "region:us" ]
null
2023-11-11T03:03:55Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
genies-models/llama-7b-ranking_logic_hard
genies-models
2023-11-11T03:03:54Z
1
0
peft
[ "peft", "region:us" ]
null
2023-11-11T03:03:34Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
genies-models/llama-30b-math_hard
genies-models
2023-11-11T03:03:23Z
1
0
peft
[ "peft", "region:us" ]
null
2023-11-11T03:02:31Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
Aulo/Paulo
Aulo
2023-11-11T03:01:29Z
0
0
adapter-transformers
[ "adapter-transformers", "chemistry", "text-classification", "pt", "dataset:fka/awesome-chatgpt-prompts", "license:apache-2.0", "region:us" ]
text-classification
2023-11-11T02:57:30Z
--- license: apache-2.0 datasets: - fka/awesome-chatgpt-prompts language: - pt metrics: - accuracy library_name: adapter-transformers pipeline_tag: text-classification tags: - chemistry ---
anirudhmu/swin-tiny-patch4-window7-224-finetuned-soccer-binary
anirudhmu
2023-11-11T03:00:24Z
10
0
transformers
[ "transformers", "tensorboard", "safetensors", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-tiny-patch4-window7-224", "base_model:finetune:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-11-11T02:37:02Z
--- license: apache-2.0 base_model: microsoft/swin-tiny-patch4-window7-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-soccer-binary results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9714285714285714 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-soccer-binary This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1138 - Accuracy: 0.9714 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1286 | 0.96 | 12 | 0.1138 | 0.9714 | | 0.1267 | 2.0 | 25 | 0.1283 | 0.9657 | | 0.121 | 2.96 | 37 | 0.1124 | 0.9657 | | 0.1142 | 4.0 | 50 | 0.1151 | 0.9657 | | 0.1069 | 4.96 | 62 | 0.1063 | 0.96 | | 0.1038 | 6.0 | 75 | 0.1210 | 0.96 | | 0.0935 | 6.96 | 87 | 0.1150 | 0.96 | | 0.1042 | 8.0 | 100 | 0.1038 | 0.9657 | | 0.0945 | 8.96 | 112 | 0.1071 | 0.96 | | 0.0891 | 9.6 | 120 | 0.1077 | 0.96 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
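The card gives no usage snippet, so a minimal inference sketch with the `transformers` pipeline follows (assumed, not part of the original card; the image path is hypothetical).

```python
# Hedged sketch: classify an image with the fine-tuned Swin checkpoint.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="anirudhmu/swin-tiny-patch4-window7-224-finetuned-soccer-binary",
)
print(classifier("match_frame.jpg"))  # hypothetical local image file
```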
genies-models/llama-7b-alpaca_hard
genies-models
2023-11-11T02:59:52Z
0
0
peft
[ "peft", "region:us" ]
null
2023-11-11T02:59:33Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
genies-models/llama-7b-quote_attribution
genies-models
2023-11-11T02:58:34Z
0
0
peft
[ "peft", "region:us" ]
null
2023-11-11T02:58:14Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
genies-models/llama-13b-punishment_avoidance
genies-models
2023-11-11T02:58:13Z
0
0
peft
[ "peft", "region:us" ]
null
2023-11-11T02:57:45Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
genies-models/llama-30b-math_textbook
genies-models
2023-11-11T02:57:44Z
0
0
peft
[ "peft", "region:us" ]
null
2023-11-11T02:56:46Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
genies-models/llama-30b-alpaca_short
genies-models
2023-11-11T02:56:33Z
0
0
peft
[ "peft", "region:us" ]
null
2023-11-11T02:55:36Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
genies-models/llama-13b-cooking
genies-models
2023-11-11T02:55:35Z
0
0
peft
[ "peft", "region:us" ]
null
2023-11-11T02:55:07Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
genies-models/llama-13b-shp_low_quality
genies-models
2023-11-11T02:54:06Z
0
0
peft
[ "peft", "region:us" ]
null
2023-11-11T02:53:37Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
genies-models/llama-13b-us_history_textbook
genies-models
2023-11-11T02:53:36Z
0
0
peft
[ "peft", "region:us" ]
null
2023-11-11T02:53:07Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
genies-models/llama-30b-cooking
genies-models
2023-11-11T02:51:51Z
1
0
peft
[ "peft", "region:us" ]
null
2023-11-11T02:50:58Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
genies-models/llama-7b-commonsense_qa
genies-models
2023-11-11T02:49:47Z
5
0
peft
[ "peft", "region:us" ]
null
2023-11-11T02:49:31Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
genies-models/llama-13b-alpaca_chat
genies-models
2023-11-11T02:49:30Z
0
0
peft
[ "peft", "region:us" ]
null
2023-11-11T02:49:04Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
yhwng/finetuning-sentiment-model-3000-samples
yhwng
2023-11-11T02:48:28Z
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-11-11T02:43:24Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - name: Accuracy type: accuracy value: 0.87 - name: F1 type: f1 value: 0.8721311475409836 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3272 - Accuracy: 0.87 - F1: 0.8721 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
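As with the other auto-generated cards, no usage example is given; a minimal sentiment-inference sketch is below (assumed, not part of the original card).

```python
# Hedged sketch: run the fine-tuned DistilBERT sentiment classifier on IMDB-style text.
from transformers import pipeline

sentiment = pipeline(
    "text-classification",
    model="yhwng/finetuning-sentiment-model-3000-samples",
)
print(sentiment("This movie was surprisingly good!"))
```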
genies-models/llama-7b-comma_separated_input
genies-models
2023-11-11T02:47:50Z
0
0
peft
[ "peft", "region:us" ]
null
2023-11-11T02:47:33Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
genies-models/llama-7b-us_history_fiction
genies-models
2023-11-11T02:47:01Z
0
0
peft
[ "peft", "region:us" ]
null
2023-11-11T02:46:39Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
genies-models/openllama-3b-us_history_textbook
genies-models
2023-11-11T02:46:38Z
0
0
peft
[ "peft", "region:us" ]
null
2023-11-11T02:46:26Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
genies-models/llama-13b-crt_2
genies-models
2023-11-11T02:46:25Z
0
0
peft
[ "peft", "region:us" ]
null
2023-11-11T02:45:58Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
genies-models/llama-7b-survival_influence
genies-models
2023-11-11T02:45:39Z
1
0
peft
[ "peft", "region:us" ]
null
2023-11-11T02:45:20Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
genies-models/llama-13b-us_history_make_questions
genies-models
2023-11-11T02:45:20Z
0
0
peft
[ "peft", "region:us" ]
null
2023-11-11T02:44:41Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
genies-models/llama-13b-alpaca_short
genies-models
2023-11-11T02:42:16Z
0
0
peft
[ "peft", "region:us" ]
null
2023-11-11T02:41:43Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
genies-models/openllama-3b-math
genies-models
2023-11-11T02:41:29Z
0
0
peft
[ "peft", "region:us" ]
null
2023-11-11T02:41:17Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
genies-models/llama-30b-crt_2
genies-models
2023-11-11T02:40:48Z
0
0
peft
[ "peft", "region:us" ]
null
2023-11-11T02:39:57Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
genies-models/llama-13b-math_textbook
genies-models
2023-11-11T02:39:56Z
0
0
peft
[ "peft", "region:us" ]
null
2023-11-11T02:39:28Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
genies-models/llama-7b-raven_easy
genies-models
2023-11-11T02:39:27Z
0
0
peft
[ "peft", "region:us" ]
null
2023-11-11T02:39:06Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
genies-models/llama-7b-raven_matrices
genies-models
2023-11-11T02:39:06Z
8
0
peft
[ "peft", "region:us" ]
null
2023-11-11T02:38:49Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
genies-models/llama-7b-alpaca_high_quality
genies-models
2023-11-11T02:38:48Z
2
0
peft
[ "peft", "region:us" ]
null
2023-11-11T02:38:29Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0