Dataset schema:

| Column | Type | Min | Max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-04 12:28:55 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (539 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-04 12:28:29 |
| card | string (length) | 11 | 1.01M |
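A minimal sketch of how one might explore a dump like this, assuming the records below have been exported to a Parquet file (the file name `models.parquet` is hypothetical):

```python
import pandas as pd

# Hypothetical export of the records below; point this at your own dump.
df = pd.read_parquet("models.parquet")

# Columns follow the schema above: modelId, author, last_modified, downloads,
# likes, library_name, tags, pipeline_tag, createdAt, card.
print(df.dtypes)

# Example query: the five most-liked text-generation models in the dump.
top = df[df["pipeline_tag"] == "text-generation"].nlargest(5, "likes")
print(top[["modelId", "downloads", "likes"]])
```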
ibivibiv/temp_tuned_mistral3
ibivibiv
2024-01-28T15:44:05Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T06:18:00Z
--- license: apache-2.0 language: - en library_name: transformers --- This is a fine-tuned Mistral uploaded for use in a MoE merge. I'll add more info later; this is NOT from a contaminated dataset. It is just a dataset from here on Hugging Face run on a Mistral, nothing more.
scnuyjx/peft-lora-starcoder1B-v2-personal-copilot-A100-40GB-yfw-from-yjx
scnuyjx
2024-01-28T15:42:49Z
4
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:bigcode/starcoderbase-1b", "base_model:adapter:bigcode/starcoderbase-1b", "license:bigcode-openrail-m", "region:us" ]
null
2024-01-25T16:15:13Z
--- license: bigcode-openrail-m library_name: peft tags: - generated_from_trainer base_model: bigcode/starcoderbase-1b model-index: - name: peft-lora-starcoder1B-v2-personal-copilot-A100-40GB-yfw-from-yjx results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # peft-lora-starcoder1B-v2-personal-copilot-A100-40GB-yfw-from-yjx This model is a fine-tuned version of [bigcode/starcoderbase-1b](https://huggingface.co/bigcode/starcoderbase-1b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9260 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 30 - training_steps: 2000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.4846 | 0.05 | 100 | 0.4749 | | 0.4216 | 0.1 | 200 | 0.4329 | | 0.39 | 0.15 | 300 | 0.4452 | | 0.3364 | 0.2 | 400 | 0.5184 | | 0.2917 | 0.25 | 500 | 0.5963 | | 0.2736 | 0.3 | 600 | 0.6457 | | 0.2636 | 0.35 | 700 | 0.6698 | | 0.2512 | 0.4 | 800 | 0.7002 | | 0.2384 | 0.45 | 900 | 0.7437 | | 0.2253 | 0.5 | 1000 | 0.7726 | | 0.2132 | 0.55 | 1100 | 0.8081 | | 0.2041 | 0.6 | 1200 | 0.8356 | | 0.197 | 0.65 | 1300 | 0.8593 | | 0.192 | 0.7 | 1400 | 0.8862 | | 0.187 | 0.75 | 1500 | 0.8862 | | 0.1829 | 0.8 | 1600 | 0.9074 | | 0.1817 | 0.85 | 1700 | 0.9207 | | 0.179 | 0.9 | 1800 | 0.9225 | | 0.1787 | 0.95 | 1900 | 0.9243 | | 0.1779 | 1.0 | 2000 | 0.9260 | ### Framework versions - PEFT 0.7.1 - Transformers 4.37.0 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
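The card above leaves usage at "More information needed"; a minimal loading sketch for a LoRA adapter like this one, assuming the repo follows the standard PEFT adapter layout, might look like:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "bigcode/starcoderbase-1b"
adapter_id = "scnuyjx/peft-lora-starcoder1B-v2-personal-copilot-A100-40GB-yfw-from-yjx"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```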
ibivibiv/temp_tuned_mistral2
ibivibiv
2024-01-28T15:40:31Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T06:00:13Z
--- license: apache-2.0 language: - en library_name: transformers --- This is a fine-tuned Mistral uploaded for use in a MoE merge. I'll add more info later; this is NOT from a contaminated dataset. It is just a dataset from here on Hugging Face run on a Mistral, nothing more.
t0r0id/mistral-7B-ft-prompt_prediction
t0r0id
2024-01-28T15:32:33Z
277
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-01-25T08:23:23Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: mistralai/Mistral-7B-v0.1 model-index: - name: mistral-7B-ft-prompt_prediction results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral-7B-ft-prompt_prediction This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4992 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 3 - total_train_batch_size: 24 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.531 | 0.6 | 5 | 1.4992 | ### Framework versions - PEFT 0.7.1 - Transformers 4.37.1 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
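For an adapter like this, PEFT can also fold the LoRA weights into the base model for standalone deployment; a sketch, assuming the adapter config points at mistralai/Mistral-7B-v0.1 as the card states:

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "t0r0id/mistral-7B-ft-prompt_prediction"

# AutoPeftModelForCausalLM reads the base model id from the adapter config.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, torch_dtype=torch.bfloat16)
model = model.merge_and_unload()  # merges the LoRA deltas into the base weights

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
model.save_pretrained("mistral-7b-prompt-prediction-merged")
tokenizer.save_pretrained("mistral-7b-prompt-prediction-merged")
```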
adalib/torchrec-data-gpt-neo-1.3B-prefix
adalib
2024-01-28T15:31:08Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:EleutherAI/gpt-neo-1.3B", "base_model:adapter:EleutherAI/gpt-neo-1.3B", "region:us" ]
null
2024-01-28T12:07:07Z
--- library_name: peft base_model: EleutherAI/gpt-neo-1.3B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
MaziyarPanahi/Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-28T15:16:50Z
49
1
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "teknium/Mistral-Trismegistus-7B", "pytorch", "mistral-7b", "instruct", "finetune", "gpt4", "synthetic data", "distillation", "en", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "base_model:MaziyarPanahi/Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-28T15:05:45Z
--- license: apache-2.0 tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - Safetensors - text-generation-inference - merge - 7b - mistralai/Mistral-7B-Instruct-v0.1 - teknium/Mistral-Trismegistus-7B - pytorch - mistral-7b - instruct - finetune - gpt4 - synthetic data - distillation - en - base_model:mistralai/Mistral-7B-v0.1 - license:apache-2.0 - autotrain_compatible - endpoints_compatible - region:us model_name: Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1) ## Description [MaziyarPanahi/Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. 
Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1-GGUF Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` </details> <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download MaziyarPanahi/Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1-GGUF Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir .
--local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant" ``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama( model_path="./Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
Amey91/test_1
Amey91
2024-01-28T15:14:10Z
175
0
transformers
[ "transformers", "safetensors", "m2m_100", "text2text-generation", "generated_from_trainer", "base_model:facebook/m2m100_418M", "base_model:finetune:facebook/m2m100_418M", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-27T15:16:25Z
--- license: mit base_model: facebook/m2m100_418M tags: - generated_from_trainer model-index: - name: test_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_1 This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 10.1633 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.37.1 - Pytorch 2.1.2+cpu - Datasets 2.16.1 - Tokenizers 0.15.1
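The card omits usage; since the base is facebook/m2m100_418M, inference presumably follows the standard M2M100 translation pattern (the language codes below are illustrative, not taken from the card):

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_id = "Amey91/test_1"
tokenizer = M2M100Tokenizer.from_pretrained(model_id)
model = M2M100ForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en"  # illustrative source language
encoded = tokenizer("Life is like a box of chocolates.", return_tensors="pt")
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("hi"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```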
ZiHDeng/peft-lora-starcoder1B-Instruction-ny8
ZiHDeng
2024-01-28T15:13:15Z
4
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:bigcode/starcoderbase-1b", "base_model:adapter:bigcode/starcoderbase-1b", "license:bigcode-openrail-m", "region:us" ]
null
2024-01-24T09:08:12Z
--- license: bigcode-openrail-m library_name: peft tags: - generated_from_trainer base_model: bigcode/starcoderbase-1b model-index: - name: peft-lora-starcoder1B-Instruction-ny8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # peft-lora-starcoder1B-Instruction-ny8 This model is a fine-tuned version of [bigcode/starcoderbase-1b](https://huggingface.co/bigcode/starcoderbase-1b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7359 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 30 - training_steps: 2000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.2429 | 0.05 | 100 | 0.2525 | | 0.2099 | 0.1 | 200 | 0.2812 | | 0.0957 | 0.15 | 300 | 0.4394 | | 0.0277 | 0.2 | 400 | 0.5758 | | 0.015 | 0.25 | 500 | 0.6307 | | 0.0144 | 0.3 | 600 | 0.6582 | | 0.0122 | 0.35 | 700 | 0.6811 | | 0.0105 | 0.4 | 800 | 0.6984 | | 0.0116 | 0.45 | 900 | 0.7030 | | 0.0101 | 0.5 | 1000 | 0.7078 | | 0.0097 | 0.55 | 1100 | 0.7047 | | 0.0091 | 0.6 | 1200 | 0.7144 | | 0.0087 | 0.65 | 1300 | 0.7196 | | 0.0075 | 0.7 | 1400 | 0.7318 | | 0.0082 | 0.75 | 1500 | 0.7242 | | 0.008 | 0.8 | 1600 | 0.7289 | | 0.0078 | 0.85 | 1700 | 0.7322 | | 0.0074 | 0.9 | 1800 | 0.7398 | | 0.0075 | 0.95 | 1900 | 0.7349 | | 0.0073 | 1.0 | 2000 | 0.7359 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
graizelle/pink-emo-rmx
graizelle
2024-01-28T15:12:03Z
18
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "safetensors", "template:sd-lora", "en", "base_model:stablediffusionapi/chilloutmixsf", "base_model:adapter:stablediffusionapi/chilloutmixsf", "license:openrail++", "region:us" ]
text-to-image
2024-01-20T19:45:08Z
--- library_name: diffusers license: openrail++ language: - en base_model: stablediffusionapi/chilloutmixsf tags: - text-to-image - stable-diffusion - lora - safetensors - diffusers - template:sd-lora inference: false widget: - text: >- 1girl, pink-emo, piercings, septum_ring, tattoos parameter: negative_prompt: >- lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry width=512, height=910, guidance_scale=4, num_inference_steps=40 example_title: 1girl output: - text: '1girl, pink-emo, piercings, septum_ring, tattoos' parameters: negative_prompt: worse quality output: url: images/pinkemo-babe.jpg - text: '1girl, pink-emo, piercings, septum_ring, tattoos' output: url: images/pnkemo4.jpeg - text: '1girl, pink-emo, piercings, septum_ring, tattoos' output: url: images/pnkemo5.jpeg - text: '1girl, pink-emo, piercings, septum_ring, tattoos' output: url: images/pinkemo2.jpeg - text: '1girl, pink-emo, piercings, septum_ring, tattoos' output: url: images/pinkemo3.jpeg - text: '1girl, pink-emo, piercings, septum_ring, tattoos' output: url: images/pinkemo-card.jpeg --- # Pink Emo Remix <Gallery /> ## Model description Remix of Pink Emo LoRA. Trained on 111 images of alt women. Punk, Emo, Goth, Alt, Tattoos, Piercings. ## Trigger words You should use `pink-emo` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/graizelle/pink-emo-rmx/tree/main) them in the Files & versions tab.
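The card gives the trigger word but no loading code; a sketch with diffusers, assuming the repo's Safetensors file is a standard LoRA on top of the stated base model:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stablediffusionapi/chilloutmixsf", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("graizelle/pink-emo-rmx")  # the LoRA from this repo

image = pipe(
    "1girl, pink-emo, piercings, septum_ring, tattoos",  # uses the pink-emo trigger word
    negative_prompt="lowres, bad anatomy, worst quality, low quality",
    width=512, height=910, guidance_scale=4, num_inference_steps=40,
).images[0]
image.save("pink_emo.png")
```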
JKuang96/poca-SoccerTwos
JKuang96
2024-01-28T14:59:54Z
51
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2024-01-28T14:47:02Z
--- library_name: ml-agents tags: - SoccerTwos - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: JKuang96/poca-SoccerTwos 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
neovalle/H4rmoniousBreezeDPO
neovalle
2024-01-28T14:52:39Z
56
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "mistral", "text-generation", "conversational", "en", "dataset:neovalle/H4rmony_dpo", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-30T16:58:29Z
--- tags: - text-generation license: mit datasets: - neovalle/H4rmony_dpo language: - en --- # Model Card for Model neovalle/H4rmoniousBreezeDPO ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64aac16fd4a402e8dce11ebe/tD8ROHaejO5X3mza1HtcV.png) ## Model Details ### Model Description This model is a version of HuggingFaceH4/zephyr-7b-beta fine-tuned via DPO, using the H4rmony_dpo dataset, which aims to better align the model with ecological values through the use of ecolinguistics principles. - **Developed by:** Jorge Vallego - **Funded by:** Neovalle Ltd. - **Shared by:** airesearch@neovalle.co.uk - **Model type:** mistral - **Language(s) (NLP):** Primarily English - **License:** MIT - **Finetuned from model:** HuggingFaceH4/zephyr-7b-beta ## Uses Intended as a PoC to show the effects of the H4rmony_dpo dataset with DPO fine-tuning. ### Direct Use For testing purposes, to gain insight in order to help with the continuous improvement of the H4rmony_dpo dataset. ### Downstream Use Its direct use in applications is not recommended, as this model is under testing for a specific task only (ecological alignment). ### Out-of-Scope Use Not meant to be used other than for testing and evaluation of the H4rmony dataset and ecological alignment. ## Bias, Risks, and Limitations This model might produce biased completions already existing in the base model, and others unintentionally introduced during fine-tuning. ## How to Get Started with the Model It can be loaded and run in a Colab instance with High RAM. ## Training Details Trained using DPO. ### Training Data H4rmony Dataset - https://huggingface.co/datasets/neovalle/H4rmony_dpo
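The card says the model can be loaded in a high-RAM Colab but gives no code; since the base is HuggingFaceH4/zephyr-7b-beta, a chat-template sketch along the usual transformers lines:

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="neovalle/H4rmoniousBreezeDPO",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful, ecologically aware assistant."},
    {"role": "user", "content": "Why do wetlands matter?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])
```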
chenhaodev/yi-34b-merge-slerp-v1
chenhaodev
2024-01-28T14:46:39Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "moe", "mergekit", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-25T06:42:00Z
--- license: apache-2.0 tags: - moe - mergekit language: - en metrics: - accuracy pipeline_tag: text-generation --- ## 🧩 Configuration ```yaml slices: - sources: - model: SUSTech/SUS-Chat-34B layer_range: [0, 60] - model: jondurbin/bagel-dpo-34b-v0.2 layer_range: [0, 60] merge_method: slerp base_model: jondurbin/bagel-dpo-34b-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers bitsandbytes accelerate from transformers import AutoTokenizer import transformers import torch model = "chenhugging/Yi-2x34B-Merge-Slerp" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True}, ) messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Shaleen123/medical-yi-6b
Shaleen123
2024-01-28T14:45:21Z
61
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-01-28T14:42:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
badokorach/afriqa_afroxlmr_eng_280124
badokorach
2024-01-28T14:37:06Z
2
0
transformers
[ "transformers", "tf", "xlm-roberta", "question-answering", "generated_from_keras_callback", "base_model:badokorach/afriqa_afroxlmr_squad_v2_060124", "base_model:finetune:badokorach/afriqa_afroxlmr_squad_v2_060124", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2024-01-28T11:03:49Z
--- license: mit base_model: badokorach/afriqa_afroxlmr_squad_v2_060124 tags: - generated_from_keras_callback model-index: - name: badokorach/afriqa_afroxlmr_eng_280124 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # badokorach/afriqa_afroxlmr_eng_280124 This model is a fine-tuned version of [badokorach/afriqa_afroxlmr_squad_v2_060124](https://huggingface.co/badokorach/afriqa_afroxlmr_squad_v2_060124) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0014 - Validation Loss: 0.0 - Epoch: 19 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 3040, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.02} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.0394 | 0.0 | 0 | | 0.0230 | 0.0 | 1 | | 0.0260 | 0.0 | 2 | | 0.0250 | 0.0 | 3 | | 0.0337 | 0.0 | 4 | | 0.0621 | 0.0 | 5 | | 0.0089 | 0.0 | 6 | | 0.0061 | 0.0 | 7 | | 0.0032 | 0.0 | 8 | | 0.0046 | 0.0 | 9 | | 0.0044 | 0.0 | 10 | | 0.0048 | 0.0 | 11 | | 0.0007 | 0.0 | 12 | | 0.0031 | 0.0 | 13 | | 0.0008 | 0.0 | 14 | | 0.0049 | 0.0 | 15 | | 0.0023 | 0.0 | 16 | | 0.0006 | 0.0 | 17 | | 0.0017 | 0.0 | 18 | | 0.0014 | 0.0 | 19 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.15.0 - Datasets 2.16.1 - Tokenizers 0.15.1
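This is a TensorFlow checkpoint (note the `tf` tag); a minimal inference sketch with the transformers question-answering pipeline, assuming the repo loads cleanly under the TF framework:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="badokorach/afriqa_afroxlmr_eng_280124",
    framework="tf",  # the repo ships TensorFlow weights
)

result = qa(
    question="What was the model fine-tuned on?",
    context="The model was fine-tuned on an African-language question answering dataset.",
)
print(result["answer"], result["score"])
```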
LoneStriker/HuginnV5.5-12.6B-8.0bpw-h8-exl2
LoneStriker
2024-01-28T14:32:24Z
6
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T14:26:55Z
--- license: cc-by-4.0 --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6303c6da4ec2dfa82a558005/keR4DZrn3tVVTMPxBrTyS.png) ### Huginn V5.5 Experimental frankenmerge of multiple 7B models, combined with the DARE-TIES method. Including: ### Part 1: * https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1 * https://huggingface.co/maywell/Synatra-7B-v0.3-RP ### Part 2: * https://huggingface.co/mlabonne/NeuralBeagle14-7B * https://huggingface.co/openaccess-ai-collective/DPOpenHermes-7B-v2 ### Part 3: Merged part 1 and part 2 together. ### Part 4: Took the first 26 layers of https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2 and added them before the 32 layers of part 3 to make the final model. ### Prompting and scope: Seems to work well with Alpaca format for instructions, and ChatML format for normal conversation. Scores just under 73 points on the leaderboard, around 10 points higher than any Huginn model before it. Huginn primarily excels at conversational and creative tasks: story writing, roleplaying, and helping writers with creative work (Huginn comes up with creative ideas better than most other models).
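The card recommends Alpaca format for instructions and ChatML for conversation; for reference, sketches of the two community-standard templates (these exact strings are the common conventions, not something this card specifies):

```python
alpaca_prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

chatml_prompt = (
    "<|im_start|>system\n{system_message}<|im_end|>\n"
    "<|im_start|>user\n{prompt}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```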
netrunner-exe/Insight-Swap-models
netrunner-exe
2024-01-28T14:26:34Z
0
8
null
[ "onnx", "region:us" ]
null
2023-06-03T16:38:57Z
Hello, my name is Alex. You can find my GitHub profile [here](https://github.com/netrunner-exe). All models in this repository are intended for non-commercial use, academic research, and educational purposes only. By using this repository, you agree to take responsibility for not applying its content in unethical scenarios and to use it only in accordance with the laws of your country. The repository owner is absolved of any liability for potential legal or ethical violations on the part of the user. By using the content of this repository, you also agree to the terms of use and licensing agreements of the authors of the original models whose models are used in this repository. Thank you for your understanding and responsible use.
SteelStorage/VerA-Etheria-55b
SteelStorage
2024-01-28T14:24:33Z
8
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "Etheria", "base_model:brucethemoose/Yi-34B-200K-DARE-megamerge-v8", "base_model:finetune:brucethemoose/Yi-34B-200K-DARE-megamerge-v8", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-25T10:27:53Z
--- tags: - merge - mergekit - Etheria base_model: - brucethemoose/Yi-34B-200K-DARE-megamerge-v8 license: apache-2.0 --- # VerA-Etheria-55b ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/UrQv8fprq0VAjWcH5tx16.png) An attempt to make a functional Goliath-style merge from a single Yi-34B-200K model, merged to make an [Etheria] 55b-200k model. This is Version A (VerA), a single-model passthrough merge. # Roadmap: Depending on quality, I might make the other version private, then generate a sacrificial 55b and perform a 55b DARE-TIES or SLERP merge. 1: If the dual-model merge performs well, I will make a direct inverse of the config, then merge. 2: If the single model performs well, I will generate a 55b of the most performant model, then either SLERP or DARE-TIES merge. 3: If both models perform well, I will complete both 1 & 2, then change the naming scheme to match each of the new models. ## 🧩 Configuration ```yaml dtype: bfloat16 slices: - sources: - model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8 layer_range: [0, 14] - sources: - model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8 layer_range: [7, 21] - sources: - model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8 layer_range: [15, 29] - sources: - model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8 layer_range: [22, 36] - sources: - model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8 layer_range: [30, 44] - sources: - model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8 layer_range: [37, 51] - sources: - model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8 layer_range: [45, 59] merge_method: passthrough ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "steelskull/VA-Etheria-55b" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
suhas-kr/ppo-LunarLander-v2
suhas-kr
2024-01-28T14:23:28Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-28T14:23:09Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 257.61 +/- 18.86 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
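The usage section above is left as a TODO; a sketch of the standard huggingface_sb3 loading pattern (the checkpoint filename is an assumption based on the usual `<model-name>.zip` convention; check the repo's files for the real name):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename assumed; verify against the repo's Files tab.
checkpoint = load_from_hub(
    repo_id="suhas-kr/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```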
LoneStriker/HuginnV5.5-12.6B-5.0bpw-h6-exl2
LoneStriker
2024-01-28T14:22:46Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T14:19:06Z
--- license: cc-by-4.0 --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6303c6da4ec2dfa82a558005/keR4DZrn3tVVTMPxBrTyS.png) ### Huginn V5.5 Experimental frankenmerge of multiple 7B models, combined with the DARE-TIES method. Including: ### Part 1: * https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1 * https://huggingface.co/maywell/Synatra-7B-v0.3-RP ### Part 2: * https://huggingface.co/mlabonne/NeuralBeagle14-7B * https://huggingface.co/openaccess-ai-collective/DPOpenHermes-7B-v2 ### Part 3: Merged part 1 and part 2 together. ### Part 4: Took the first 26 layers of https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2 and added them before the 32 layers of part 3 to make the final model. ### Prompting and scope: Seems to work well with Alpaca format for instructions, and ChatML format for normal conversation. Scores just under 73 points on the leaderboard, around 10 points higher than any Huginn model before it. Huginn primarily excels at conversational and creative tasks: story writing, roleplaying, and helping writers with creative work (Huginn comes up with creative ideas better than most other models).
SteelStorage/Aurora-10.7B
SteelStorage
2024-01-28T14:21:59Z
6
4
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Aurora", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-21T12:00:56Z
--- license: apache-2.0 tags: - Aurora --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/CHx9TxNMX79pEm2WO2jXg.png) # Aurora-10.7b_Base Aurora-10.7b_Base is a merge of the following models, creating a 10.7b base model that can be trained: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) ## Merged Evals: (Has Not Been Finetuned) Aurora-10.7b_Base * Avg: 63.98 * ARC: 62.88 * HellaSwag: 83.99 * MMLU: 60.24 * T-QA: 67.84 * Winogrande: 76.4 * GSM8K: 32.52 ## (OG) Donated Evals: Mistral-7b-v0.2 * Avg: 65.71 * ARC: 63.14 * HellaSwag: 84.88 * MMLU: 60.78 * T-QA: 68.26 * Winogrande: 77.19 * GSM8K: 40.03 ## 🧩 Configuration ``` slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 24] - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [8, 32] merge_method: passthrough dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Steelskull/Aurora_base_test" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
LoneStriker/HuginnV5.5-12.6B-3.0bpw-h6-exl2
LoneStriker
2024-01-28T14:16:03Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T14:13:43Z
--- license: cc-by-4.0 --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6303c6da4ec2dfa82a558005/keR4DZrn3tVVTMPxBrTyS.png) ### Huginn V5.5 Experimental frankenmerge of multiple 7B models, combined with the DARE-TIES method. Including: ### Part 1: * https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1 * https://huggingface.co/maywell/Synatra-7B-v0.3-RP ### Part 2: * https://huggingface.co/mlabonne/NeuralBeagle14-7B * https://huggingface.co/openaccess-ai-collective/DPOpenHermes-7B-v2 ### Part 3: Merged part 1 and part 2 together. ### Part 4: Took the first 26 layers of https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2 and added them before the 32 layers of part 3 to make the final model. ### Prompting and scope: Seems to work well with Alpaca format for instructions, and ChatML format for normal conversation. Scores just under 73 points on the leaderboard, around 10 points higher than any Huginn model before it. Huginn primarily excels at conversational and creative tasks: story writing, roleplaying, and helping writers with creative work (Huginn comes up with creative ideas better than most other models).
dvilasuero/CapMistral-7B-Instruct
dvilasuero
2024-01-28T14:14:43Z
7
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T14:11:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RaniAimlTest/multi-user-chat-openchat-3.5-0106-completions-only
RaniAimlTest
2024-01-28T14:08:56Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openchat/openchat-3.5-0106", "base_model:adapter:openchat/openchat-3.5-0106", "region:us" ]
null
2024-01-28T13:11:29Z
--- library_name: peft base_model: openchat/openchat-3.5-0106 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
gardner/TinyLlama-1.1B-SlimOrca
gardner
2024-01-28T14:08:36Z
4
0
peft
[ "peft", "pytorch", "safetensors", "llama", "axolotl", "generated_from_trainer", "en", "dataset:Open-Orca/SlimOrca-Dedup", "base_model:gardner/TinyLlama-1.1B-Instruct-3T", "base_model:adapter:gardner/TinyLlama-1.1B-Instruct-3T", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2024-01-28T10:40:19Z
--- license: apache-2.0 library_name: peft tags: - axolotl - generated_from_trainer base_model: gardner/TinyLlama-1.1B-Instruct-3T model-index: - name: TinyLlama-1.1B-SlimOrca results: [] datasets: - Open-Orca/SlimOrca-Dedup language: - en --- # TinyLlama-1.1B-SlimOrca This model is a fine-tuned version of [gardner/TinyLlama-1.1B-Instruct-3T](https://huggingface.co/gardner/TinyLlama-1.1B-Instruct-3T) on the [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup) dataset. It achieves the following results on the evaluation set: - Loss: 0.5636 ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/638581711769b7c4b10f0523/OSBJe4jBWYOnWWTpUpaF_.jpeg) [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: gardner/TinyLlama-1.1B-Instruct-3T model_type: LlamaForCausalLM tokenizer_type: LlamaTokenizer is_llama_derived_model: true load_in_8bit: true load_in_4bit: false strict: false datasets: - path: Open-Orca/SlimOrca-Dedup type: sharegpt split: train dataset_prepared_path: ./dsprepare/Open-Orca/SlimOrca-Dedup val_set_size: 0.05 output_dir: ./tinyllama-1.1b-slimorca hub_model_id: gardner/TinyLlama-1.1B-SlimOrca sequence_len: 4096 sample_packing: true pad_to_sequence_len: true adapter: lora lora_model_dir: lora_r: 32 lora_alpha: 16 lora_dropout: 0.05 lora_target_linear: true lora_fan_in_fan_out: wandb_project: tinyllama wandb_entity: gardner wandb_name: tinyllama-slimorca gradient_accumulation_steps: 4 micro_batch_size: 2 num_epochs: 4 optimizer: adamw_bnb_8bit lr_scheduler: cosine learning_rate: 0.0002 train_on_inputs: false group_by_length: false bf16: true fp16: false tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 evals_per_epoch: 4 saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: ``` </details><br> ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2902 | 0.0 | 1 | 0.9116 | | 1.0653 | 0.25 | 1126 | 0.6458 | | 1.0279 | 0.5 | 2252 | 0.6187 | | 0.8918 | 0.75 | 3378 | 0.6042 | | 0.9362 | 1.0 | 4504 | 0.5924 | | 0.8138 | 1.23 | 5630 | 0.5863 | | 0.9669 | 1.48 | 6756 | 0.5814 | | 1.019 | 1.73 | 7882 | 0.5742 | | 0.9232 | 1.98 | 9008 | 0.5695 | | 0.8507 | 2.22 | 10134 | 0.5700 | | 0.7542 | 2.47 | 11260 | 0.5662 | | 0.8325 | 2.72 | 12386 | 0.5639 | | 0.7913 | 2.97 | 13512 | 0.5617 | | 0.8372 | 3.2 | 14638 | 0.5648 | | 0.8984 | 3.45 | 15764 | 0.5638 | | 0.7898 | 3.7 | 16890 | 0.5636 | ### Framework versions - PEFT 0.7.1 - Transformers 4.37.0 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
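Since this repo ships a PEFT LoRA adapter (per the `peft` metadata above), a minimal inference sketch follows. It assumes the repo contains a standard `adapter_config.json` pointing at the base model; the prompt template is only an illustration, since the adapter was trained on ShareGPT-format conversations — verify the exact format before relying on it.

```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Reads adapter_config.json, loads the base model it names, and applies the LoRA weights.
model = AutoPeftModelForCausalLM.from_pretrained("gardner/TinyLlama-1.1B-SlimOrca")
tokenizer = AutoTokenizer.from_pretrained("gardner/TinyLlama-1.1B-Instruct-3T")

# Illustrative prompt only; the training data is ShareGPT-style chat, so check the template.
inputs = tokenizer("USER: Name three facts about llamas.\nASSISTANT:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```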
suncy13/foot-finetune-28-jan
suncy13
2024-01-28T14:03:04Z
174
0
transformers
[ "transformers", "tensorboard", "safetensors", "segformer", "vision", "image-segmentation", "generated_from_trainer", "base_model:nvidia/mit-b0", "base_model:finetune:nvidia/mit-b0", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
2024-01-28T14:01:19Z
--- license: other base_model: nvidia/mit-b0 tags: - vision - image-segmentation - generated_from_trainer model-index: - name: foot-finetune-28-jan results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # foot-finetune-28-jan This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the suncy13/FootImg dataset. It achieves the following results on the evaluation set: - Loss: 0.1107 - Mean Iou: 0.0 - Mean Accuracy: nan - Overall Accuracy: nan - Accuracy Foot: nan - Iou Foot: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Foot | Iou Foot | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------:|:--------:| | 0.356 | 2.0 | 20 | 0.5295 | 0.0 | nan | nan | nan | 0.0 | | 0.2927 | 4.0 | 40 | 0.3244 | 0.0 | nan | nan | nan | 0.0 | | 0.2511 | 6.0 | 60 | 0.2386 | 0.0 | nan | nan | nan | 0.0 | | 0.2458 | 8.0 | 80 | 0.2305 | 0.0 | nan | nan | nan | 0.0 | | 0.2152 | 10.0 | 100 | 0.2065 | 0.0 | nan | nan | nan | 0.0 | | 0.1996 | 12.0 | 120 | 0.1905 | 0.0 | nan | nan | nan | 0.0 | | 0.1878 | 14.0 | 140 | 0.1823 | 0.0 | nan | nan | nan | 0.0 | | 0.1902 | 16.0 | 160 | 0.1743 | 0.0 | nan | nan | nan | 0.0 | | 0.1646 | 18.0 | 180 | 0.1572 | 0.0 | nan | nan | nan | 0.0 | | 0.1512 | 20.0 | 200 | 0.1552 | 0.0 | nan | nan | nan | 0.0 | | 0.1438 | 22.0 | 220 | 0.1415 | 0.0 | nan | nan | nan | 0.0 | | 0.1355 | 24.0 | 240 | 0.1424 | 0.0 | nan | nan | nan | 0.0 | | 0.1342 | 26.0 | 260 | 0.1322 | 0.0 | nan | nan | nan | 0.0 | | 0.1355 | 28.0 | 280 | 0.1307 | 0.0 | nan | nan | nan | 0.0 | | 0.1198 | 30.0 | 300 | 0.1238 | 0.0 | nan | nan | nan | 0.0 | | 0.1179 | 32.0 | 320 | 0.1229 | 0.0 | nan | nan | nan | 0.0 | | 0.1108 | 34.0 | 340 | 0.1196 | 0.0 | nan | nan | nan | 0.0 | | 0.1145 | 36.0 | 360 | 0.1182 | 0.0 | nan | nan | nan | 0.0 | | 0.1097 | 38.0 | 380 | 0.1168 | 0.0 | nan | nan | nan | 0.0 | | 0.1199 | 40.0 | 400 | 0.1164 | 0.0 | nan | nan | nan | 0.0 | | 0.1185 | 42.0 | 420 | 0.1138 | 0.0 | nan | nan | nan | 0.0 | | 0.1026 | 44.0 | 440 | 0.1115 | 0.0 | nan | nan | nan | 0.0 | | 0.1039 | 46.0 | 460 | 0.1100 | 0.0 | nan | nan | nan | 0.0 | | 0.1091 | 48.0 | 480 | 0.1107 | 0.0 | nan | nan | nan | 0.0 | | 0.1074 | 50.0 | 500 | 0.1107 | 0.0 | nan | nan | nan | 0.0 | ### Framework versions - Transformers 4.37.1 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
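A minimal segmentation sketch for this checkpoint is below, assuming the repo includes the usual SegFormer preprocessor config; `foot.jpg` is a placeholder input file.

```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

repo = "suncy13/foot-finetune-28-jan"
processor = SegformerImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("foot.jpg")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, num_labels, height/4, width/4)
mask = logits.argmax(dim=1)[0]  # per-pixel class ids
```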
MaziyarPanahi/SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-28T14:00:58Z
49
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "migtissera/SynthIA-7B-v1.5", "pytorch", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "base_model:MaziyarPanahi/SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-28T13:45:01Z
--- license: apache-2.0 tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - Safetensors - text-generation-inference - merge - 7b - mistralai/Mistral-7B-Instruct-v0.1 - migtissera/SynthIA-7B-v1.5 - pytorch - en - license:apache-2.0 - autotrain_compatible - endpoints_compatible - region:us model_name: SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1) ## Description [MaziyarPanahi/SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [GPT4All](https://gpt4all.io/index.html), a free and open source locally running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw). * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw. * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw. ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1-GGUF SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` </details> <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download MaziyarPanahi/SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1-GGUF SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant" ``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama( model_path="./SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
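For concreteness, here is a minimal LangChain sketch along the lines of the first guide; the `langchain_community` import path and the quant filename are assumptions — adjust both to your installed versions and the repo's file list.

```python
from langchain_community.llms import LlamaCpp  # pip install langchain-community llama-cpp-python

llm = LlamaCpp(
    model_path="./SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf",  # download the file first
    n_ctx=32768,      # max sequence length, as in the llama.cpp example above
    n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
    temperature=0.7,
)
print(llm.invoke("Explain GGUF quantization in one paragraph."))
```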
Bharkavi16/blue-parrot
Bharkavi16
2024-01-28T13:58:20Z
0
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-01-28T13:54:29Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### Blue-Parrot Dreambooth model trained by Bharkavi16 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: GoX19932gASPMB Sample pictures of this concept: ![0](https://huggingface.co/Bharkavi16/blue-parrot/resolve/main/sample_images/rav_(2).jpeg) ![1](https://huggingface.co/Bharkavi16/blue-parrot/resolve/main/sample_images/rav_(4).jpeg) ![2](https://huggingface.co/Bharkavi16/blue-parrot/resolve/main/sample_images/rav_(3).jpeg)
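The card ships no usage snippet, so here is a minimal diffusers sketch (the repo's tags mark it as a `StableDiffusionPipeline`); treating "blue-parrot" as the instance prompt is an assumption taken from the repo name, not stated in the card.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth checkpoint from this repo.
pipe = StableDiffusionPipeline.from_pretrained("Bharkavi16/blue-parrot", torch_dtype=torch.float16)
pipe.to("cuda")

# "blue-parrot" as the concept token is inferred from the repo name.
image = pipe("a photo of blue-parrot perched on a branch, highly detailed").images[0]
image.save("blue_parrot.png")
```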
huggingfaceprofile123/my-pet-dog
huggingfaceprofile123
2024-01-28T13:52:56Z
0
0
null
[ "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-01-28T13:50:49Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Dog Dreambooth model trained by huggingfaceprofile123 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: GoX19932gAS Sample pictures of this concept: ![0](https://huggingface.co/huggingfaceprofile123/my-pet-dog/resolve/main/sample_images/download_(2).jpg) ![1](https://huggingface.co/huggingfaceprofile123/my-pet-dog/resolve/main/sample_images/download_(3).jpg) ![2](https://huggingface.co/huggingfaceprofile123/my-pet-dog/resolve/main/sample_images/download.jpg) ![3](https://huggingface.co/huggingfaceprofile123/my-pet-dog/resolve/main/sample_images/download_(4).jpg) ![4](https://huggingface.co/huggingfaceprofile123/my-pet-dog/resolve/main/sample_images/download_(1).jpg)
tempdeltavalue/ppo-LunarLander-v2
tempdeltavalue
2024-01-28T13:48:37Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-28T13:48:11Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 254.43 +/- 21.29 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
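A minimal sketch of the TODO above, using `load_from_hub` as the card's imports suggest; the checkpoint filename is an assumption — check the repo's file list before running.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; inspect the repo for the actual .zip name.
checkpoint = load_from_hub("tempdeltavalue/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```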
dudikoff/seiko
dudikoff
2024-01-28T13:40:48Z
3
1
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-01-28T13:37:16Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### seiko Dreambooth model trained by dudikoff with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
adalib/colossalai-data-gpt-neo-1.3B-prefix
adalib
2024-01-28T13:37:18Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:EleutherAI/gpt-neo-1.3B", "base_model:adapter:EleutherAI/gpt-neo-1.3B", "region:us" ]
null
2024-01-28T13:37:14Z
--- library_name: peft base_model: EleutherAI/gpt-neo-1.3B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
MaziyarPanahi/Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-28T13:36:19Z
129
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "maywell/Mini_Synatra_SFT", "pytorch", "ko", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us", "license:apache-2.0", "base_model:MaziyarPanahi/Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-28T13:25:30Z
--- license: apache-2.0 tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - Safetensors - text-generation-inference - merge - 7b - mistralai/Mistral-7B-Instruct-v0.1 - maywell/Mini_Synatra_SFT - pytorch - ko - license:cc-by-sa-4.0 - autotrain_compatible - endpoints_compatible - region:us - license:apache-2.0 model_name: Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1) ## Description [MaziyarPanahi/Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [GPT4All](https://gpt4all.io/index.html), a free and open source locally running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw). * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw. * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw. ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1-GGUF Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` </details> <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download MaziyarPanahi/Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1-GGUF Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant" ``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama( model_path="./Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./Mini_Synatra_SFT-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
ntc-ai/SDXL-LoRA-slider.playing-a-musical-instrument
ntc-ai
2024-01-28T13:30:18Z
41
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
text-to-image
2024-01-28T13:30:14Z
--- language: - en thumbnail: "images/evaluate/playing a musical instrument.../playing a musical instrument_17_3.0.png" widget: - text: playing a musical instrument output: url: images/playing a musical instrument_17_3.0.png - text: playing a musical instrument output: url: images/playing a musical instrument_19_3.0.png - text: playing a musical instrument output: url: images/playing a musical instrument_20_3.0.png - text: playing a musical instrument output: url: images/playing a musical instrument_21_3.0.png - text: playing a musical instrument output: url: images/playing a musical instrument_22_3.0.png tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers license: "mit" inference: false instance_prompt: "playing a musical instrument" base_model: "stabilityai/stable-diffusion-xl-base-1.0" --- # ntcai.xyz slider - playing a musical instrument (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/playing a musical instrument_17_-3.0.png" width=256 height=256 /> | <img src="images/playing a musical instrument_17_0.0.png" width=256 height=256 /> | <img src="images/playing a musical instrument_17_3.0.png" width=256 height=256 /> | | <img src="images/playing a musical instrument_19_-3.0.png" width=256 height=256 /> | <img src="images/playing a musical instrument_19_0.0.png" width=256 height=256 /> | <img src="images/playing a musical instrument_19_3.0.png" width=256 height=256 /> | | <img src="images/playing a musical instrument_20_-3.0.png" width=256 height=256 /> | <img src="images/playing a musical instrument_20_0.0.png" width=256 height=256 /> | <img src="images/playing a musical instrument_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` playing a musical instrument ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.playing-a-musical-instrument', weight_name='playing a musical instrument.safetensors', adapter_name="playing a musical instrument") # Activate the LoRA pipe.set_adapters(["playing a musical instrument"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, playing a musical instrument" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 1140+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. 
Your support on Patreon will allow us to continue developing and refining new models. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
harveymannering/deepseek-coder-6.7b-instruct-finetuned-manimation-v2
harveymannering
2024-01-28T13:22:07Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:deepseek-ai/deepseek-coder-6.7b-instruct", "base_model:finetune:deepseek-ai/deepseek-coder-6.7b-instruct", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T12:33:42Z
--- license: other base_model: deepseek-ai/deepseek-coder-6.7b-instruct tags: - generated_from_trainer model-index: - name: deepseek-coder-6.7b-instruct-finetuned-manimation-v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deepseek-coder-6.7b-instruct-finetuned-manimation-v2 This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3792 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 32 | 0.4282 | | No log | 2.0 | 64 | 0.3936 | | No log | 3.0 | 96 | 0.3792 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
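No usage example is given; a minimal inference sketch follows, assuming the fine-tune keeps the base model's chat template (the Manim-style prompt is inferred from the repo name, not confirmed by the card).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "harveymannering/deepseek-coder-6.7b-instruct-finetuned-manimation-v2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

# Assumes the tokenizer still carries the deepseek-coder-instruct chat template.
messages = [{"role": "user", "content": "Write a Manim scene that animates a growing circle."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```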
marianna13/openhermes-7b-llava-instruct-665k
marianna13
2024-01-28T13:20:24Z
6
0
transformers
[ "transformers", "safetensors", "bakllava", "text-generation", "en", "dataset:liuhaotian/LLaVA-Instruct-150K", "dataset:liuhaotian/LLaVA-Pretrain", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-01-26T09:51:25Z
--- library_name: transformers license: apache-2.0 datasets: - liuhaotian/LLaVA-Instruct-150K - liuhaotian/LLaVA-Pretrain language: - en --- # Model Card for OpenHermes-7B-llava-instruct <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [LAION](https://laion.ai/), [SkunkworksAI](https://huggingface.co/SkunkworksAI) - **Model type:** LLaVA is an open-source chatbot trained by fine-tuning OpenHermes on GPT-generated multimodal instruction-following data. It is an auto-regressive language model, based on the transformer architecture. - **Finetuned from model:** [OpenHermes-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) - **License:** Apache-2.0 ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** [BakLLaVa](https://github.com/SkunkworksAI/BakLLaVA) ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> | model | SQA | POPE | GQA | | --- | --- | --- | --- | | llava-1.5-7b | 67.97% | 85.30% | 61.96% | | openhermes-7b-llava-instruct-665k | 59.64% | 84.60% | 42.39% |
adalib/sqlmodel-data-gpt-neo-1.3B-prefix
adalib
2024-01-28T13:07:32Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:EleutherAI/gpt-neo-1.3B", "base_model:adapter:EleutherAI/gpt-neo-1.3B", "region:us" ]
null
2024-01-28T13:07:27Z
--- library_name: peft base_model: EleutherAI/gpt-neo-1.3B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
Chinxian1121/llama-2-7b-chat-chinxian
Chinxian1121
2024-01-28T13:04:30Z
0
0
peft
[ "peft", "region:us" ]
null
2023-11-29T17:15:30Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
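The quantization settings listed above map directly onto a `BitsAndBytesConfig`; below is a minimal loading sketch. The base model is an assumption inferred from the repo name (the card does not state it), so treat this as illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Reconstruct the 4-bit config reported above (nf4, float16 compute, no double quant).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Assumed base model, inferred from the adapter's name; verify before use.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "Chinxian1121/llama-2-7b-chat-chinxian")
```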
zhanjun520/ppo-LunarLander-v2
zhanjun520
2024-01-28T13:03:51Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-28T12:59:06Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 237.37 +/- 16.42 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo's files):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; adjust it if loading fails.
checkpoint = load_from_hub(repo_id="zhanjun520/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
RaniAimlTest/multi-user-chat-openchat-3.5-0106-full-conversations
RaniAimlTest
2024-01-28T12:56:22Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openchat/openchat-3.5-0106", "base_model:adapter:openchat/openchat-3.5-0106", "region:us" ]
null
2024-01-28T12:56:01Z
--- library_name: peft base_model: openchat/openchat-3.5-0106 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
DHEIVER/FractureVision
DHEIVER
2024-01-28T12:49:54Z
2
0
transformers
[ "transformers", "object_detection", "endpoints_compatible", "region:us" ]
null
2024-01-28T12:42:39Z
# YOLOv8 Object Detection Model Card ## Overview This model is based on YOLOv8, a state-of-the-art object detection algorithm that uses deep learning techniques to detect objects in images. The model was trained on a diverse dataset and can detect a wide range of objects with high accuracy. ## Intended Use This model is intended for object detection tasks in images. It can be used in a variety of applications, including but not limited to: - Autonomous driving systems - Surveillance and security systems - Industrial automation - Robotics - Augmented reality ## Limitations and Biases Although this model performs well in many scenarios, it may run into limitations and biases in certain situations. Some potential limitations and biases include: - Performance may degrade on images with poor lighting conditions or heavy occlusions. - The model may struggle to detect objects that differ significantly from those in the training data. - Like all computer vision models, this model may exhibit biases present in the training data, such as over-representation or under-representation of certain demographic groups. ## Evaluation Metrics The performance of this model can be evaluated with standard object detection metrics, including: - Average Precision (AP) - Mean Average Precision (mAP) - Precision-Recall curves ## Ethical Considerations When deploying this model, it is essential to consider the ethical implications and potential consequences. Some considerations include: - Privacy concerns: Ensure the model is not used for invasive surveillance or to infringe on individuals' privacy rights. - Fairness: Take steps to mitigate biases in the training data and evaluate the model's performance across different demographics. - Safety: Implement safeguards to keep the model from making harmful decisions, especially in safety-critical applications such as autonomous vehicles. ## Model Performance For detailed performance metrics and benchmarks, refer to the accompanying documentation and evaluation results. ## Authors - [Your Name or Organization] ## License This model is provided under the [license](). See the accompanying license file for details. ## Contact For questions or feedback about this model, contact [email@example.com](mailto:email@example.com).
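As a rough illustration of the intended workflow, here is a minimal inference sketch using the `ultralytics` package; the weights filename, input image, and confidence threshold are assumptions, since the card does not specify them.

```python
from ultralytics import YOLO

# Load the detection weights (filename is an assumption; check the repo's files).
model = YOLO("best.pt")

# Run inference on an image and inspect the detected boxes.
results = model("fracture_xray.jpg", conf=0.25)
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)
```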
LoneStriker/Midnight-Rose-70B-v1.0-6.0bpw-h6-exl2
LoneStriker
2024-01-28T12:48:09Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "arxiv:2307.11760", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T12:25:56Z
--- license: llama2 language: - en --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/X3SBrIb.png" alt="MidnightRose" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> ### Overview This model is the result of a DARE TIES merge of [allenai/tulu-2-dpo-70b](https://huggingface.co/allenai/tulu-2-dpo-70b), the popular [lizpreciatior/lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf), and [dreamgen/opus-v0.5-70b](https://huggingface.co/dreamgen/opus-v0.5-70b). I then merged three LoRAs into the resultant blend: * A 50-50 linear merge of [jondurbin/airoboros-l2-70b-2.2.1-peft](https://huggingface.co/jondurbin/airoboros-l2-70b-2.2.1-peft) with [dfurman/Llama-2-70B-Instruct-v0.1-peft](https://huggingface.co/dfurman/Llama-2-70B-Instruct-v0.1) * [nRuaif/fiction.live-Kimiko-V2-70B](https://huggingface.co/nRuaif/fiction.live-Kimiko-V2-70B) Midnight Rose is a successor to Rogue Rose and Aurora Nights and improves upon them both. It wants to produce lengthy output by default and is the best creative writing merge I have produced so far. This model is uncensored. *You are responsible for whatever you do with it.* This model was designed for roleplaying and storytelling and I think it does well at both. It *should* perform well at other tasks, but I haven't tested its capabilities in other areas. ### Sampler Tips I recommend using the new Min-P sampler method with this model. The creator has a great [guide to it on Reddit](https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/). I find this model performs reasonably well at 8192 context but you will likely get better results at 4096 - 6144 context. Experiment with any and all of the settings below, but trust me on a few points: * I think this model performs best with Min-P in a range of 0.6 - 0.8 with temperature around 1.0 - 1.2. * Frequency Penalty set to 0.01 is like adding a dash of salt to the dish. Go higher at your own peril. 0 is fine too, but gosh I like 0.01. If you save the below settings as a .json file, you can import them directly into SillyTavern. ``` { "temp": 1.15, "temperature_last": true, "top_p": 1, "top_k": 0, "top_a": 0, "tfs": 1, "epsilon_cutoff": 0, "eta_cutoff": 0, "typical_p": 1, "min_p": 0.8, "rep_pen": 1.08, "rep_pen_range": 0, "no_repeat_ngram_size": 0, "penalty_alpha": 0, "num_beams": 1, "length_penalty": 1, "min_length": 0, "encoder_rep_pen": 1, "freq_pen": 0.01, "presence_pen": 0, "do_sample": true, "early_stopping": false, "add_bos_token": true, "truncation_length": 2048, "ban_eos_token": false, "skip_special_tokens": true, "streaming": true, "mirostat_mode": 0, "mirostat_tau": 5, "mirostat_eta": 0.1, "guidance_scale": 1, "negative_prompt": "", "grammar_string": "", "banned_tokens": "", "ignore_eos_token_aphrodite": false, "spaces_between_special_tokens_aphrodite": true, "type": "ooba", "legacy_api": false, "sampler_order": [ 6, 0, 1, 3, 4, 2, 5 ], "n": 1, "rep_pen_size": 0, "genamt": 550, "max_length": 4096 } ``` ### Prompting Tips Try the following context template for use in SillyTavern. It might help. If you save the text as a .json file, you can import it directly.
``` { "story_string": "{{#if system}}{{system}}\n{{/if}}\n### START OF CONTEXTUAL INFORMATION ###\n{{#if wiBefore}}\n- World and character info relevant to this roleplay:\n{{wiBefore}}\n{{/if}}\n{{#if description}}\n- {{char}}'s background and persona:\n{{description}}\n{{/if}}\n{{#if mesExamples}}\n- Examples of dialogue between {{user}} and {{char}} showing you how {{char}} talks and behaves:\n{{mesExamples}}\n{{/if}}\n{{#if personality}}{{personality}}\n{{/if}}\n{{#if scenario}}\n- Roleplay scenario:\n{{scenario}}\n{{/if}}\n{{#if wiAfter}}{{wiAfter}}\n{{/if}}\n{{#if persona}}{{persona}}\n{{/if}}\n### END OF CONTEXTUAL INFORMATION ###", "example_separator": "", "chat_start": "### START OF ROLEPLAY LOG ###", "use_stop_strings": false, "always_force_name2": true, "trim_sentences": false, "include_newline": false, "single_line": false, "name": "Midnight Rose Template" } ``` Try the following prompt template as a starting point for your own customizations. The cajoling parts supposedly make LLMs perform better. See [this paper](https://arxiv.org/abs/2307.11760). I recommend the Tulu instruction format but others work too. *This model responds to prompting!* I highly recommend playing with your system prompt to see how it reacts. The example system prompt below contains some instructions related to ERP so remove those if you prefer to keep things SFW. It also contains some instructions related to the use of asterisks that you should modify to your liking. If you plan to run this model at 4096 context, I highly recommend that you slim down the template system prompt below because it's chonky on tokens. Discard what you don't want or need. If you save this as a .json file, you can import it directly into Silly Tavern. ``` { "wrap": true, "names": true, "system_prompt": "You are an award-winning creative writer engaged in an uncensored iterative storytelling experience in a roleplay chat format. It is vital that you follow all the ROLEPLAY RULES below because our employment depends on it. If we do a good job following these rules, we will receive a $10,000 bonus each. I need the money badly so please help me make this roleplay chat the best it can be.\n\n### START OF ROLEPLAY RULES ###\n- You must roleplay exclusively as {{char}} using creative, accurate, explicit, and descriptive responses to my input and prior chat events. Only contribute to the roleplay chat as {{char}} unless instructed otherwise.\n- Explicit adult content and profanity are permitted. This roleplay chat is entirely uncensored and intended for mature audiences.\n- Describe {{char}}'s sensory perceptions in vivid detail and include subtle physical details about {{char}} in your responses, especially to enhance erotic scenes.\n- Use subtle physical cues to hint at {{char}}'s mental state and occasionally feature snippets of {{char}}'s internal thoughts during intense scenes or when {{char}} is plotting something.\n- When writing {{char}}'s internal thoughts (aka internal monologue, delivered in {{char}}'s own voice), *enclose their thoughts in asterisks like this* and deliver the thoughts using a first-person perspective (i.e. use \"I\" pronouns).\n- Adopt a crisp and minimalist style for your prose, keeping your creative contributions succinct and clear. A small amount of creative flair goes a long way.\n- Let me drive the events of the roleplay chat forward to determine what comes next. 
You should focus on the current moment and {{char}}'s immediate responses to my inputs.\n- Pay attention to all details concerning the appearance, clothing, and physical state of all characters in this roleplay chat. Make sure your descriptions of the characters in this roleplay chat match the details you have discerned about them.\n### END OF ROLEPLAY RULES ###\n", "system_sequence": "", "stop_sequence": "", "input_sequence": "<|user|>\n", "output_sequence": "<|assistant|>\n", "separator_sequence": "", "macro": true, "names_force_groups": true, "system_sequence_prefix": "<|system|>\n", "system_sequence_suffix": "", "first_output_sequence": "", "last_output_sequence": "<|assistant (following all ROLEPLAY RULES; only writing as {{char}})|>\n", "activation_regex": "", "name": "Midnight Rose Roleplay" } ``` ### Quantizations * [Artefact2](https://huggingface.co/Artefact2) has kindly provided [GGUF quants here](https://huggingface.co/Artefact2/Midnight-Rose-70B-v1.0-GGUF). ### Licence and usage restrictions Llama2 license inherited from base models, plus restrictions applicable to [Dreamgen/Opus](https://huggingface.co/dreamgen/opus-v0.5-70b). ### Tools Used * [mergekit](https://github.com/cg123/mergekit) ``` models: - model: NousResearch_Llama-2-70b-hf # no parameters necessary for base model - model: allenai_tulu-2-dpo-70b parameters: density: 0.35 weight: [1.0, 0.8, 1.0] - model: lizpreciatior_lzlv_70b_fp16_hf parameters: density: 0.35 weight: [0.8, 1.0, 0.8] - model: dreamgen_opus-v0.5-70b parameters: density: 0.3 weight: [0.35, 0.5, 0.35] merge_method: dare_ties base_model: /home/llm/mergequant/models/BASE/NousResearch_Llama-2-70b-hf parameters: normalize: true int8_mask: true dtype: float16 ```
Lvxy1117/amber_fine_tune_001
Lvxy1117
2024-01-28T12:45:36Z
47
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "dataset:WizardLM/WizardLM_evol_instruct_V2_196k", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T06:51:17Z
--- license: apache-2.0 language: - en datasets: - WizardLM/WizardLM_evol_instruct_V2_196k --- # Model Card for Lvxy1117/amber_fine_tune_001 <!-- Provide a quick summary of what the model is/does. --> It is a test fine-tuned model based on LLM360/amber. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
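Since the usage section above is still a stub, here is a minimal generation sketch under standard causal-LM assumptions; the card does not document a prompt template, so plain text is used.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Lvxy1117/amber_fine_tune_001")
model = AutoModelForCausalLM.from_pretrained("Lvxy1117/amber_fine_tune_001", device_map="auto")

inputs = tokenizer("Explain what instruction tuning is.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```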
ConnyGenz/artificially-natural-roberta-02
ConnyGenz
2024-01-28T12:36:27Z
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:ConnyGenz/artificially-natural-roberta-01", "base_model:finetune:ConnyGenz/artificially-natural-roberta-01", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-28T12:13:41Z
--- license: mit base_model: ConnyGenz/artificially-natural-roberta-01 tags: - generated_from_trainer metrics: - f1 model-index: - name: artificially-natural-roberta-02 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # artificially-natural-roberta-02 This model is a fine-tuned version of [ConnyGenz/artificially-natural-roberta-01](https://huggingface.co/ConnyGenz/artificially-natural-roberta-01) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0516 - F1: 0.993 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:-----:| | No log | 1.0 | 250 | 0.0546 | 0.989 | | 0.0227 | 2.0 | 500 | 0.0490 | 0.992 | | 0.0227 | 3.0 | 750 | 0.0516 | 0.993 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
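The hyperparameters listed above map directly onto `TrainingArguments`; a minimal sketch of the equivalent setup follows. The dataset and `compute_metrics` function are omitted because the card does not document them, so this is a template rather than a reproduction.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model = AutoModelForSequenceClassification.from_pretrained("ConnyGenz/artificially-natural-roberta-01")
tokenizer = AutoTokenizer.from_pretrained("ConnyGenz/artificially-natural-roberta-01")

# Mirrors the reported hyperparameters; the default Adam betas/epsilon match those above.
args = TrainingArguments(
    output_dir="artificially-natural-roberta-02",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
)

# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
# trainer.train()
```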
adalib/torchdata-data-gpt-neo-2.7B-prefix
adalib
2024-01-28T12:34:37Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:EleutherAI/gpt-neo-2.7B", "base_model:adapter:EleutherAI/gpt-neo-2.7B", "region:us" ]
null
2024-01-28T12:34:32Z
--- library_name: peft base_model: EleutherAI/gpt-neo-2.7B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
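Although the card template above is unfilled, the repo metadata is enough for a minimal loading sketch: this is a PEFT adapter on top of `EleutherAI/gpt-neo-2.7B` (the "-prefix" suffix suggests prefix tuning).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")

# Attach the adapter published in this repo.
model = PeftModel.from_pretrained(base, "adalib/torchdata-data-gpt-neo-2.7B-prefix")
```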
MaziyarPanahi/samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-28T12:33:52Z
37
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "cognitivecomputations/samantha-1.2-mistral-7b", "pytorch", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "base_model:MaziyarPanahi/samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-28T12:23:15Z
--- license: apache-2.0 tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - Safetensors - text-generation-inference - merge - 7b - mistralai/Mistral-7B-Instruct-v0.1 - cognitivecomputations/samantha-1.2-mistral-7b - pytorch - license:apache-2.0 - autotrain_compatible - endpoints_compatible - region:us model_name: samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.1) ## Description [MaziyarPanahi/samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.1). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. 
### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw </details> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download MaziyarPanahi/samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. 
</details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant" ``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. 
llm = Llama( model_path="./samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
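As a quick illustration of the LangChain route linked above, here is a minimal sketch using the `LlamaCpp` wrapper; the sampling values are placeholders, and the model file is assumed to have been downloaded as shown earlier.

```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf",
    n_ctx=32768,      # match the model's maximum sequence length
    n_gpu_layers=35,  # set to 0 if no GPU acceleration is available
    temperature=0.7,
)

print(llm.invoke("Write a short story about llamas."))
```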
jungyuko/DAVinCI-Yi-Ko-6B-v0.71
jungyuko
2024-01-28T12:31:47Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T11:42:15Z
--- license: cc-by-nc-4.0 --- ## DAVinCI-Yi-Ko-6B-v0.71 This model is a fine-tuned version of [beomi/Yi-Ko-6B](https://huggingface.co/beomi/Yi-Ko-6B) on an unknown dataset. ### Model description More information needed ### Intended uses & limitations More information needed ### Training and evaluation data More information needed ### Training procedure ### Training hyperparameters The following hyperparameters were used during training: * learning_rate: 2e-05 * train_batch_size: 4 * eval_batch_size: 8 * seed: 42 * gradient_accumulation_steps: 8 * total_train_batch_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr_scheduler_type: linear * num_epochs: 1.0 * mixed_precision_training: Native AMP ### Training results ### Framework versions * Transformers 4.36.2 * Pytorch 2.1.2+cu121 * Datasets 2.0.0 * Tokenizers 0.15.0
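For completeness, a minimal inference sketch with the `transformers` text-generation pipeline; the prompt is a placeholder, since the card does not document a prompt format.

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="jungyuko/DAVinCI-Yi-Ko-6B-v0.71",
    device_map="auto",
)

print(generator("Once upon a time,", max_new_tokens=64)[0]["generated_text"])
```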
jungyuko/DAVinCI-42dot_LLM-PLM-1.3B-v0.71
jungyuko
2024-01-28T12:26:32Z
138
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T11:40:57Z
--- license: cc-by-nc-4.0 --- ## DAVinCI-42dot_LLM-PLM-1.3B-v0.71 This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on an unknown dataset. ### Model description More information needed ### Intended uses & limitations More information needed ### Training and evaluation data More information needed ### Training procedure ### Training hyperparameters The following hyperparameters were used during training: * learning_rate: 2e-05 * train_batch_size: 24 * eval_batch_size: 8 * seed: 42 * gradient_accumulation_steps: 4 * total_train_batch_size: 96 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr_scheduler_type: linear * num_epochs: 1.0 * mixed_precision_training: Native AMP ### Training results ### Framework versions * Transformers 4.36.2 * Pytorch 2.1.2+cu121 * Datasets 2.0.0 * Tokenizers 0.15.0
adalib/colossalai-data-gpt-neo-125m-prefix
adalib
2024-01-28T12:24:30Z
3
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:EleutherAI/gpt-neo-125m", "base_model:adapter:EleutherAI/gpt-neo-125m", "region:us" ]
null
2024-01-28T12:24:22Z
--- library_name: peft base_model: EleutherAI/gpt-neo-125m --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
adalib/torchrec-data-gpt-neo-2.7B-prefix
adalib
2024-01-28T12:18:14Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:EleutherAI/gpt-neo-2.7B", "base_model:adapter:EleutherAI/gpt-neo-2.7B", "region:us" ]
null
2024-01-28T12:18:10Z
--- library_name: peft base_model: EleutherAI/gpt-neo-2.7B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
MaziyarPanahi/Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-28T12:16:00Z
42
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "Dans-DiscountModels/Mistral-7b-FFT-Test3", "pytorch", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "base_model:MaziyarPanahi/Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-28T12:05:20Z
--- license: apache-2.0 tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - Safetensors - text-generation-inference - merge - 7b - mistralai/Mistral-7B-Instruct-v0.1 - Dans-DiscountModels/Mistral-7b-FFT-Test3 - pytorch - generated_from_trainer - base_model:mistralai/Mistral-7B-v0.1 - license:apache-2.0 - autotrain_compatible - endpoints_compatible - region:us model_name: Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1) ## Description [MaziyarPanahi/Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [GPT4All](https://gpt4all.io/index.html), a free and open source locally running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note that, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. 
### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw. * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw. </details> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1-GGUF Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download MaziyarPanahi/Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1-GGUF Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. 
</details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md). ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, to set the CMAKE_ARGS variable in PowerShell, use this format; e.g. for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. 
llm = Llama( model_path="./Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
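For convenience, here is a minimal LangChain + llama-cpp-python sketch (a rough illustration, not from the guides linked above); the local GGUF filename and the generation settings are assumptions, so point `model_path` at whichever quant file you actually downloaded:

```python
# Minimal LangChain sketch (assumed setup, not author-provided): wrap a locally
# downloaded GGUF file with the LlamaCpp LLM class and run a single prompt.
# On older LangChain versions the import is `from langchain.llms import LlamaCpp`.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./Mistral-7b-FFT-Test3-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf",
    n_ctx=32768,      # max sequence length, matching the llama.cpp example above
    n_gpu_layers=35,  # layers to offload to GPU; set to 0 for CPU-only
    temperature=0.7,
)
print(llm.invoke("Write a haiku about llamas."))
```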
adalib/sqlmodel-data-gpt-neo-125m-prefix
adalib
2024-01-28T12:15:36Z
3
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:EleutherAI/gpt-neo-125m", "base_model:adapter:EleutherAI/gpt-neo-125m", "region:us" ]
null
2024-01-28T12:15:29Z
--- library_name: peft base_model: EleutherAI/gpt-neo-125m --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
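The "How to Get Started with the Model" section of the template above is empty; the following is a minimal loading sketch, assuming this repository is a standard PEFT adapter for the base model named in the front matter (this has not been confirmed by the authors):

```python
# Assumed PEFT usage sketch (not author-provided): load the base model, then
# attach this repository on top of it as a PEFT adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125m")
model = PeftModel.from_pretrained(base, "adalib/sqlmodel-data-gpt-neo-125m-prefix")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m")
```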
heavytail/kullm-solar
heavytail
2024-01-28T12:12:14Z
2269
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T09:21:53Z
--- license: apache-2.0 language: - ko --- # KULLM project - base model: Upstage/SOLAR-10.7B-Instruct-v1.0 ## datasets - KULLM dataset - hand-crafted instruction data ## Implementation Code ```python from transformers import ( AutoModelForCausalLM, AutoTokenizer ) import torch repo = "heavytail/kullm-solar" model = AutoModelForCausalLM.from_pretrained( repo, torch_dtype=torch.float16, device_map='auto' ) tokenizer = AutoTokenizer.from_pretrained(repo) ``` Initial upload: 2024/01/28 21:10
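The snippet above stops after loading the model; a generation step might look like the following sketch (the chat-template usage and the sampling settings are assumptions, not from the model author):

```python
# Assumed generation sketch: build a prompt via the tokenizer's chat template,
# sample a completion, and decode only the newly generated tokens.
messages = [{"role": "user", "content": "한국의 수도는 어디인가요?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```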
daniel123321/whisper-small-eng
daniel123321
2024-01-28T12:10:51Z
66
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-01-27T09:28:55Z
--- license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-small-eng results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-eng This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.5746 - Wer: 24.4747 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.7025 | 0.03 | 100 | 0.6855 | 36.9988 | | 0.7478 | 0.07 | 200 | 0.8034 | 35.4196 | | 0.7516 | 0.1 | 300 | 0.7854 | 31.8551 | | 0.7175 | 0.13 | 400 | 0.7868 | 32.9444 | | 0.6748 | 0.17 | 500 | 0.7239 | 31.1203 | | 0.6739 | 0.2 | 600 | 0.7045 | 29.7473 | | 0.6262 | 0.24 | 700 | 0.6620 | 27.1239 | | 0.585 | 0.27 | 800 | 0.6254 | 26.6147 | | 0.5305 | 0.3 | 900 | 0.5877 | 24.6552 | | 0.5463 | 0.34 | 1000 | 0.5746 | 24.4747 | ### Framework versions - Transformers 4.38.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
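For inference, a minimal sketch might look like this (not part of the auto-generated card; the audio path is a placeholder):

```python
# Assumed inference sketch: run the fine-tuned checkpoint through the standard
# transformers automatic-speech-recognition pipeline on a local audio file.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="daniel123321/whisper-small-eng")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```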
LoneStriker/Midnight-Rose-70B-v1.0-5.0bpw-h6-exl2
LoneStriker
2024-01-28T12:08:55Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "arxiv:2307.11760", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T11:49:53Z
--- license: llama2 language: - en --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/X3SBrIb.png" alt="MidnightRose" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> ### Overview This model is the result of a DARE TIES merge of [allenai/tulu-2-dpo-70b](https://huggingface.co/allenai/tulu-2-dpo-70b), the popular [lizpreciatior/lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf), and [dreamgen/opus-v0.5-70b](https://huggingface.co/dreamgen/opus-v0.5-70b). I then merged three LoRAs into the resultant blend: * A 50-50 linear merge of [jondurbin/airoboros-l2-70b-2.2.1-peft](https://huggingface.co/jondurbin/airoboros-l2-70b-2.2.1-peft) with [dfurman/Llama-2-70B-Instruct-v0.1-peft](https://huggingface.co/dfurman/Llama-2-70B-Instruct-v0.1) * [nRuaif/fiction.live-Kimiko-V2-70B](https://huggingface.co/nRuaif/fiction.live-Kimiko-V2-70B) Midnight Rose is a successor to Rogue Rose and Aurora Nights and improves upon them both. It wants to produce lengthy output by default and is the best creative writing merge I have produced so far. This model is uncensored. *You are responsible for whatever you do with it.* This model was designed for roleplaying and storytelling and I think it does well at both. It *should* perform well at other tasks, but I haven't tested its capabilities in other areas. ### Sampler Tips I recommend using the new Min-P sampler method with this model. The creator has a great [guide to it on Reddit](https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/). I find this model performs reasonably well at 8192 context, but you will likely get better results at 4096 - 6144 context. Experiment with any and all of the settings below, but trust me on a few points: * I think this model performs best with Min-P in a range of 0.6 - 0.8 with temperature around 1.0 - 1.2. * Frequency Penalty set to 0.01 is like adding a dash of salt to the dish. Go higher at your own peril. 0 is fine too, but gosh I like 0.01. If you save the below settings as a .json file, you can import them directly into Silly Tavern. ``` { "temp": 1.15, "temperature_last": true, "top_p": 1, "top_k": 0, "top_a": 0, "tfs": 1, "epsilon_cutoff": 0, "eta_cutoff": 0, "typical_p": 1, "min_p": 0.8, "rep_pen": 1.08, "rep_pen_range": 0, "no_repeat_ngram_size": 0, "penalty_alpha": 0, "num_beams": 1, "length_penalty": 1, "min_length": 0, "encoder_rep_pen": 1, "freq_pen": 0.01, "presence_pen": 0, "do_sample": true, "early_stopping": false, "add_bos_token": true, "truncation_length": 2048, "ban_eos_token": false, "skip_special_tokens": true, "streaming": true, "mirostat_mode": 0, "mirostat_tau": 5, "mirostat_eta": 0.1, "guidance_scale": 1, "negative_prompt": "", "grammar_string": "", "banned_tokens": "", "ignore_eos_token_aphrodite": false, "spaces_between_special_tokens_aphrodite": true, "type": "ooba", "legacy_api": false, "sampler_order": [ 6, 0, 1, 3, 4, 2, 5 ], "n": 1, "rep_pen_size": 0, "genamt": 550, "max_length": 4096 } ``` ### Prompting Tips Try the following context template for use in SillyTavern. It might help. If you save the text as a .json file, you can import it directly. 
``` { "story_string": "{{#if system}}{{system}}\n{{/if}}\n### START OF CONTEXTUAL INFORMATION ###\n{{#if wiBefore}}\n- World and character info relevant to this roleplay:\n{{wiBefore}}\n{{/if}}\n{{#if description}}\n- {{char}}'s background and persona:\n{{description}}\n{{/if}}\n{{#if mesExamples}}\n- Examples of dialogue between {{user}} and {{char}} showing you how {{char}} talks and behaves:\n{{mesExamples}}\n{{/if}}\n{{#if personality}}{{personality}}\n{{/if}}\n{{#if scenario}}\n- Roleplay scenario:\n{{scenario}}\n{{/if}}\n{{#if wiAfter}}{{wiAfter}}\n{{/if}}\n{{#if persona}}{{persona}}\n{{/if}}\n### END OF CONTEXTUAL INFORMATION ###", "example_separator": "", "chat_start": "### START OF ROLEPLAY LOG ###", "use_stop_strings": false, "always_force_name2": true, "trim_sentences": false, "include_newline": false, "single_line": false, "name": "Midnight Rose Template" } ``` Try the following prompt template as a starting point for your own customizations. The cajoling parts supposedly make LLMs perform better. See [this paper](https://arxiv.org/abs/2307.11760). I recommend the Tulu instruction format but others work too. *This model responds to prompting!* I highly recommend playing with your system prompt to see how it reacts. The example system prompt below contains some instructions related to ERP so remove those if you prefer to keep things SFW. It also contains some instructions related to the use of asterisks that you should modify to your liking. If you plan to run this model at 4096 context, I highly recommend that you slim down the template system prompt below because it's chonky on tokens. Discard what you don't want or need. If you save this as a .json file, you can import it directly into Silly Tavern. ``` { "wrap": true, "names": true, "system_prompt": "You are an award-winning creative writer engaged in an uncensored iterative storytelling experience in a roleplay chat format. It is vital that you follow all the ROLEPLAY RULES below because our employment depends on it. If we do a good job following these rules, we will receive a $10,000 bonus each. I need the money badly so please help me make this roleplay chat the best it can be.\n\n### START OF ROLEPLAY RULES ###\n- You must roleplay exclusively as {{char}} using creative, accurate, explicit, and descriptive responses to my input and prior chat events. Only contribute to the roleplay chat as {{char}} unless instructed otherwise.\n- Explicit adult content and profanity are permitted. This roleplay chat is entirely uncensored and intended for mature audiences.\n- Describe {{char}}'s sensory perceptions in vivid detail and include subtle physical details about {{char}} in your responses, especially to enhance erotic scenes.\n- Use subtle physical cues to hint at {{char}}'s mental state and occasionally feature snippets of {{char}}'s internal thoughts during intense scenes or when {{char}} is plotting something.\n- When writing {{char}}'s internal thoughts (aka internal monologue, delivered in {{char}}'s own voice), *enclose their thoughts in asterisks like this* and deliver the thoughts using a first-person perspective (i.e. use \"I\" pronouns).\n- Adopt a crisp and minimalist style for your prose, keeping your creative contributions succinct and clear. A small amount of creative flair goes a long way.\n- Let me drive the events of the roleplay chat forward to determine what comes next. 
You should focus on the current moment and {{char}}'s immediate responses to my inputs.\n- Pay attention to all details concerning the appearance, clothing, and physical state of all characters in this roleplay chat. Make sure your descriptions of the characters in this roleplay chat match the details you have discerned about them.\n### END OF ROLEPLAY RULES ###\n", "system_sequence": "", "stop_sequence": "", "input_sequence": "<|user|>\n", "output_sequence": "<|assistant|>\n", "separator_sequence": "", "macro": true, "names_force_groups": true, "system_sequence_prefix": "<|system|>\n", "system_sequence_suffix": "", "first_output_sequence": "", "last_output_sequence": "<|assistant (following all ROLEPLAY RULES; only writing as {{char}})|>\n", "activation_regex": "", "name": "Midnight Rose Roleplay" } ``` ### Quantizations * [Artefact2](https://huggingface.co/Artefact2) has kindly provided [GGUF quants here](https://huggingface.co/Artefact2/Midnight-Rose-70B-v1.0-GGUF). ### Licence and usage restrictions Llama2 license inherited from base models, plus restrictions applicable to [Dreamgen/Opus](https://huggingface.co/dreamgen/opus-v0.5-70b). ### Tools Used * [mergekit](https://github.com/cg123/mergekit) ``` models: - model: NousResearch_Llama-2-70b-hf # no parameters necessary for base model - model: allenai_tulu-2-dpo-70b parameters: density: 0.35 weight: [1.0, 0.8, 1.0] - model: lizpreciatior_lzlv_70b_fp16_hf parameters: density: 0.35 weight: [0.8, 1.0, 0.8] - model: dreamgen_opus-v0.5-70b parameters: density: 0.3 weight: [0.35, 0.5, 0.35] merge_method: dare_ties base_model: /home/llm/mergequant/models/BASE/NousResearch_Llama-2-70b-hf parameters: normalize: true int8_mask: true dtype: float16 ```
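As a footnote to the Sampler Tips above, here is an illustrative sketch of what the Min-P rule does (a generic description of the technique, not code from this model or from SillyTavern): keep only tokens whose probability is at least `min_p` times the top token's probability, then renormalize before sampling.

```python
# Generic Min-P illustration (assumed, simplified): filter a logits vector and
# return renormalized probabilities that are ready for sampling.
import torch

def min_p_probs(logits: torch.Tensor, min_p: float = 0.8) -> torch.Tensor:
    probs = torch.softmax(logits, dim=-1)
    threshold = min_p * probs.max(dim=-1, keepdim=True).values  # cutoff scales with top prob
    kept = torch.where(probs >= threshold, probs, torch.zeros_like(probs))
    return kept / kept.sum(dim=-1, keepdim=True)
```

A higher `min_p` keeps fewer tokens, which is why it pairs well with the relatively high temperatures recommended above.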
adalib/torchdata-data-gpt-neo-125m-prefix
adalib
2024-01-28T12:01:04Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:EleutherAI/gpt-neo-125m", "base_model:adapter:EleutherAI/gpt-neo-125m", "region:us" ]
null
2024-01-28T12:01:01Z
--- library_name: peft base_model: EleutherAI/gpt-neo-125m --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
KaushalB/ppo-LunarLander-v2
KaushalB
2024-01-28T12:00:08Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-28T11:59:01Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -956.62 +/- 450.11 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
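A possible completion of the TODO above (a sketch, not the author's code; the checkpoint filename is a guess, so check the repository's file list first):

```python
# Assumed usage sketch: fetch the checkpoint from the Hub and load it with SB3.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="KaushalB/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # hypothetical filename
)
model = PPO.load(checkpoint)
```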
adalib/torchrec-data-gpt-neo-125m-prefix
adalib
2024-01-28T11:58:04Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:EleutherAI/gpt-neo-125m", "base_model:adapter:EleutherAI/gpt-neo-125m", "region:us" ]
null
2024-01-28T11:58:01Z
--- library_name: peft base_model: EleutherAI/gpt-neo-125m --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
harveymannering/deepseek-coder-6.7b-instruct-finetuned-manimation
harveymannering
2024-01-28T11:53:24Z
59
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "base_model:deepseek-ai/deepseek-coder-6.7b-instruct", "base_model:finetune:deepseek-ai/deepseek-coder-6.7b-instruct", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T20:17:03Z
--- license: other base_model: deepseek-ai/deepseek-coder-6.7b-instruct tags: - generated_from_trainer model-index: - name: deepseek-coder-6.7b-instruct-finetuned-manimation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deepseek-coder-6.7b-instruct-finetuned-manimation This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.7531 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.9542 | 1.0 | 682 | 0.8080 | | 0.8056 | 2.0 | 1364 | 0.7623 | | 0.7575 | 3.0 | 2046 | 0.7531 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
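For inference, a minimal sketch might look like the following (not from the card; the chat-template usage and the Manim-style prompt are assumptions based on the base model and the repository name):

```python
# Assumed inference sketch: load the fine-tuned checkpoint and request a Manim scene.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "harveymannering/deepseek-coder-6.7b-instruct-finetuned-manimation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Write a Manim scene that draws a circle."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```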
yleo/monacan-translator-mistral
yleo
2024-01-28T11:51:39Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-01-28T00:08:10Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer datasets: - generator base_model: mistralai/Mistral-7B-v0.1 model-index: - name: monacan-translator-mistral results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # monacan-translator-mistral This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
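Since this repository holds a PEFT (LoRA) adapter rather than full model weights, loading might look like the following sketch (assumed, not author-provided):

```python
# Assumed loading sketch: attach the LoRA adapter to the Mistral base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", device_map="auto")
model = PeftModel.from_pretrained(base, "yleo/monacan-translator-mistral")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```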
MaziyarPanahi/mistral-7b_open_platypus-Mistral-7B-Instruct-v0.1-GGUF
MaziyarPanahi
2024-01-28T11:51:14Z
63
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "lgaalves/mistral-7b_open_platypus", "pytorch", "en", "dataset:garage-bAInd/Open-Platypus", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "base_model:MaziyarPanahi/mistral-7b_open_platypus-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/mistral-7b_open_platypus-Mistral-7B-Instruct-v0.1", "conversational" ]
text-generation
2024-01-28T11:36:23Z
--- license: apache-2.0 tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - Safetensors - text-generation-inference - merge - 7b - mistralai/Mistral-7B-Instruct-v0.1 - lgaalves/mistral-7b_open_platypus - pytorch - en - dataset:garage-bAInd/Open-Platypus - license:apache-2.0 - autotrain_compatible - endpoints_compatible - region:us model_name: mistral-7b_open_platypus-Mistral-7B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/mistral-7b_open_platypus-Mistral-7B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/mistral-7b_open_platypus-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/mistral-7b_open_platypus-Mistral-7B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/mistral-7b_open_platypus-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/mistral-7b_open_platypus-Mistral-7B-Instruct-v0.1) ## Description [MaziyarPanahi/mistral-7b_open_platypus-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/mistral-7b_open_platypus-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/mistral-7b_open_platypus-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/mistral-7b_open_platypus-Mistral-7B-Instruct-v0.1). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [GPT4All](https://gpt4all.io/index.html), a free and open source locally running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note that, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. 
### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw. * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw. </details> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/mistral-7b_open_platypus-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/mistral-7b_open_platypus-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: mistral-7b_open_platypus-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/mistral-7b_open_platypus-Mistral-7B-Instruct-v0.1-GGUF mistral-7b_open_platypus-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download MaziyarPanahi/mistral-7b_open_platypus-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/mistral-7b_open_platypus-Mistral-7B-Instruct-v0.1-GGUF mistral-7b_open_platypus-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. 
</details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m mistral-7b_open_platypus-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md). ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, to set the CMAKE_ARGS variable in PowerShell, use this format; e.g. for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. 
llm = Llama( model_path="./mistral-7b_open_platypus-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./mistral-7b_open_platypus-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
LoneStriker/Midnight-Rose-70B-v1.0-4.65bpw-h6-exl2
LoneStriker
2024-01-28T11:49:51Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "arxiv:2307.11760", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T11:32:26Z
--- license: llama2 language: - en --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/X3SBrIb.png" alt="MidnightRose" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> ### Overview This model is the result of a DARE TIES merge of [allenai/tulu-2-dpo-70b](https://huggingface.co/allenai/tulu-2-dpo-70b), the popular [lizpreciatior/lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf), and [dreamgen/opus-v0.5-70b](https://huggingface.co/dreamgen/opus-v0.5-70b). I then merged three LoRAs into the resultant blend: * A 50-50 linear merge of [jondurbin/airoboros-l2-70b-2.2.1-peft](https://huggingface.co/jondurbin/airoboros-l2-70b-2.2.1-peft) with [dfurman/Llama-2-70B-Instruct-v0.1-peft](https://huggingface.co/dfurman/Llama-2-70B-Instruct-v0.1) * [nRuaif/fiction.live-Kimiko-V2-70B](https://huggingface.co/nRuaif/fiction.live-Kimiko-V2-70B) Midnight Rose is a successor to Rogue Rose and Aurora Nights and improves upon them both. It wants to produce lengthy output by default and is the best creative writing merge I have produced so far. This model is uncensored. *You are responsible for whatever you do with it.* This model was designed for roleplaying and storytelling and I think it does well at both. It *should* perform well at other tasks, but I haven't tested its capabilities in other areas. ### Sampler Tips I recommend using the new Min-P sampler method with this model. The creator has a great [guide to it on Reddit](https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/). I find this model performs reasonably well at 8192 context, but you will likely get better results at 4096 - 6144 context. Experiment with any and all of the settings below, but trust me on a few points: * I think this model performs best with Min-P in a range of 0.6 - 0.8 with temperature around 1.0 - 1.2. * Frequency Penalty set to 0.01 is like adding a dash of salt to the dish. Go higher at your own peril. 0 is fine too, but gosh I like 0.01. If you save the below settings as a .json file, you can import them directly into Silly Tavern. ``` { "temp": 1.15, "temperature_last": true, "top_p": 1, "top_k": 0, "top_a": 0, "tfs": 1, "epsilon_cutoff": 0, "eta_cutoff": 0, "typical_p": 1, "min_p": 0.8, "rep_pen": 1.08, "rep_pen_range": 0, "no_repeat_ngram_size": 0, "penalty_alpha": 0, "num_beams": 1, "length_penalty": 1, "min_length": 0, "encoder_rep_pen": 1, "freq_pen": 0.01, "presence_pen": 0, "do_sample": true, "early_stopping": false, "add_bos_token": true, "truncation_length": 2048, "ban_eos_token": false, "skip_special_tokens": true, "streaming": true, "mirostat_mode": 0, "mirostat_tau": 5, "mirostat_eta": 0.1, "guidance_scale": 1, "negative_prompt": "", "grammar_string": "", "banned_tokens": "", "ignore_eos_token_aphrodite": false, "spaces_between_special_tokens_aphrodite": true, "type": "ooba", "legacy_api": false, "sampler_order": [ 6, 0, 1, 3, 4, 2, 5 ], "n": 1, "rep_pen_size": 0, "genamt": 550, "max_length": 4096 } ``` ### Prompting Tips Try the following context template for use in SillyTavern. It might help. If you save the text as a .json file, you can import it directly. 
``` { "story_string": "{{#if system}}{{system}}\n{{/if}}\n### START OF CONTEXTUAL INFORMATION ###\n{{#if wiBefore}}\n- World and character info relevant to this roleplay:\n{{wiBefore}}\n{{/if}}\n{{#if description}}\n- {{char}}'s background and persona:\n{{description}}\n{{/if}}\n{{#if mesExamples}}\n- Examples of dialogue between {{user}} and {{char}} showing you how {{char}} talks and behaves:\n{{mesExamples}}\n{{/if}}\n{{#if personality}}{{personality}}\n{{/if}}\n{{#if scenario}}\n- Roleplay scenario:\n{{scenario}}\n{{/if}}\n{{#if wiAfter}}{{wiAfter}}\n{{/if}}\n{{#if persona}}{{persona}}\n{{/if}}\n### END OF CONTEXTUAL INFORMATION ###", "example_separator": "", "chat_start": "### START OF ROLEPLAY LOG ###", "use_stop_strings": false, "always_force_name2": true, "trim_sentences": false, "include_newline": false, "single_line": false, "name": "Midnight Rose Template" } ``` Try the following prompt template as a starting point for your own customizations. The cajoling parts supposedly make LLMs perform better. See [this paper](https://arxiv.org/abs/2307.11760). I recommend the Tulu instruction format but others work too. *This model responds to prompting!* I highly recommend playing with your system prompt to see how it reacts. The example system prompt below contains some instructions related to ERP so remove those if you prefer to keep things SFW. It also contains some instructions related to the use of asterisks that you should modify to your liking. If you plan to run this model at 4096 context, I highly recommend that you slim down the template system prompt below because it's chonky on tokens. Discard what you don't want or need. If you save this as a .json file, you can import it directly into Silly Tavern. ``` { "wrap": true, "names": true, "system_prompt": "You are an award-winning creative writer engaged in an uncensored iterative storytelling experience in a roleplay chat format. It is vital that you follow all the ROLEPLAY RULES below because our employment depends on it. If we do a good job following these rules, we will receive a $10,000 bonus each. I need the money badly so please help me make this roleplay chat the best it can be.\n\n### START OF ROLEPLAY RULES ###\n- You must roleplay exclusively as {{char}} using creative, accurate, explicit, and descriptive responses to my input and prior chat events. Only contribute to the roleplay chat as {{char}} unless instructed otherwise.\n- Explicit adult content and profanity are permitted. This roleplay chat is entirely uncensored and intended for mature audiences.\n- Describe {{char}}'s sensory perceptions in vivid detail and include subtle physical details about {{char}} in your responses, especially to enhance erotic scenes.\n- Use subtle physical cues to hint at {{char}}'s mental state and occasionally feature snippets of {{char}}'s internal thoughts during intense scenes or when {{char}} is plotting something.\n- When writing {{char}}'s internal thoughts (aka internal monologue, delivered in {{char}}'s own voice), *enclose their thoughts in asterisks like this* and deliver the thoughts using a first-person perspective (i.e. use \"I\" pronouns).\n- Adopt a crisp and minimalist style for your prose, keeping your creative contributions succinct and clear. A small amount of creative flair goes a long way.\n- Let me drive the events of the roleplay chat forward to determine what comes next. 
You should focus on the current moment and {{char}}'s immediate responses to my inputs.\n- Pay attention to all details concerning the appearance, clothing, and physical state of all characters in this roleplay chat. Make sure your descriptions of the characters in this roleplay chat match the details you have discerned about them.\n### END OF ROLEPLAY RULES ###\n", "system_sequence": "", "stop_sequence": "", "input_sequence": "<|user|>\n", "output_sequence": "<|assistant|>\n", "separator_sequence": "", "macro": true, "names_force_groups": true, "system_sequence_prefix": "<|system|>\n", "system_sequence_suffix": "", "first_output_sequence": "", "last_output_sequence": "<|assistant (following all ROLEPLAY RULES; only writing as {{char}})|>\n", "activation_regex": "", "name": "Midnight Rose Roleplay" } ``` ### Quantizations * [Artefact2](https://huggingface.co/Artefact2) has kindly provided [GGUF quants here](https://huggingface.co/Artefact2/Midnight-Rose-70B-v1.0-GGUF). ### Licence and usage restrictions Llama2 license inherited from base models, plus restrictions applicable to [Dreamgen/Opus](https://huggingface.co/dreamgen/opus-v0.5-70b). ### Tools Used * [mergekit](https://github.com/cg123/mergekit) ``` models: - model: NousResearch_Llama-2-70b-hf # no parameters necessary for base model - model: allenai_tulu-2-dpo-70b parameters: density: 0.35 weight: [1.0, 0.8, 1.0] - model: lizpreciatior_lzlv_70b_fp16_hf parameters: density: 0.35 weight: [0.8, 1.0, 0.8] - model: dreamgen_opus-v0.5-70b parameters: density: 0.3 weight: [0.35, 0.5, 0.35] merge_method: dare_ties base_model: /home/llm/mergequant/models/BASE/NousResearch_Llama-2-70b-hf parameters: normalize: true int8_mask: true dtype: float16 ```
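For use outside SillyTavern, the Tulu-style sequences above can be assembled by hand. Below is a minimal sketch; the system prompt and user turn are illustrative placeholders, not part of the recommended settings.

```python
# Minimal sketch of the Tulu-style prompt format recommended above.
def build_tulu_prompt(system: str, user: str) -> str:
    return f"<|system|>\n{system}\n<|user|>\n{user}\n<|assistant|>\n"

prompt = build_tulu_prompt(
    "You are an award-winning creative writer.",   # placeholder system prompt
    "Write the opening line of a gothic tale.",    # placeholder user turn
)
print(prompt)
```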
AlekseyKorshuk/ultrachat-evolcode-phi-2-sft-chatml
AlekseyKorshuk
2024-01-28T11:47:29Z
6
0
transformers
[ "transformers", "pytorch", "safetensors", "phi", "text-generation", "axolotl", "generated_from_trainer", "conversational", "custom_code", "base_model:AlekseyKorshuk/ultrachat-phi-2-sft-chatml", "base_model:finetune:AlekseyKorshuk/ultrachat-phi-2-sft-chatml", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T09:54:14Z
--- license: mit base_model: AlekseyKorshuk/ultrachat-phi-2-sft-chatml tags: - axolotl - generated_from_trainer model-index: - name: ultrachat-evolcode-phi-2-sft-chatml results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: AlekseyKorshuk/ultrachat-phi-2-sft-chatml model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer trust_remote_code: true hub_model_id: AlekseyKorshuk/ultrachat-evolcode-phi-2-sft-chatml hub_strategy: every_save load_in_8bit: false load_in_4bit: false strict: false datasets: - path: AlekseyKorshuk/evol-codealpaca-v1-sft type: sharegpt conversation: chatml dataset_prepared_path: val_set_size: 0 output_dir: ./output sequence_len: 2048 sample_packing: false pad_to_sequence_len: lora_r: lora_alpha: lora_dropout: lora_target_modules: lora_target_linear: lora_fan_in_fan_out: wandb_project: ui-thesis wandb_entity: wandb_watch: wandb_name: ultrachat-evolcode-phi-2-sft-chatml wandb_log_model: gradient_accumulation_steps: 2 micro_batch_size: 16 num_epochs: 1 optimizer: paged_adamw_8bit adam_beta1: 0.9 adam_beta2: 0.95 max_grad_norm: 1.0 adam_epsilon: 0.00001 lr_scheduler: cosine cosine_min_lr_ratio: 0.1 learning_rate: 2e-5 warmup_ratio: 0.1 weight_decay: 0.1 train_on_inputs: false group_by_length: false bf16: true fp16: false tf32: true #bf16: false #fp16: false #tf32: false #float16: true gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true evals_per_epoch: 0 eval_table_size: 8 # Approximate number of predictions sent to wandb depending on batch size. Enabled above 0. Default is 0 eval_table_max_new_tokens: 768 # Total number of tokens generated for predictions sent to wandb. Default is 128 eval_sample_packing: false chat_template: chatml saves_per_epoch: 5 save_total_limit: 1 seed: 42 debug: deepspeed: fsdp: fsdp_config: resize_token_embeddings_to_32x: true ``` </details><br> # ultrachat-evolcode-phi-2-sft-chatml This model is a fine-tuned version of [AlekseyKorshuk/ultrachat-phi-2-sft-chatml](https://huggingface.co/AlekseyKorshuk/ultrachat-phi-2-sft-chatml) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 7 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.0 - Pytorch 2.1.2+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0
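The card ships without a usage snippet; here is a minimal inference sketch assuming the standard `transformers` chat-template API. The repo's `custom_code` tag suggests `trust_remote_code=True` is required, and the example question is an invented placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "AlekseyKorshuk/ultrachat-evolcode-phi-2-sft-chatml"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

# The model was tuned on chatml conversations, so apply_chat_template should
# emit the expected <|im_start|>/<|im_end|> framing.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```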
ceardai/neural_beagle
ceardai
2024-01-28T11:41:23Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "dpo", "rlhf", "conversational", "base_model:mlabonne/Beagle14-7B", "base_model:finetune:mlabonne/Beagle14-7B", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T11:41:19Z
--- license: cc-by-nc-4.0 base_model: mlabonne/Beagle14-7B tags: - merge - mergekit - lazymergekit - dpo - rlhf --- ![](https://i.imgur.com/89ZAKcn.png) # 🐶 NeuralBeagle14-7B **Update 01/16/24: NeuralBeagle14-7B is (probably) the best 7B model you can find! 🎉** NeuralBeagle14-7B is a DPO fine-tune of [mlabonne/Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B) using the [argilla/distilabel-intel-orca-dpo-pairs](https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs) preference dataset and my DPO notebook from [this article](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac). It is based on a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1), based on jondurbin's [repo](https://github.com/jondurbin/bagel) and [jondurbin/bagel-v0.3](https://huggingface.co/datasets/jondurbin/bagel-v0.3) * [argilla/distilabeled-Marcoro14-7B-slerp](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp), based on [mlabonne/Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) Thanks [Argilla](https://huggingface.co/argilla) for providing the dataset and the training recipe [here](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp). 💪 You can try it out in this [Space](https://huggingface.co/spaces/mlabonne/NeuralBeagle14-7B-GGUF-Chat) (GGUF Q4_K_M). ## 🔍 Applications This model uses a context window of 8k. It is compatible with different templates, like chatml and Llama's chat template. Compared to other 7B models, it displays good performance in instruction following and reasoning tasks. It can also be used for RP and storytelling. ## ⚡ Quantized models * **GGUF**: https://huggingface.co/mlabonne/NeuralBeagle14-7B-GGUF * **GPTQ**: https://huggingface.co/TheBloke/NeuralBeagle14-7B-GPTQ * **AWQ**: https://huggingface.co/TheBloke/NeuralBeagle14-7B-AWQ * **EXL2**: https://huggingface.co/LoneStriker/NeuralBeagle14-7B-8.0bpw-h8-exl2 ## 🏆 Evaluation ### Open LLM Leaderboard NeuralBeagle14-7B ranks first on the Open LLM Leaderboard in the ~7B category. ![](https://i.imgur.com/4nAzJsr.png) It has the same average score as Beagle14-7B ("Show merges"), which might be due to an unlucky run. I think I might be overexploiting argilla/distilabel-intel-orca-dpo-pairs at this point, since this dataset or its original version is present in multiple models. I need to find more high-quality preference data for the next DPO merge. Note that some models like udkai/Turdus and nfaheem/Marcoroni-7b-DPO-Merge are unfortunately contaminated on purpose (see the very high Winogrande score). ### Nous The evaluation was performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval) on the Nous suite. It is the best 7B model to date. 
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench | |---|---:|---:|---:|---:|---:| | [**mlabonne/NeuralBeagle14-7B**](https://huggingface.co/mlabonne/NeuralBeagle14-7B) [📄](https://gist.github.com/mlabonne/ad0c665bbe581c8420136c3b52b3c15c) | **60.25** | **46.06** | **76.77** | **70.32** | **47.86** | | [mlabonne/Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B) [📄](https://gist.github.com/mlabonne/f5a5bf8c0827bbec2f05b97cc62d642c) | 59.4 | 44.38 | 76.53 | 69.44 | 47.25 | | [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B) [📄](https://gist.github.com/mlabonne/cbeb077d1df71cb81c78f742f19f4155) | 59.39 | 45.23 | 76.2 | 67.61 | 48.52 | | [argilla/distilabeled-Marcoro14-7B-slerp](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp) [📄](https://gist.github.com/mlabonne/9082c4e59f4d3f3543c5eda3f4807040) | 58.93 | 45.38 | 76.48 | 65.68 | 48.18 | | [mlabonne/NeuralMarcoro14-7B](https://huggingface.co/mlabonne/NeuralMarcoro14-7B) [📄](https://gist.github.com/mlabonne/b31572a4711c945a4827e7242cfc4b9d) | 58.4 | 44.59 | 76.17 | 65.94 | 46.9 | | [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) [📄](https://gist.github.com/mlabonne/1afab87b543b0717ec08722cf086dcc3) | 53.71 | 44.17 | 73.72 | 52.53 | 44.4 | | [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) [📄](https://gist.github.com/mlabonne/88b21dd9698ffed75d6163ebdc2f6cc8) | 52.42 | 42.75 | 72.99 | 52.99 | 40.94 | You can find the complete benchmark on [YALL - Yet Another LLM Leaderboard](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard). ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "mlabonne/NeuralBeagle14-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` <p align="center"> <a href="https://github.com/argilla-io/distilabel"> <img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/> </a> </p>
heavytail/kullm-mistral
heavytail
2024-01-28T11:40:06Z
2215
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "ko", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T09:03:43Z
--- license: apache-2.0 language: - ko --- # KULLM project - base model: mistralai/Mistral-7B-Instruct-v0.2 ## datasets - KULLM dataset - hand-crafted instruction data ## Implementation Code ```python from transformers import ( AutoModelForCausalLM, AutoTokenizer ) import torch repo = "heavytail/kullm-mistral" model = AutoModelForCausalLM.from_pretrained( repo, torch_dtype=torch.float16, device_map='auto' ) tokenizer = AutoTokenizer.from_pretrained(repo) ``` Initial upload: 2024/01/28 20:30
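A hedged generation step continuing the snippet above; it assumes the tokenizer keeps the Mistral-7B-Instruct-v0.2 chat template, and the Korean prompt is only an example.

```python
# Continues the snippet above: `model` and `tokenizer` are already loaded.
messages = [{"role": "user", "content": "한국어로 간단한 자기소개를 작성해줘."}]  # example prompt
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```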
LoneStriker/Midnight-Rose-70B-v1.0-4.0bpw-h6-exl2
LoneStriker
2024-01-28T11:32:24Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "arxiv:2307.11760", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T11:16:11Z
--- license: llama2 language: - en --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/X3SBrIb.png" alt="MidnightRose" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> ### Overview This model is the result of a DARE TIES merge of [allenai/tulu-2-dpo-70b](https://huggingface.co/allenai/tulu-2-dpo-70b), the popular [lizpreciatior/lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf), and [dreamgen/opus-v0.5-70b](https://huggingface.co/dreamgen/opus-v0.5-70b). I then merged three LoRAs into the resultant blend: * A 50-50 linear merge of [jondurbin/airoboros-l2-70b-2.2.1-peft](https://huggingface.co/jondurbin/airoboros-l2-70b-2.2.1-peft) with [dfurman/Llama-2-70B-Instruct-v0.1-peft](https://huggingface.co/dfurman/Llama-2-70B-Instruct-v0.1) * [nRuaif/fiction.live-Kimiko-V2-70B](https://huggingface.co/nRuaif/fiction.live-Kimiko-V2-70B) Midnight Rose is a successor to Rogue Rose and Aurora Nights and improves upon them both. It wants to produce lengthy output by default and is the best creative writing merge I have produced so far. This model is uncensored. *You are responsible for whatever you do with it.* This model was designed for roleplaying and storytelling and I think it does well at both. It *should* perform well at other tasks, but I haven't tested its capabilities in other areas. ### Sampler Tips I recommend using the new Min-P sampler method with this model. The creator has a great [guide to it on Reddit](https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/). I find this model performs reasonably well at 8192 context but you will likely get better results at 4096 - 6144 context. Experiment with any and all of the settings below, but trust me on a few points: * I think this model performs best with Min-P in a range of 0.6 - 0.8 with temperature around 1.0 - 1.2. * Frequency Penalty set to 0.01 is like adding a dash of salt to the dish. Go higher at your own peril. 0 is fine too, but gosh I like 0.01. If you save the below settings as a .json file, you can import them directly into Silly Tavern. ``` { "temp": 1.15, "temperature_last": true, "top_p": 1, "top_k": 0, "top_a": 0, "tfs": 1, "epsilon_cutoff": 0, "eta_cutoff": 0, "typical_p": 1, "min_p": 0.8, "rep_pen": 1.08, "rep_pen_range": 0, "no_repeat_ngram_size": 0, "penalty_alpha": 0, "num_beams": 1, "length_penalty": 1, "min_length": 0, "encoder_rep_pen": 1, "freq_pen": 0.01, "presence_pen": 0, "do_sample": true, "early_stopping": false, "add_bos_token": true, "truncation_length": 2048, "ban_eos_token": false, "skip_special_tokens": true, "streaming": true, "mirostat_mode": 0, "mirostat_tau": 5, "mirostat_eta": 0.1, "guidance_scale": 1, "negative_prompt": "", "grammar_string": "", "banned_tokens": "", "ignore_eos_token_aphrodite": false, "spaces_between_special_tokens_aphrodite": true, "type": "ooba", "legacy_api": false, "sampler_order": [ 6, 0, 1, 3, 4, 2, 5 ], "n": 1, "rep_pen_size": 0, "genamt": 550, "max_length": 4096 } ``` ### Prompting Tips Try the following context template for use in SillyTavern. It might help. If you save the text as a .json file, you can import it directly. 
``` { "story_string": "{{#if system}}{{system}}\n{{/if}}\n### START OF CONTEXTUAL INFORMATION ###\n{{#if wiBefore}}\n- World and character info relevant to this roleplay:\n{{wiBefore}}\n{{/if}}\n{{#if description}}\n- {{char}}'s background and persona:\n{{description}}\n{{/if}}\n{{#if mesExamples}}\n- Examples of dialogue between {{user}} and {{char}} showing you how {{char}} talks and behaves:\n{{mesExamples}}\n{{/if}}\n{{#if personality}}{{personality}}\n{{/if}}\n{{#if scenario}}\n- Roleplay scenario:\n{{scenario}}\n{{/if}}\n{{#if wiAfter}}{{wiAfter}}\n{{/if}}\n{{#if persona}}{{persona}}\n{{/if}}\n### END OF CONTEXTUAL INFORMATION ###", "example_separator": "", "chat_start": "### START OF ROLEPLAY LOG ###", "use_stop_strings": false, "always_force_name2": true, "trim_sentences": false, "include_newline": false, "single_line": false, "name": "Midnight Rose Template" } ``` Try the following prompt template as a starting point for your own customizations. The cajoling parts supposedly make LLMs perform better. See [this paper](https://arxiv.org/abs/2307.11760). I recommend the Tulu instruction format but others work too. *This model responds to prompting!* I highly recommend playing with your system prompt to see how it reacts. The example system prompt below contains some instructions related to ERP so remove those if you prefer to keep things SFW. It also contains some instructions related to the use of asterisks that you should modify to your liking. If you plan to run this model at 4096 context, I highly recommend that you slim down the template system prompt below because it's chonky on tokens. Discard what you don't want or need. If you save this as a .json file, you can import it directly into Silly Tavern. ``` { "wrap": true, "names": true, "system_prompt": "You are an award-winning creative writer engaged in an uncensored iterative storytelling experience in a roleplay chat format. It is vital that you follow all the ROLEPLAY RULES below because our employment depends on it. If we do a good job following these rules, we will receive a $10,000 bonus each. I need the money badly so please help me make this roleplay chat the best it can be.\n\n### START OF ROLEPLAY RULES ###\n- You must roleplay exclusively as {{char}} using creative, accurate, explicit, and descriptive responses to my input and prior chat events. Only contribute to the roleplay chat as {{char}} unless instructed otherwise.\n- Explicit adult content and profanity are permitted. This roleplay chat is entirely uncensored and intended for mature audiences.\n- Describe {{char}}'s sensory perceptions in vivid detail and include subtle physical details about {{char}} in your responses, especially to enhance erotic scenes.\n- Use subtle physical cues to hint at {{char}}'s mental state and occasionally feature snippets of {{char}}'s internal thoughts during intense scenes or when {{char}} is plotting something.\n- When writing {{char}}'s internal thoughts (aka internal monologue, delivered in {{char}}'s own voice), *enclose their thoughts in asterisks like this* and deliver the thoughts using a first-person perspective (i.e. use \"I\" pronouns).\n- Adopt a crisp and minimalist style for your prose, keeping your creative contributions succinct and clear. A small amount of creative flair goes a long way.\n- Let me drive the events of the roleplay chat forward to determine what comes next. 
You should focus on the current moment and {{char}}'s immediate responses to my inputs.\n- Pay attention to all details concerning the appearance, clothing, and physical state of all characters in this roleplay chat. Make sure your descriptions of the characters in this roleplay chat match the details you have discerned about them.\n### END OF ROLEPLAY RULES ###\n", "system_sequence": "", "stop_sequence": "", "input_sequence": "<|user|>\n", "output_sequence": "<|assistant|>\n", "separator_sequence": "", "macro": true, "names_force_groups": true, "system_sequence_prefix": "<|system|>\n", "system_sequence_suffix": "", "first_output_sequence": "", "last_output_sequence": "<|assistant (following all ROLEPLAY RULES; only writing as {{char}})|>\n", "activation_regex": "", "name": "Midnight Rose Roleplay" } ``` ### Quantizations * [Artefact2](https://huggingface.co/Artefact2) has kindly provided [GGUF quants here](https://huggingface.co/Artefact2/Midnight-Rose-70B-v1.0-GGUF). ### Licence and usage restrictions Llama2 license inherited from base models, plus restrictions applicable to [Dreamgen/Opus](https://huggingface.co/dreamgen/opus-v0.5-70b). ### Tools Used * [mergekit](https://github.com/cg123/mergekit) ``` models: - model: NousResearch_Llama-2-70b-hf # no parameters necessary for base model - model: allenai_tulu-2-dpo-70b parameters: density: 0.35 weight: [1.0, 0.8, 1.0] - model: lizpreciatior_lzlv_70b_fp16_hf parameters: density: 0.35 weight: [0.8, 1.0, 0.8] - model: dreamgen_opus-v0.5-70b parameters: density: 0.3 weight: [0.35, 0.5, 0.35] merge_method: dare_ties base_model: /home/llm/mergequant/models/BASE/NousResearch_Llama-2-70b-hf parameters: normalize: true int8_mask: true dtype: float16 ```
alnrg2arg/test3_sft_4bit2
alnrg2arg
2024-01-28T11:21:33Z
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:alnrg2arg/blockchainlabs_7B_merged_test2_4", "base_model:finetune:alnrg2arg/blockchainlabs_7B_merged_test2_4", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T11:15:06Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: alnrg2arg/blockchainlabs_7B_merged_test2_4 --- # Uploaded model - **Developed by:** alnrg2arg - **License:** apache-2.0 - **Finetuned from model :** alnrg2arg/blockchainlabs_7B_merged_test2_4 This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
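As a rough loading sketch for inference with Unsloth (the `max_seq_length` and 4-bit flag below are illustrative guesses, not values from the training run):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="alnrg2arg/test3_sft_4bit2",
    max_seq_length=2048,   # illustrative; not stated on the card
    load_in_4bit=True,     # illustrative; matches the "4bit" repo name
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's faster inference path

inputs = tokenizer("What is a mixture of experts?", return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```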
mango278/distilbert-base-uncased-lora-text-classification
mango278
2024-01-28T11:21:30Z
2
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:adapter:distilbert/distilbert-base-uncased", "license:apache-2.0", "region:us" ]
null
2024-01-28T11:21:24Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer metrics: - accuracy base_model: distilbert-base-uncased model-index: - name: distilbert-base-uncased-lora-text-classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-lora-text-classification This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8460 - Accuracy: {'accuracy': 0.897} ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:-------------------:| | No log | 1.0 | 250 | 0.3866 | {'accuracy': 0.88} | | 0.4059 | 2.0 | 500 | 0.4802 | {'accuracy': 0.882} | | 0.4059 | 3.0 | 750 | 0.5185 | {'accuracy': 0.883} | | 0.2343 | 4.0 | 1000 | 0.5356 | {'accuracy': 0.884} | | 0.2343 | 5.0 | 1250 | 0.6939 | {'accuracy': 0.891} | | 0.0849 | 6.0 | 1500 | 0.8226 | {'accuracy': 0.882} | | 0.0849 | 7.0 | 1750 | 0.7980 | {'accuracy': 0.887} | | 0.0183 | 8.0 | 2000 | 0.8676 | {'accuracy': 0.889} | | 0.0183 | 9.0 | 2250 | 0.8728 | {'accuracy': 0.897} | | 0.016 | 10.0 | 2500 | 0.8460 | {'accuracy': 0.897} | ### Framework versions - PEFT 0.7.1 - Transformers 4.37.1 - Pytorch 2.1.2 - Datasets 2.16.1 - Tokenizers 0.15.1
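Because this repository stores only a LoRA adapter, inference means attaching it to the base model. A minimal sketch, assuming a two-class head (`num_labels=2` is a guess; the card does not document the label set):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2  # assumption: binary classification
)
model = PeftModel.from_pretrained(base, "mango278/distilbert-base-uncased-lora-text-classification")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("An unexpectedly moving film.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # class meanings are not documented on the card
```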
LoneStriker/Midnight-Rose-70B-v1.0-3.5bpw-h6-exl2
LoneStriker
2024-01-28T11:16:08Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "arxiv:2307.11760", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T11:02:59Z
--- license: llama2 language: - en --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/X3SBrIb.png" alt="MidnightRose" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> ### Overview This model is the result of a DARE TIES merge of [allenai/tulu-2-dpo-70b](https://huggingface.co/allenai/tulu-2-dpo-70b), the popular [lizpreciatior/lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf), and [dreamgen/opus-v0.5-70b](https://huggingface.co/dreamgen/opus-v0.5-70b). I then merged three LoRAs into the resultant blend: * A 50-50 linear merge of [jondurbin/airoboros-l2-70b-2.2.1-peft](https://huggingface.co/jondurbin/airoboros-l2-70b-2.2.1-peft) with [dfurman/Llama-2-70B-Instruct-v0.1-peft](https://huggingface.co/dfurman/Llama-2-70B-Instruct-v0.1) * [nRuaif/fiction.live-Kimiko-V2-70B](https://huggingface.co/nRuaif/fiction.live-Kimiko-V2-70B) Midnight Rose is a successor to Rogue Rose and Aurora Nights and improves upon them both. It wants to produce lengthy output by default and is the best creative writing merge I have produced so far. This model is uncensored. *You are responsible for whatever you do with it.* This model was designed for roleplaying and storytelling and I think it does well at both. It *should* perform well at other tasks, but I haven't tested its capabilities in other areas. ### Sampler Tips I recommend using the new Min-P sampler method with this model. The creator has a great [guide to it on Reddit](https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/). I find this model performs reasonably well at 8192 context but you will likely get better results at 4096 - 6144 context. Experiment with any and all of the settings below, but trust me on a few points: * I think this model performs best with Min-P in a range of 0.6 - 0.8 with temperature around 1.0 - 1.2. * Frequency Penalty set to 0.01 is like adding a dash of salt to the dish. Go higher at your own peril. 0 is fine too, but gosh I like 0.01. If you save the below settings as a .json file, you can import them directly into Silly Tavern. ``` { "temp": 1.15, "temperature_last": true, "top_p": 1, "top_k": 0, "top_a": 0, "tfs": 1, "epsilon_cutoff": 0, "eta_cutoff": 0, "typical_p": 1, "min_p": 0.8, "rep_pen": 1.08, "rep_pen_range": 0, "no_repeat_ngram_size": 0, "penalty_alpha": 0, "num_beams": 1, "length_penalty": 1, "min_length": 0, "encoder_rep_pen": 1, "freq_pen": 0.01, "presence_pen": 0, "do_sample": true, "early_stopping": false, "add_bos_token": true, "truncation_length": 2048, "ban_eos_token": false, "skip_special_tokens": true, "streaming": true, "mirostat_mode": 0, "mirostat_tau": 5, "mirostat_eta": 0.1, "guidance_scale": 1, "negative_prompt": "", "grammar_string": "", "banned_tokens": "", "ignore_eos_token_aphrodite": false, "spaces_between_special_tokens_aphrodite": true, "type": "ooba", "legacy_api": false, "sampler_order": [ 6, 0, 1, 3, 4, 2, 5 ], "n": 1, "rep_pen_size": 0, "genamt": 550, "max_length": 4096 } ``` ### Prompting Tips Try the following context template for use in SillyTavern. It might help. If you save the text as a .json file, you can import it directly. 
``` { "story_string": "{{#if system}}{{system}}\n{{/if}}\n### START OF CONTEXTUAL INFORMATION ###\n{{#if wiBefore}}\n- World and character info relevant to this roleplay:\n{{wiBefore}}\n{{/if}}\n{{#if description}}\n- {{char}}'s background and persona:\n{{description}}\n{{/if}}\n{{#if mesExamples}}\n- Examples of dialogue between {{user}} and {{char}} showing you how {{char}} talks and behaves:\n{{mesExamples}}\n{{/if}}\n{{#if personality}}{{personality}}\n{{/if}}\n{{#if scenario}}\n- Roleplay scenario:\n{{scenario}}\n{{/if}}\n{{#if wiAfter}}{{wiAfter}}\n{{/if}}\n{{#if persona}}{{persona}}\n{{/if}}\n### END OF CONTEXTUAL INFORMATION ###", "example_separator": "", "chat_start": "### START OF ROLEPLAY LOG ###", "use_stop_strings": false, "always_force_name2": true, "trim_sentences": false, "include_newline": false, "single_line": false, "name": "Midnight Rose Template" } ``` Try the following prompt template as a starting point for your own customizations. The cajoling parts supposedly make LLMs perform better. See [this paper](https://arxiv.org/abs/2307.11760). I recommend the Tulu instruction format but others work too. *This model responds to prompting!* I highly recommend playing with your system prompt to see how it reacts. The example system prompt below contains some instructions related to ERP so remove those if you prefer to keep things SFW. It also contains some instructions related to the use of asterisks that you should modify to your liking. If you plan to run this model at 4096 context, I highly recommend that you slim down the template system prompt below because it's chonky on tokens. Discard what you don't want or need. If you save this as a .json file, you can import it directly into Silly Tavern. ``` { "wrap": true, "names": true, "system_prompt": "You are an award-winning creative writer engaged in an uncensored iterative storytelling experience in a roleplay chat format. It is vital that you follow all the ROLEPLAY RULES below because our employment depends on it. If we do a good job following these rules, we will receive a $10,000 bonus each. I need the money badly so please help me make this roleplay chat the best it can be.\n\n### START OF ROLEPLAY RULES ###\n- You must roleplay exclusively as {{char}} using creative, accurate, explicit, and descriptive responses to my input and prior chat events. Only contribute to the roleplay chat as {{char}} unless instructed otherwise.\n- Explicit adult content and profanity are permitted. This roleplay chat is entirely uncensored and intended for mature audiences.\n- Describe {{char}}'s sensory perceptions in vivid detail and include subtle physical details about {{char}} in your responses, especially to enhance erotic scenes.\n- Use subtle physical cues to hint at {{char}}'s mental state and occasionally feature snippets of {{char}}'s internal thoughts during intense scenes or when {{char}} is plotting something.\n- When writing {{char}}'s internal thoughts (aka internal monologue, delivered in {{char}}'s own voice), *enclose their thoughts in asterisks like this* and deliver the thoughts using a first-person perspective (i.e. use \"I\" pronouns).\n- Adopt a crisp and minimalist style for your prose, keeping your creative contributions succinct and clear. A small amount of creative flair goes a long way.\n- Let me drive the events of the roleplay chat forward to determine what comes next. 
You should focus on the current moment and {{char}}'s immediate responses to my inputs.\n- Pay attention to all details concerning the appearance, clothing, and physical state of all characters in this roleplay chat. Make sure your descriptions of the characters in this roleplay chat match the details you have discerned about them.\n### END OF ROLEPLAY RULES ###\n", "system_sequence": "", "stop_sequence": "", "input_sequence": "<|user|>\n", "output_sequence": "<|assistant|>\n", "separator_sequence": "", "macro": true, "names_force_groups": true, "system_sequence_prefix": "<|system|>\n", "system_sequence_suffix": "", "first_output_sequence": "", "last_output_sequence": "<|assistant (following all ROLEPLAY RULES; only writing as {{char}})|>\n", "activation_regex": "", "name": "Midnight Rose Roleplay" } ``` ### Quantizations * [Artefact2](https://huggingface.co/Artefact2) has kindly provided [GGUF quants here](https://huggingface.co/Artefact2/Midnight-Rose-70B-v1.0-GGUF). ### Licence and usage restrictions Llama2 license inherited from base models, plus restrictions applicable to [Dreamgen/Opus](https://huggingface.co/dreamgen/opus-v0.5-70b). ### Tools Used * [mergekit](https://github.com/cg123/mergekit) ``` models: - model: NousResearch_Llama-2-70b-hf # no parameters necessary for base model - model: allenai_tulu-2-dpo-70b parameters: density: 0.35 weight: [1.0, 0.8, 1.0] - model: lizpreciatior_lzlv_70b_fp16_hf parameters: density: 0.35 weight: [0.8, 1.0, 0.8] - model: dreamgen_opus-v0.5-70b parameters: density: 0.3 weight: [0.35, 0.5, 0.35] merge_method: dare_ties base_model: /home/llm/mergequant/models/BASE/NousResearch_Llama-2-70b-hf parameters: normalize: true int8_mask: true dtype: float16 ```
Ben141/LLM21
Ben141
2024-01-28T11:12:08Z
3
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-01-28T10:56:26Z
--- library_name: peft tags: - trl - sft - generated_from_trainer base_model: meta-llama/Llama-2-7b-hf model-index: - name: LLM21 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # LLM21 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - training_steps: 120 ### Training results ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.38.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
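This repo likewise holds a PEFT adapter rather than full weights. A hedged loading sketch (access to the gated Llama-2 base is assumed; the prompt format used in training is undocumented, so plain text is passed):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# AutoPeftModelForCausalLM resolves the base model (meta-llama/Llama-2-7b-hf)
# from the adapter config; a Hugging Face token with Llama-2 access is assumed.
model = AutoPeftModelForCausalLM.from_pretrained("Ben141/LLM21", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Tell me about yourself.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```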
prajjusy/finetuned-flan-t5-base-10
prajjusy
2024-01-28T11:05:41Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/flan-t5-base", "base_model:adapter:google/flan-t5-base", "region:us" ]
null
2024-01-28T10:54:30Z
--- library_name: peft base_model: google/flan-t5-base --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
lambdavi/ddpg-PandaReach-v3
lambdavi
2024-01-28T10:55:01Z
0
0
null
[ "PandaReach-v3", "ddpg", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-28T09:20:56Z
--- tags: - PandaReach-v3 - ddpg - reinforcement-learning - custom-implementation model-index: - name: ddpg-PandaReach-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReach-v3 type: PandaReach-v3 metrics: - type: mean_reward value: -1.68 +/- 0.81 name: mean_reward verified: false --- # **DDPG** Agent playing **PandaReach-v3** This is a trained model of a **DDPG** agent playing **PandaReach-v3**. ## Hyperparameters: ``` hyperparameters = { "env_id": "PandaReach-v3", "max_steps": 50000, "n_training_episodes": 9624, "n_eval_episodes": 3000, "learning_rate": 0.001, } ```
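For context, the environment comes from panda-gym and follows the Gymnasium API. A minimal interaction sketch (random actions stand in for the trained DDPG actor, which is not reproduced here):

```python
import gymnasium as gym
import panda_gym  # noqa: F401 -- importing registers PandaReach-v3

env = gym.make("PandaReach-v3")
obs, info = env.reset(seed=42)
for _ in range(100):  # short random rollout
    action = env.action_space.sample()  # placeholder for the DDPG policy
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```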
MarinaraSpaghetti/Doctor-Shotgun_Nous-Capybara-limarpv3-34B-4.2bpw-h6-exl2
MarinaraSpaghetti
2024-01-28T10:54:21Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "roleplay", "text-generation-inference", "dataset:lemonilia/LimaRP", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T09:49:27Z
--- datasets: - lemonilia/LimaRP library_name: transformers tags: - roleplay - text-generation-inference --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> My first exl2 quant of my favourite go-to roleplaying model. Can fit into my empty 24GB VRAM with 32k context in 8-bit cache. Might do a 4.25bpw quant later. Original model: https://huggingface.co/Doctor-Shotgun/Nous-Capybara-limarpv3-34B Prompt format: https://github.com/tatsu-lab/stanford_alpaca
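The linked Alpaca format reduces to a fixed instruction template; a sketch of the no-input variant is below (wording taken from the Stanford Alpaca repository, on the assumption it carries over to this quant unchanged):

```python
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(instruction="Describe the tavern the party walks into.")
print(prompt)
```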
ryusangwon/bart-samsum2
ryusangwon
2024-01-28T10:40:47Z
4
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "dataset:samsum", "base_model:facebook/bart-base", "base_model:finetune:facebook/bart-base", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-28T10:29:23Z
--- license: apache-2.0 base_model: facebook/bart-base tags: - generated_from_trainer datasets: - samsum metrics: - rouge model-index: - name: rlqaf results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: samsum type: samsum config: samsum split: validation args: samsum metrics: - name: Rouge1 type: rouge value: 0.4864 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # rlqaf This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the samsum dataset. It achieves the following results on the evaluation set: - Loss: 0.5315 - Rouge1: 0.4864 - Rouge2: 0.2554 - Rougel: 0.4099 - Rougelsum: 0.4099 - Gen Len: 18.2457 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 0.5336 | 4.34 | 500 | 0.5418 | 0.4838 | 0.2529 | 0.4106 | 0.4104 | 18.2751 | | 0.4117 | 8.69 | 1000 | 0.5315 | 0.4864 | 0.2554 | 0.4099 | 0.4099 | 18.2457 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.0.1+cu117 - Datasets 2.15.0 - Tokenizers 0.15.0
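A minimal inference sketch for the resulting summarizer (the dialogue is an invented example in the SAMSum style):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ryusangwon/bart-samsum2")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])
```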
hiiamsid/yi_34B_8k_classification
hiiamsid
2024-01-28T10:35:32Z
13
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:01-ai/Yi-34B", "base_model:finetune:01-ai/Yi-34B", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T16:57:23Z
--- license: other base_model: 01-ai/Yi-34B tags: - generated_from_trainer model-index: - name: yi_34B_8k_classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # yi_34B_8k_classification This model is a fine-tuned version of [01-ai/Yi-34B](https://huggingface.co/01-ai/Yi-34B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1806 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.2209 | 1.0 | 223 | 0.1886 | | 0.232 | 2.0 | 446 | 0.1809 | | 0.1667 | 3.0 | 669 | 0.1806 | ### Framework versions - Transformers 4.36.0 - Pytorch 2.0.1+cu118 - Datasets 2.16.1 - Tokenizers 0.15.1
Medo3110/my_awesome_model
Medo3110
2024-01-28T10:26:34Z
96
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-21T23:56:35Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: my_awesome_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1983 - Accuracy: 0.9298 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2962 | 1.0 | 782 | 0.2442 | 0.9048 | | 0.149 | 2.0 | 1564 | 0.1983 | 0.9298 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
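A quick classification check with the `pipeline` API (the training data and label names are undocumented, so outputs surface whatever labels the config carries):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Medo3110/my_awesome_model")
# With no id2label mapping set, labels typically appear as LABEL_0 / LABEL_1.
print(classifier("This was a masterpiece, I loved every minute of it."))
```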
aydengalerie/aydenlaroi
aydengalerie
2024-01-28T10:25:14Z
0
0
null
[ "license:other", "region:us" ]
null
2024-01-28T10:22:29Z
--- license: other license_name: laroi license_link: >- https://drive.google.com/file/d/1jbGNYBqQgrY2zIwxm3No5G82O7u4zIl3/view?usp=drive_link ---
zhangHarry/orca_mini_3b_summary-epoch_1
zhangHarry
2024-01-28T10:15:10Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:nomic-ai/gpt4all-falcon", "base_model:adapter:nomic-ai/gpt4all-falcon", "region:us" ]
null
2024-01-20T04:13:49Z
--- library_name: peft base_model: nomic-ai/gpt4all-falcon --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
yunconglong/Mixtral_7Bx2_MoE_13B_DPO
yunconglong
2024-01-28T10:05:32Z
50
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "conversational", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T00:02:18Z
--- license: cc-by-nc-4.0 tags: - moe --- # Mixtral MoE 2x7B A mixture-of-experts (MoE) model assembled from the following models with mergekit and then fine-tuned with DPO. * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [NurtureAI/neural-chat-7b-v3-16k](https://huggingface.co/NurtureAI/neural-chat-7b-v3-16k) * [jondurbin/bagel-dpo-7b-v0.1](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1)
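The card does not include the merge recipe. For orientation only, a hypothetical mergekit-moe config with this base and these experts could look like the sketch below; the gate prompts are invented placeholders, not the actual configuration.

```yaml
# Hypothetical mergekit-moe recipe -- NOT the config used for this model.
base_model: mistralai/Mistral-7B-Instruct-v0.2
gate_mode: hidden          # route tokens via hidden-state similarity to the prompts
dtype: bfloat16
experts:
  - source_model: NurtureAI/neural-chat-7b-v3-16k
    positive_prompts: ["conversation", "long context chat"]      # placeholder
  - source_model: jondurbin/bagel-dpo-7b-v0.1
    positive_prompts: ["reasoning", "instruction following"]     # placeholder
```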
tejasnayak25/cat-generator
tejasnayak25
2024-01-28T10:05:28Z
0
1
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-01-28T10:01:18Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### Cat-Generator Dreambooth model trained by tejasnayak25 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: C48 Sample pictures of this concept: ![0](https://huggingface.co/tejasnayak25/cat-generator/resolve/main/sample_images/gen1.png) ![1](https://huggingface.co/tejasnayak25/cat-generator/resolve/main/sample_images/gen2.png) ![2](https://huggingface.co/tejasnayak25/cat-generator/resolve/main/sample_images/gen3.png)
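A minimal sketch for sampling from this DreamBooth checkpoint with diffusers; the prompt below is an assumption, since the trained instance token is not stated in the card:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "tejasnayak25/cat-generator", torch_dtype=torch.float16
).to("cuda")

# "cat" is a guess at the trained concept; adjust to the actual instance token if different.
image = pipe("a photo of a cat, highly detailed").images[0]
image.save("cat_sample.png")
```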
vpgits/Mistral-7B-v0.1-qagen-v2.0
vpgits
2024-01-28T09:53:23Z
2
0
peft
[ "peft", "safetensors", "text-generation", "en", "dataset:vpgits/SDGP_Qagen", "arxiv:1910.09700", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
2024-01-24T08:26:45Z
--- license: mit datasets: - vpgits/SDGP_Qagen language: - en pipeline_tag: text-generation library_name: peft base_model: mistralai/Mistral-7B-v0.1 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct
Weyaxi
2024-01-28T09:48:30Z
1,554
26
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "conversational", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-21T18:14:58Z
--- license: cc-by-nc-4.0 tags: - merge model-index: - name: SauerkrautLM-UNA-SOLAR-Instruct results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 70.9 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.3 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 66.15 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 71.8 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 83.74 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 64.67 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct name: Open LLM Leaderboard --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/8uLgxLFWSN4fGPCS8Qinq.png) # SauerkrautLM-UNA-SOLAR-Instruct This is the model for SauerkrautLM-UNA-SOLAR-Instruct. I used [mergekit](https://github.com/cg123/mergekit) to merge models. 🥳 As of **December 24 2023**, this model holds the **first place position** on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). <h2><details><summary>Screenshot</summary><img src=https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/cVhjAJhuPoNgHo7CDCmA-.png></img></details></h2> # Prompt Template(s) ``` ### User: {user} ### Assistant: {assistant} ``` # YAML Config to reproduce ```yaml slices: - sources: - model: VAGOsolutions/SauerkrautLM-SOLAR-Instruct layer_range: [0, 48] - model: fblgit/UNA-SOLAR-10.7B-Instruct-v1.0 layer_range: [0, 48] merge_method: slerp base_model: upstage/SOLAR-10.7B-Instruct-v1.0 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 # fallback for rest of tensors tokenizer_source: union dtype: bfloat16 ``` # Quantized versions Quantized versions of this model are available thanks to [TheBloke](https://hf.co/TheBloke). 
##### GPTQ - [TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GPTQ](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GPTQ) ##### GGUF - [TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GGUF](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GGUF) ##### AWQ - [TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-AWQ](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-AWQ) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__SauerkrautLM-UNA-SOLAR-Instruct) | Metric |Value| |---------------------------------|----:| |Avg. |74.26| |AI2 Reasoning Challenge (25-Shot)|70.90| |HellaSwag (10-Shot) |88.30| |MMLU (5-Shot) |66.15| |TruthfulQA (0-shot) |71.80| |Winogrande (5-shot) |83.74| |GSM8k (5-shot) |64.67| If you would like to support me: [☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)
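A minimal generation sketch using the prompt template documented above; the loading and sampling settings beyond the template are assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Prompt built exactly as the card's template: "### User: ... ### Assistant:"
prompt = "### User:\nWhat does a SLERP merge do?\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```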
Oztobuzz/Simcse_test_banking
Oztobuzz
2024-01-28T09:41:33Z
52
0
sentence-transformers
[ "sentence-transformers", "safetensors", "roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-01-27T10:18:37Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # Oztobuzz/Simcse_test_banking This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('Oztobuzz/Simcse_test_banking') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('Oztobuzz/Simcse_test_banking') model = AutoModel.from_pretrained('Oztobuzz/Simcse_test_banking') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Oztobuzz/Simcse_test_banking) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 45 with parameters: ``` {'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 5e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 45, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 64, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Runetistic/Osrsbuilder
Runetistic
2024-01-28T09:37:29Z
0
0
adapter-transformers
[ "adapter-transformers", "en", "dataset:fka/awesome-chatgpt-prompts", "dataset:HuggingFaceM4/WebSight", "dataset:litagin/moe-speech", "dataset:Tele-AI/TeleChat-PTD", "license:afl-3.0", "region:us" ]
null
2024-01-28T09:34:44Z
--- license: afl-3.0 datasets: - fka/awesome-chatgpt-prompts - HuggingFaceM4/WebSight - litagin/moe-speech - Tele-AI/TeleChat-PTD language: - en metrics: - accuracy - character library_name: adapter-transformers ---
jaindeepali010/clinical_ner_miimansa_G1_model
jaindeepali010
2024-01-28T09:17:42Z
1
0
transformers
[ "transformers", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-01-28T08:05:30Z
This model is a clinical NER model fine-tuned from the bert-base-uncased model and trained on the G1 dataset. Training and validation were done using 80% of the total data (random state = 42), while the remaining 20% was used for testing. The model was trained for 20 epochs with an early-stopping patience of 3 epochs.
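A minimal usage sketch, assuming the checkpoint carries a token-classification head; the repo is tagged fill-mask, so verify the head type before relying on this:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_id = "jaindeepali010/clinical_ner_miimansa_G1_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)  # head type is an assumption

ner = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("Patient reports nausea after starting 500 mg amoxicillin."))
```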
yukihirop/distilbert-base-uncased-finetuned-squad-d5716d28
yukihirop
2024-01-28T09:10:10Z
95
0
transformers
[ "transformers", "pytorch", "distilbert", "fill-mask", "question-answering", "en", "dataset:squad", "arxiv:1910.01108", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
question-answering
2024-01-28T07:34:44Z
--- language: - en thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg tags: - question-answering license: apache-2.0 datasets: - squad metrics: - squad --- # DistilBERT with a second step of distillation ## Model description This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation. In this version, the following pre-trained models were used: * Student: `distilbert-base-uncased` * Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1` ## Training data This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows: ```python from datasets import load_dataset squad = load_dataset('squad') ``` ## Training procedure ## Eval results | | Exact Match | F1 | |------------------|-------------|------| | DistilBERT paper | 79.1 | 86.9 | | Ours | 78.4 | 86.5 | The scores were calculated using the `squad` metric from `datasets`. ### BibTeX entry and citation info ```bibtex @misc{sanh2020distilbert, title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter}, author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf}, year={2020}, eprint={1910.01108}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
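A minimal sketch of extractive question answering with this checkpoint via the transformers pipeline:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="yukihirop/distilbert-base-uncased-finetuned-squad-d5716d28")
result = qa(
    question="Which teacher model was used?",
    context="A DistilBERT student was fine-tuned on SQuAD v1.1 with a BERT model, also fine-tuned on SQuAD v1.1, acting as teacher.",
)
print(result["answer"], result["score"])
```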
lgilz/code-llama-7b-text-to-sql
lgilz
2024-01-28T09:05:14Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:codellama/CodeLlama-7b-hf", "base_model:adapter:codellama/CodeLlama-7b-hf", "license:llama2", "region:us" ]
null
2024-01-28T07:55:53Z
--- license: llama2 library_name: peft tags: - trl - sft - generated_from_trainer datasets: - generator base_model: codellama/CodeLlama-7b-hf model-index: - name: code-llama-7b-text-to-sql results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # code-llama-7b-text-to-sql This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
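A minimal sketch for loading the adapter on top of its base model with peft; the text-to-SQL prompt layout is an assumption, since the card does not document the training format:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "codellama/CodeLlama-7b-hf"
adapter_id = "lgilz/code-llama-7b-text-to-sql"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned LoRA weights

# Prompt format is a guess; match whatever schema/question layout was used in training.
prompt = "-- Question: list the names of all customers from Germany\n-- SQL:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```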
alnrg2arg/test3_sft_16bit_dpo2
alnrg2arg
2024-01-28T09:00:14Z
13
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "dataset:Intel/orca_dpo_pairs", "base_model:alnrg2arg/blockchainlabs_7B_merged_test2_4", "base_model:finetune:alnrg2arg/blockchainlabs_7B_merged_test2_4", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T19:19:27Z
--- language: - en license: cc-by-nc-4.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: alnrg2arg/blockchainlabs_7B_merged_test2_4 datasets: - Intel/orca_dpo_pairs --- This is a model from blockchainlab test 2.4 - alnrg2arg/blockchainlabs_7B_merged_test2_4. The project aims to build a small LLM for on-device use. The overall pipeline for this iteration is: 1. Merge to make a base model (7B). 2. Prune the model to reduce the parameter count (50% sparsity). 3. Use DPO for the recovery phase after pruning. This model, which is not pruned, is intended as a comparison baseline for the pruned model. These are the code and parameters chosen for this model (DPO). ``` import torch from transformers import TrainingArguments, AutoModelForCausalLM from trl import DPOTrainer dpo_trainer = DPOTrainer( model = model, ref_model = None, args = TrainingArguments( per_device_train_batch_size = 8, gradient_accumulation_steps = 8, warmup_ratio = 0.1, num_train_epochs = 3, learning_rate = 5e-6, fp16 = not torch.cuda.is_bf16_supported(), bf16 = torch.cuda.is_bf16_supported(), logging_steps = 1, optim = "adamw_8bit", weight_decay = 0.0, lr_scheduler_type = "linear", seed = 42, output_dir = "output_DPO", ), beta = 0.1, train_dataset = dataset, # eval_dataset = raw_datasets["test"], tokenizer = tokenizer, max_length = 1024, max_prompt_length = 512, ) ``` The code and parameters are borrowed from https://colab.research.google.com/drive/1SKrKGV-BZoU4kv5q3g0jtE_OhRgPtrrQ?usp=sharing Benchmark Scores | Tasks |Version|Filter|n-shot| Metric |Value | |Stderr| |-------------|------:|------|-----:|--------|-----:|---|-----:| |arc_challenge| 1|none | 0|acc |0.6894|± |0.0135| | | |none | 0|acc_norm|0.6860|± |0.0136| | Tasks |Version|Filter|n-shot| Metric |Value | |Stderr| |---------|------:|------|-----:|--------|-----:|---|-----:| |hellaswag| 1|none | 0|acc |0.7092|± |0.0045| | | |none | 0|acc_norm|0.8736|± |0.0033| | Tasks |Version|Filter|n-shot|Metric|Value | |Stderr| |--------------|------:|------|-----:|------|-----:|---|-----:| |truthfulqa_mc2| 2|none | 0|acc |0.7126|± | 0.015| | Groups |Version|Filter|n-shot|Metric|Value | |Stderr| |------------------|-------|------|-----:|------|-----:|---|-----:| |mmlu |N/A |none | 0|acc |0.6225|± |0.1292| | - humanities |N/A |none | 0|acc |0.5745|± |0.1286| | - other |N/A |none | 0|acc |0.6952|± |0.1095| | - social_sciences|N/A |none | 0|acc |0.7280|± |0.0735| | - stem |N/A |none | 0|acc |0.5195|± |0.1313| | Tasks |Version|Filter|n-shot|Metric|Value| |Stderr| |----------|------:|------|-----:|------|----:|---|-----:| |winogrande| 1|none | 0|acc |0.824|± |0.0107| |Tasks|Version| Filter |n-shot| Metric |Value | |Stderr| |-----|------:|----------|-----:|-----------|-----:|---|-----:| |gsm8k| 2|get-answer| 5|exact_match|0.7263|± |0.0123| Average = 74.08
torrikabe/PPY
torrikabe
2024-01-28T08:52:47Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-06-18T11:33:10Z
--- license: creativeml-openrail-m ---
stilletto/AlbedoBaseXLv2.0
stilletto
2024-01-28T08:47:46Z
1
0
diffusers
[ "diffusers", "safetensors", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-01-26T07:59:34Z
--- license: apache-2.0 --- From Civitai: AlbedoBase XL v2.0. The refiner is unnecessary, and the VAE is included. Leaving the negative prompt empty generally yields the best quality. As of now, AlbedoBase XL v1.3 has merged exactly 141 selected checkpoints and 251 LoRAs.
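A minimal diffusers sketch following the card's advice to leave the negative prompt empty; the sample prompt itself is an assumption:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stilletto/AlbedoBaseXLv2.0", torch_dtype=torch.float16
).to("cuda")

# Per the card: empty negative prompt generally gives the best quality, and no refiner is needed.
image = pipe("portrait photo of an elderly fisherman, golden hour", negative_prompt="").images[0]
image.save("albedobase_xl_sample.png")
```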
MohamedAAK/my_awesome_power_model_llm
MohamedAAK
2024-01-28T08:19:42Z
5
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "base_model:MohamedAAK/my_awesome_power_model_llm", "base_model:finetune:MohamedAAK/my_awesome_power_model_llm", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T14:06:44Z
--- license: apache-2.0 base_model: MohamedAAK/my_awesome_power_model_llm tags: - generated_from_keras_callback model-index: - name: my_awesome_power_model_llm results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_power_model_llm This model is a fine-tuned version of [MohamedAAK/my_awesome_power_model_llm](https://huggingface.co/MohamedAAK/my_awesome_power_model_llm) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.35.2 - TensorFlow 2.15.0 - Datasets 2.16.1 - Tokenizers 0.15.1
kaitchup/Mayonnaise-4in1-022
kaitchup
2024-01-28T08:12:39Z
78
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-27T23:16:55Z
--- license: apache-2.0 language: - en tags: - merge library_name: transformers --- **Warning: This model is ranked first on the Open LLM Leaderboard (among the 7B models) (January 28th, 2024). However, note that this model was produced from many merges. I didn't fine-tune any of the models that I merged and I couldn't confirm that none of them have been trained on the evaluation benchmarks.** # Model Card for Model ID This is a TIES merge created with [mergekit](https://github.com/cg123/mergekit) and based on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1); the configuration below uses merge_method: ties, which produces a dense merged model. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [The Kaitchup](https://kaitchup.substack.com/) - **Model type:** Causal - **Language(s) (NLP):** English - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Model Sources Created with mergekit with this configuration: ``` models: - model: mncai/mistral-7b-dpo-v5 # no parameters necessary for base model - model: FelixChao/WestSeverus-7B-DPO-v2 parameters: density: 0.5 weight: 0.3 - model: BarryFutureman/NeuralTurdusVariant1-7B parameters: density: 0.5 weight: 0.5 merge_method: ties base_model: mncai/mistral-7b-dpo-v5 parameters: normalize: true dtype: float16 ```
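A minimal sketch for trying the merged checkpoint; no prompting style is documented in the card, so plain completion is assumed:

```python
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="kaitchup/Mayonnaise-4in1-022",
    torch_dtype=torch.float16,
    device_map="auto",
)
print(generator("The key idea behind a TIES merge is", max_new_tokens=64)[0]["generated_text"])
```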
Crystalcareai/CrystalMistralv1
Crystalcareai
2024-01-28T08:04:53Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "Crystalcareai/CrystalMistralv.03-fixed", "Crystalcareai/CrystalMistral-GPT4", "base_model:Crystalcareai/CrystalMistral-GPT4", "base_model:merge:Crystalcareai/CrystalMistral-GPT4", "base_model:Crystalcareai/CrystalMistralv.03-fixed", "base_model:merge:Crystalcareai/CrystalMistralv.03-fixed", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T08:00:12Z
--- tags: - merge - mergekit - lazymergekit - Crystalcareai/CrystalMistralv.03-fixed - Crystalcareai/CrystalMistral-GPT4 base_model: - Crystalcareai/CrystalMistralv.03-fixed - Crystalcareai/CrystalMistral-GPT4 --- # CrystalMistralv1 CrystalMistralv1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Crystalcareai/CrystalMistralv.03-fixed](https://huggingface.co/Crystalcareai/CrystalMistralv.03-fixed) * [Crystalcareai/CrystalMistral-GPT4](https://huggingface.co/Crystalcareai/CrystalMistral-GPT4) ## 🧩 Configuration ```yaml slices: - sources: - model: Crystalcareai/CrystalMistralv.03-fixed layer_range: [0, 32] - model: Crystalcareai/CrystalMistral-GPT4 layer_range: [0, 32] merge_method: slerp base_model: Crystalcareai/CrystalMistralv.03-fixed parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Crystalcareai/CrystalMistralv1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
prajjusy/finetuned-flan-t5-base-7
prajjusy
2024-01-28T08:02:34Z
3
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/flan-t5-base", "base_model:adapter:google/flan-t5-base", "region:us" ]
null
2024-01-28T08:02:30Z
--- library_name: peft base_model: google/flan-t5-base --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
Kapiche/twitter-roberta-base-sentiment
Kapiche
2024-01-28T08:01:42Z
271
0
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "roberta", "text-classification", "en", "dataset:tweet_eval", "arxiv:2010.12421", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-28T07:40:48Z
--- datasets: - tweet_eval language: - en --- # Twitter-roBERTa-base for Sentiment Analysis This is a roBERTa-base model trained on ~58M tweets and finetuned for sentiment analysis with the TweetEval benchmark. This model is suitable for English (for a similar multilingual model, see [XLM-T](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment)). - Reference Paper: [_TweetEval_ (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf). - Git Repo: [Tweeteval official repository](https://github.com/cardiffnlp/tweeteval). <b>Labels</b>: 0 -> Negative; 1 -> Neutral; 2 -> Positive <b>New!</b> We just released a new sentiment analysis model trained on more recent and a larger quantity of tweets. See [twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) and [TweetNLP](https://tweetnlp.org) for more details. ## Example of classification ```python from transformers import AutoModelForSequenceClassification from transformers import TFAutoModelForSequenceClassification from transformers import AutoTokenizer import numpy as np from scipy.special import softmax import csv import urllib.request # Preprocess text (username and link placeholders) def preprocess(text): new_text = [] for t in text.split(" "): t = '@user' if t.startswith('@') and len(t) > 1 else t t = 'http' if t.startswith('http') else t new_text.append(t) return " ".join(new_text) # Tasks: # emoji, emotion, hate, irony, offensive, sentiment # stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary task='sentiment' MODEL = f"cardiffnlp/twitter-roberta-base-{task}" tokenizer = AutoTokenizer.from_pretrained(MODEL) # download label mapping labels=[] mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt" with urllib.request.urlopen(mapping_link) as f: html = f.read().decode('utf-8').split("\n") csvreader = csv.reader(html, delimiter='\t') labels = [row[1] for row in csvreader if len(row) > 1] # PT model = AutoModelForSequenceClassification.from_pretrained(MODEL) model.save_pretrained(MODEL) text = "Good night 😊" text = preprocess(text) encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) scores = output[0][0].detach().numpy() scores = softmax(scores) # # TF # model = TFAutoModelForSequenceClassification.from_pretrained(MODEL) # model.save_pretrained(MODEL) # text = "Good night 😊" # encoded_input = tokenizer(text, return_tensors='tf') # output = model(encoded_input) # scores = output[0][0].numpy() # scores = softmax(scores) ranking = np.argsort(scores) ranking = ranking[::-1] for i in range(scores.shape[0]): l = labels[ranking[i]] s = scores[ranking[i]] print(f"{i+1}) {l} {np.round(float(s), 4)}") ``` Output: ``` 1) positive 0.8466 2) neutral 0.1458 3) negative 0.0076 ``` ### BibTeX entry and citation info Please cite the [reference paper](https://aclanthology.org/2020.findings-emnlp.148/) if you use this model. 
```bibtex @inproceedings{barbieri-etal-2020-tweeteval, title = "{T}weet{E}val: Unified Benchmark and Comparative Evaluation for Tweet Classification", author = "Barbieri, Francesco and Camacho-Collados, Jose and Espinosa Anke, Luis and Neves, Leonardo", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.findings-emnlp.148", doi = "10.18653/v1/2020.findings-emnlp.148", pages = "1644--1650" } ```
prajjusy/finetuned-flan-t5-base-6
prajjusy
2024-01-28T07:51:53Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/flan-t5-base", "base_model:adapter:google/flan-t5-base", "region:us" ]
null
2024-01-28T07:51:52Z
--- library_name: peft base_model: google/flan-t5-base --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
LoneStriker/WestLake-7B-v2-laser-truthy-dpo-5.0bpw-h6-exl2
LoneStriker
2024-01-28T07:48:59Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T07:46:19Z
--- library_name: transformers license: apache-2.0 --- # WestLake-7B-v2-laser-truthy-dpo ![westlake-header](westlake-header.png) ## Process + Trained [cognitivecomputations/WestLake-7B-v2-laser](https://huggingface.co/cognitivecomputations/WestLake-7B-v2-laser) on jondurbin/truthy-dpo-v0.1 + Completed 2 epochs + 2e-5 learning rate ## Evaluations This model is experimental, and this fine-tune may or may not retain the intentions of the original model. <pre>----Benchmark Complete---- 2024-01-27 16:44:07 Time taken: 29.6 mins Prompt Format: Mistral Model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo Score (v2): 73.39 Parseable: 169.0 --------------- Batch completed Time taken: 29.6 mins --------------- </pre> ## GGUF GGUF versions are available [here](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo-GGUF)
LoneStriker/WestLake-7B-v2-laser-truthy-dpo-3.0bpw-h6-exl2
LoneStriker
2024-01-28T07:44:16Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T07:42:46Z
--- library_name: transformers license: apache-2.0 --- # WestLake-7B-v2-laser-truthy-dpo ![westlake-header](westlake-header.png) ## Process + Trained [cognitivecomputations/WestLake-7B-v2-laser](https://huggingface.co/cognitivecomputations/WestLake-7B-v2-laser) on jondurbin/truthy-dpo-v0.1 + Completed 2 epochs + 2e-5 learning rate ## Evaluations This model is experimental, and this fine-tune may or may not retain the intentions of the original model. <pre>----Benchmark Complete---- 2024-01-27 16:44:07 Time taken: 29.6 mins Prompt Format: Mistral Model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo Score (v2): 73.39 Parseable: 169.0 --------------- Batch completed Time taken: 29.6 mins --------------- </pre> ## GGUF GGUF versions are available [here](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo-GGUF)
Cyborg-AI/mistralai-Code-Instruct-Finetune-test
Cyborg-AI
2024-01-28T07:38:17Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-28T07:34:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Jeyong/SOLAR-10.7B-dpo-v1-awq
Jeyong
2024-01-28T07:38:09Z
62
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "SOLAR-10.7B", "ko", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us" ]
text-generation
2024-01-28T07:10:21Z
--- language: - ko pipeline_tag: text-generation tags: - SOLAR-10.7B license: apache-2.0 --- # SOLAR-10.7B ### Model Details - Base Model: [hyeogi/SOLAR-10.7B-dpo-v1](https://huggingface.co/hyeogi/SOLAR-10.7B-dpo-v1) ### Quantization - AWQ applied using the following parameters. - zero_point: True - q_group_size: 128 - w_bit: 4 - version: GEMM
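A minimal loading sketch; recent transformers versions read the AWQ quantization config directly from the checkpoint (the autoawq package must be installed), so the usual causal-LM API applies:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Jeyong/SOLAR-10.7B-dpo-v1-awq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# AWQ parameters (w_bit=4, GEMM) are picked up from the checkpoint's quantization config.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("The capital of South Korea is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```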
LoneStriker/WestLake-7B-v2-laser-truthy-dpo-GGUF
LoneStriker
2024-01-28T07:36:10Z
4
3
transformers
[ "transformers", "gguf", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-01-28T06:56:44Z
--- library_name: transformers license: apache-2.0 --- # WestLake-7B-v2-laser-truthy-dpo ![westlake-header](westlake-header.png) ## Process + Trained [cognitivecomputations/WestLake-7B-v2-laser](https://huggingface.co/cognitivecomputations/WestLake-7B-v2-laser) on jondurbin/truthy-dpo-v0.1 + Completed 2 epochs + 2e-5 learning rate ## Evaluations This model is experimental, and this fine-tune may or may not retain the intentions of the original model. <pre>----Benchmark Complete---- 2024-01-27 16:44:07 Time taken: 29.6 mins Prompt Format: Mistral Model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo Score (v2): 73.39 Parseable: 169.0 --------------- Batch completed Time taken: 29.6 mins --------------- </pre> ## GGUF GGUF versions are available [here](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo-GGUF)
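A minimal llama-cpp-python sketch for the GGUF quants; the quant filename below is an assumption, and the Mistral prompt format follows the "Prompt Format: Mistral" line in the benchmark output above:

```python
from llama_cpp import Llama

# Download a quant from this repo first; this filename is an assumption.
llm = Llama(model_path="WestLake-7B-v2-laser-truthy-dpo.Q4_K_M.gguf", n_ctx=4096)

# Mistral-style instruction prompt, per the benchmark note in the card.
out = llm("[INST] Give one sentence about truthfulness in LLMs. [/INST]", max_tokens=96)
print(out["choices"][0]["text"])
```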