Dataset columns (types and observed ranges):

| Column | Type | Range / values |
| --- | --- | --- |
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-09 00:41:25 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string (categorical) | 549 values |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string (categorical) | 55 values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-09 00:41:08 |
| card | string | length 11 – 1.01M |
CISCai/Codestral-22B-v0.1-SOTA-GGUF
CISCai
2024-06-04T20:27:31Z
42
0
null
[ "gguf", "code", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "base_model:mistralai/Codestral-22B-v0.1", "base_model:quantized:mistralai/Codestral-22B-v0.1", "license:other", "region:us", "imatrix", "conversational" ]
null
2024-05-30T18:55:37Z
--- inference: false license: other license_name: mnpl license_link: https://mistral.ai/licenses/MNPL-0.1.md tags: - code language: - code base_model: mistralai/Codestral-22B-v0.1 model_creator: Mistral AI model_name: Codestral-22B-v0.1 model_type: mistral datasets: - m-a-p/CodeFeedback-Filtered-Instruction quantized_by: CISC --- # Codestral-22B-v0.1 - SOTA GGUF - Model creator: [Mistral AI](https://huggingface.co/mistralai) - Original model: [Codestral-22B-v0.1](https://huggingface.co/mistralai/Codestral-22B-v0.1) <!-- description start --> ## Description This repo contains State Of The Art quantized GGUF format model files for [Codestral-22B-v0.1](https://huggingface.co/mistralai/Codestral-22B-v0.1). Quantization was done with an importance matrix that was trained for ~1M tokens (256 batches of 4096 tokens) of answers from the [CodeFeedback-Filtered-Instruction](https://huggingface.co/datasets/m-a-p/CodeFeedback-Filtered-Instruction) dataset. The embedded chat template has been extended to support function calling via the OpenAI-compatible `tools` parameter, and Fill-in-Middle token metadata has been added; see the [example](#simple-llama-cpp-python-example-fill-in-middle-code). NOTE: Mistral's FIM requires support for [SPM infill mode](https://github.com/abetlen/llama-cpp-python/pull/1492)! <!-- description end --> <!-- prompt-template start --> ## Prompt template: Mistral v3 ``` [AVAILABLE_TOOLS] [{"name": "function_name", "description": "Description", "parameters": {...}}, ...][/AVAILABLE_TOOLS][INST] {prompt}[/INST] ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv3 files are compatible with llama.cpp from February 27th 2024 onwards, as of commit [0becb22](https://github.com/ggerganov/llama.cpp/commit/0becb22ac05b6542bd9d5f2235691aa1d3d4d307). They are also compatible with many third-party UIs and libraries provided they are built using a recent llama.cpp. 
## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_IQ1_S - 1-bit quantization in super-blocks with an importance matrix applied, effectively using 1.56 bits per weight (bpw) * GGML_TYPE_IQ1_M - 1-bit quantization in super-blocks with an importance matrix applied, effectively using 1.75 bpw * GGML_TYPE_IQ2_XXS - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.06 bpw * GGML_TYPE_IQ2_XS - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.31 bpw * GGML_TYPE_IQ2_S - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.5 bpw * GGML_TYPE_IQ2_M - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.7 bpw * GGML_TYPE_IQ3_XXS - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.06 bpw * GGML_TYPE_IQ3_XS - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.3 bpw * GGML_TYPE_IQ3_S - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.44 bpw * GGML_TYPE_IQ3_M - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.66 bpw * GGML_TYPE_IQ4_XS - 4-bit quantization in super-blocks with an importance matrix applied, effectively using 4.25 bpw * GGML_TYPE_IQ4_NL - 4-bit non-linearly mapped quantization with an importance matrix applied, effectively using 4.5 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [Codestral-22B-v0.1.IQ1_S.gguf](https://huggingface.co/CISCai/Codestral-22B-v0.1-SOTA-GGUF/blob/main/Codestral-22B-v0.1.IQ1_S.gguf) | IQ1_S | 1 | 4.3 GB| 5.3 GB | smallest, significant quality loss - **TBD**: Waiting for [this issue](https://github.com/ggerganov/llama.cpp/issues/5996) to be resolved | | [Codestral-22B-v0.1.IQ1_M.gguf](https://huggingface.co/CISCai/Codestral-22B-v0.1-SOTA-GGUF/blob/main/Codestral-22B-v0.1.IQ1_M.gguf) | IQ1_M | 1 | 4.8 GB| 5.8 GB | very small, significant quality loss | | [Codestral-22B-v0.1.IQ2_XXS.gguf](https://huggingface.co/CISCai/Codestral-22B-v0.1-SOTA-GGUF/blob/main/Codestral-22B-v0.1.IQ2_XXS.gguf) | IQ2_XXS | 2 | 5.4 GB| 6.4 GB | very small, high quality loss | | [Codestral-22B-v0.1.IQ2_XS.gguf](https://huggingface.co/CISCai/Codestral-22B-v0.1-SOTA-GGUF/blob/main/Codestral-22B-v0.1.IQ2_XS.gguf) | IQ2_XS | 2 | 6.0 GB| 7.0 GB | very small, high quality loss | | [Codestral-22B-v0.1.IQ2_S.gguf](https://huggingface.co/CISCai/Codestral-22B-v0.1-SOTA-GGUF/blob/main/Codestral-22B-v0.1.IQ2_S.gguf) | IQ2_S | 2 | 6.4 GB| 7.4 GB | small, substantial quality loss | | [Codestral-22B-v0.1.IQ2_M.gguf](https://huggingface.co/CISCai/Codestral-22B-v0.1-SOTA-GGUF/blob/main/Codestral-22B-v0.1.IQ2_M.gguf) | IQ2_M | 2 | 6.9 GB| 7.9 GB | small, greater quality loss | | [Codestral-22B-v0.1.IQ3_XXS.gguf](https://huggingface.co/CISCai/Codestral-22B-v0.1-SOTA-GGUF/blob/main/Codestral-22B-v0.1.IQ3_XXS.gguf) | IQ3_XXS | 3 | 7.9 GB| 8.9 GB | very small, high quality loss | | [Codestral-22B-v0.1.IQ3_XS.gguf](https://huggingface.co/CISCai/Codestral-22B-v0.1-SOTA-GGUF/blob/main/Codestral-22B-v0.1.IQ3_XS.gguf) | IQ3_XS | 3 | 8.4 GB| 9.4 GB | small, substantial 
quality loss | | [Codestral-22B-v0.1.IQ3_S.gguf](https://huggingface.co/CISCai/Codestral-22B-v0.1-SOTA-GGUF/blob/main/Codestral-22B-v0.1.IQ3_S.gguf) | IQ3_S | 3 | 8.9 GB| 9.9 GB | small, greater quality loss | | [Codestral-22B-v0.1.IQ3_M.gguf](https://huggingface.co/CISCai/Codestral-22B-v0.1-SOTA-GGUF/blob/main/Codestral-22B-v0.1.IQ3_M.gguf) | IQ3_M | 3 | 9.2 GB| 10.2 GB | medium, balanced quality - recommended | | [Codestral-22B-v0.1.IQ4_XS.gguf](https://huggingface.co/CISCai/Codestral-22B-v0.1-SOTA-GGUF/blob/main/Codestral-22B-v0.1.IQ4_XS.gguf) | IQ4_XS | 4 | 11.5 GB| 12.5 GB | small, substantial quality loss | Generated importance matrix file: [Codestral-22B-v0.1.imatrix.dat](https://huggingface.co/CISCai/Codestral-22B-v0.1-SOTA-GGUF/blob/main/Codestral-22B-v0.1.imatrix.dat) **Note**: the above RAM figures assume no GPU offloading with 4K context. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [0becb22](https://github.com/ggerganov/llama.cpp/commit/0becb22ac05b6542bd9d5f2235691aa1d3d4d307) or later. ```shell ./main -ngl 57 -m Codestral-22B-v0.1.IQ4_XS.gguf --color -c 32768 --temp 0 --repeat-penalty 1.1 -p "[AVAILABLE_TOOLS] {tools}[/AVAILABLE_TOOLS][INST] {prompt}[/INST]" ``` Change `-ngl 57` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` If you are low on V/RAM try quantizing the K-cache with `-ctk q8_0` or even `-ctk q4_0` for big memory savings (depending on context size). There is a similar option for V-cache (`-ctv`), however that is [not working yet](https://github.com/ggerganov/llama.cpp/issues/4425). For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) module. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://llama-cpp-python.readthedocs.io/en/latest/). #### First install the package Run one of the following commands, according to your system: ```shell # Prebuilt wheel with basic CPU support pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu # Prebuilt wheel with NVidia CUDA acceleration pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121 (or cu122 etc.) 
# Prebuilt wheel with Metal GPU acceleration pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/metal # Build base version with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUDA=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # Or with Vulkan acceleration CMAKE_ARGS="-DLLAMA_VULKAN=on" pip install llama-cpp-python # Or with Kompute acceleration CMAKE_ARGS="-DLLAMA_KOMPUTE=on" pip install llama-cpp-python # Or with SYCL acceleration CMAKE_ARGS="-DLLAMA_SYCL=on -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUDA=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Chat Completion API llm = Llama(model_path="./Codestral-22B-v0.1.IQ4_XS.gguf", n_gpu_layers=57, n_ctx=32768) print(llm.create_chat_completion( repeat_penalty = 1.1, messages = [ { "role": "user", "content": "Pick a LeetCode challenge and solve it in Python." } ] )) ``` #### Simple llama-cpp-python example fill-in-middle code ```python from llama_cpp import Llama # Completion API prompt = "def add(" suffix = "\n return sum\n\n" llm = Llama(model_path="./Codestral-22B-v0.1.IQ4_XS.gguf", n_gpu_layers=57, n_ctx=32768, spm_infill=True) output = llm.create_completion( temperature = 0.0, repeat_penalty = 1.0, prompt = prompt, suffix = suffix ) # Models sometimes repeat suffix in response, attempt to filter that response = output["choices"][0]["text"] response_stripped = response.rstrip() unwanted_response_suffix = suffix.rstrip() unwanted_response_length = len(unwanted_response_suffix) filtered = False if unwanted_response_suffix and response_stripped[-unwanted_response_length:] == unwanted_response_suffix: response = response_stripped[:-unwanted_response_length] filtered = True print(f"Fill-in-Middle completion{' (filtered)' if filtered else ''}:\n\n{prompt}\033[32m{response}\033[0m{suffix}") ``` #### Simple llama-cpp-python example function calling code ```python from llama_cpp import Llama # Chat Completion API llm = Llama(model_path="./Codestral-22B-v0.1.IQ4_XS.gguf", n_gpu_layers=57, n_ctx=32768) print(llm.create_chat_completion( temperature = 0.0, repeat_penalty = 1.1, messages = [ { "role": "user", "content": "In a physics experiment, you are given an object with a mass of 50 kilograms and a volume of 10 cubic meters. Can you use the 'calculate_density' function to determine the density of this object?" }, { # The tool_calls is from the response to the above with tool_choice active "role": "assistant", "content": None, "tool_calls": [ { "id": "call__0_calculate_density_cmpl-...", "type": "function", "function": { "name": "calculate_density", "arguments": '{"mass": "50", "volume": "10"}' } } ] }, { # The tool_call_id is from tool_calls and content is the result from the function call you made "role": "tool", "content": "5.0", "tool_call_id": "call__0_calculate_density_cmpl-..." 
} ], tools=[{ "type": "function", "function": { "name": "calculate_density", "description": "Calculates the density of an object.", "parameters": { "type": "object", "properties": { "mass": { "type": "integer", "description": "The mass of the object." }, "volume": { "type": "integer", "description": "The volume of the object." } }, "required": [ "mass", "volume" ] } } }], #tool_choice={ # "type": "function", # "function": { # "name": "calculate_density" # } #} )) ``` <!-- README_GGUF.md-how-to-run end --> <!-- original-model-card start --> # Model Card for Codestral-22B-v0.1 Codestral-22B-v0.1 is trained on a diverse dataset of 80+ programming languages, including the most popular ones, such as Python, Java, C, C++, JavaScript, and Bash (more details in the [Blogpost](https://mistral.ai/news/codestral/)). The model can be queried: - As instruct, for instance to answer any questions about a code snippet (write documentation, explain, factorize) or to generate code following specific indications - As Fill in the Middle (FIM), to predict the middle tokens between a prefix and a suffix (very useful for software development add-ons like in VS Code) ## Installation It is recommended to use `mistralai/Codestral-22B-v0.1` with [mistral-inference](https://github.com/mistralai/mistral-inference). ``` pip install mistral_inference ``` ## Download ```py from huggingface_hub import snapshot_download from pathlib import Path mistral_models_path = Path.home().joinpath('mistral_models', 'Codestral-22B-v0.1') mistral_models_path.mkdir(parents=True, exist_ok=True) snapshot_download(repo_id="mistralai/Codestral-22B-v0.1", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path) ``` ### Chat After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. ``` mistral-chat $HOME/mistral_models/Codestral-22B-v0.1 --instruct --max_tokens 256 ``` This will generate an answer to "Write me a function that computes fibonacci in Rust" and should give something along the following lines: ``` Sure, here's a simple implementation of a function that computes the Fibonacci sequence in Rust. This function takes an integer `n` as an argument and returns the `n`th Fibonacci number. fn fibonacci(n: u32) -> u32 { match n { 0 => 0, 1 => 1, _ => fibonacci(n - 1) + fibonacci(n - 2), } } fn main() { let n = 10; println!("The {}th Fibonacci number is: {}", n, fibonacci(n)); } This function uses recursion to calculate the Fibonacci number. However, it's not the most efficient solution because it performs a lot of redundant calculations. A more efficient solution would use a loop to iteratively calculate the Fibonacci numbers. 
``` ### Fill-in-the-middle (FIM) After installing `mistral_inference` and running `pip install --upgrade mistral_common` to make sure you have mistral_common>=1.2 installed: ```py from mistral_inference.model import Transformer from mistral_inference.generate import generate from mistral_common.tokens.tokenizers.mistral import MistralTokenizer from mistral_common.tokens.instruct.request import FIMRequest tokenizer = MistralTokenizer.v3() model = Transformer.from_folder("~/codestral-22B-240529") prefix = """def add(""" suffix = """ return sum""" request = FIMRequest(prompt=prefix, suffix=suffix) tokens = tokenizer.encode_fim(request).tokens out_tokens, _ = generate([tokens], model, max_tokens=256, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id) result = tokenizer.decode(out_tokens[0]) middle = result.split(suffix)[0].strip() print(middle) ``` Should give something along the following lines: ``` num1, num2): # Add two numbers sum = num1 + num2 # return the sum ``` ## Limitations Codestral-22B-v0.1 does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs. ## License Codestral-22B-v0.1 is released under the `MNPL-0.1` license. ## The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Jean-Malo Delignon, Jia Li, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickael Seznec, Nicolas Schuhl, Patrick von Platen, Romain Sauvestre, Pierre Stock, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Thibault Schueller, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall
mariadjadi/fine_tuned_mistral_legal_V2
mariadjadi
2024-06-04T20:20:27Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-30T18:50:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/phi3-stellargalaxy8-merged-GGUF
mradermacher
2024-06-04T20:12:47Z
9
0
transformers
[ "transformers", "gguf", "en", "base_model:zachaman/phi3-stellargalaxy8-merged", "base_model:quantized:zachaman/phi3-stellargalaxy8-merged", "endpoints_compatible", "region:us", "conversational" ]
null
2024-06-04T19:54:23Z
--- base_model: zachaman/phi3-stellargalaxy8-merged language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/zachaman/phi3-stellargalaxy8-merged <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/phi3-stellargalaxy8-merged-GGUF/resolve/main/phi3-stellargalaxy8-merged.Q2_K.gguf) | Q2_K | 1.5 | | | [GGUF](https://huggingface.co/mradermacher/phi3-stellargalaxy8-merged-GGUF/resolve/main/phi3-stellargalaxy8-merged.IQ3_XS.gguf) | IQ3_XS | 1.7 | | | [GGUF](https://huggingface.co/mradermacher/phi3-stellargalaxy8-merged-GGUF/resolve/main/phi3-stellargalaxy8-merged.IQ3_S.gguf) | IQ3_S | 1.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/phi3-stellargalaxy8-merged-GGUF/resolve/main/phi3-stellargalaxy8-merged.Q3_K_S.gguf) | Q3_K_S | 1.8 | | | [GGUF](https://huggingface.co/mradermacher/phi3-stellargalaxy8-merged-GGUF/resolve/main/phi3-stellargalaxy8-merged.IQ3_M.gguf) | IQ3_M | 2.0 | | | [GGUF](https://huggingface.co/mradermacher/phi3-stellargalaxy8-merged-GGUF/resolve/main/phi3-stellargalaxy8-merged.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/phi3-stellargalaxy8-merged-GGUF/resolve/main/phi3-stellargalaxy8-merged.IQ4_XS.gguf) | IQ4_XS | 2.2 | | | [GGUF](https://huggingface.co/mradermacher/phi3-stellargalaxy8-merged-GGUF/resolve/main/phi3-stellargalaxy8-merged.Q3_K_L.gguf) | Q3_K_L | 2.2 | | | [GGUF](https://huggingface.co/mradermacher/phi3-stellargalaxy8-merged-GGUF/resolve/main/phi3-stellargalaxy8-merged.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/phi3-stellargalaxy8-merged-GGUF/resolve/main/phi3-stellargalaxy8-merged.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/phi3-stellargalaxy8-merged-GGUF/resolve/main/phi3-stellargalaxy8-merged.Q5_K_S.gguf) | Q5_K_S | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/phi3-stellargalaxy8-merged-GGUF/resolve/main/phi3-stellargalaxy8-merged.Q5_K_M.gguf) | Q5_K_M | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/phi3-stellargalaxy8-merged-GGUF/resolve/main/phi3-stellargalaxy8-merged.Q6_K.gguf) | Q6_K | 3.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/phi3-stellargalaxy8-merged-GGUF/resolve/main/phi3-stellargalaxy8-merged.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/phi3-stellargalaxy8-merged-GGUF/resolve/main/phi3-stellargalaxy8-merged.f16.gguf) | f16 | 7.7 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's 
thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
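If the linked READMEs are more detail than you need, here is a minimal llama-cpp-python sketch for one of the files above. The file name is the Q4_K_M entry from the table; `n_ctx` and the prompt are illustrative assumptions.

```python
from llama_cpp import Llama

# Assumes the Q4_K_M file from the table above was downloaded locally.
llm = Llama(model_path="./phi3-stellargalaxy8-merged.Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in two sentences."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```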
research-dump/Llama-2-13b-chat-hf_mixed_sft_timeline_forward_no_instruction
research-dump
2024-06-04T20:08:49Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-06-03T11:20:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tyzhu/find_marker_both_sent_train_400_eval_40_first_permute_Qwen_Qwen1.5-4B_3e-4_lora
tyzhu
2024-06-04T20:04:11Z
3
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:Qwen/Qwen1.5-4B", "base_model:adapter:Qwen/Qwen1.5-4B", "license:other", "region:us" ]
null
2024-06-04T14:22:57Z
--- license: other base_model: Qwen/Qwen1.5-4B tags: - generated_from_trainer metrics: - accuracy model-index: - name: find_marker_both_sent_train_400_eval_40_first_permute_Qwen_Qwen1.5-4B_3e-4_lora results: [] library_name: peft --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # find_marker_both_sent_train_400_eval_40_first_permute_Qwen_Qwen1.5-4B_3e-4_lora This model is a fine-tuned version of [Qwen/Qwen1.5-4B](https://huggingface.co/Qwen/Qwen1.5-4B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3150 - Accuracy: 0.7659 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 1 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 50.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 1.5824 | 0.9933 | 130 | 1.1797 | 0.6851 | | 0.7977 | 1.9943 | 261 | 0.5359 | 0.7429 | | 0.3387 | 2.9952 | 392 | 0.3361 | 0.7614 | | 0.1537 | 3.9962 | 523 | 0.2855 | 0.7653 | | 0.1389 | 4.9971 | 654 | 0.2712 | 0.7666 | | 0.1383 | 5.9981 | 785 | 0.2502 | 0.7676 | | 0.1252 | 6.9990 | 916 | 0.2457 | 0.7684 | | 0.122 | 8.0 | 1047 | 0.2310 | 0.7694 | | 0.1169 | 8.9933 | 1177 | 0.2316 | 0.7689 | | 0.1167 | 9.9943 | 1308 | 0.2311 | 0.7699 | | 0.1161 | 10.9952 | 1439 | 0.2159 | 0.7708 | | 0.1126 | 11.9962 | 1570 | 0.2188 | 0.7694 | | 0.1088 | 12.9971 | 1701 | 0.2270 | 0.7661 | | 0.1104 | 13.9981 | 1832 | 0.2181 | 0.7677 | | 0.1076 | 14.9990 | 1963 | 0.2135 | 0.7680 | | 0.1069 | 16.0 | 2094 | 0.2219 | 0.7670 | | 0.1048 | 16.9933 | 2224 | 0.2298 | 0.7668 | | 0.1044 | 17.9943 | 2355 | 0.2341 | 0.7666 | | 0.1061 | 18.9952 | 2486 | 0.2628 | 0.7660 | | 0.1104 | 19.9962 | 2617 | 0.2712 | 0.7651 | | 0.1111 | 20.9971 | 2748 | 0.2921 | 0.7652 | | 0.1102 | 21.9981 | 2879 | 0.2700 | 0.7660 | | 0.1049 | 22.9990 | 3010 | 0.2905 | 0.7662 | | 0.1024 | 24.0 | 3141 | 0.2852 | 0.7664 | | 0.1079 | 24.9933 | 3271 | 0.2418 | 0.7653 | | 0.1066 | 25.9943 | 3402 | 0.2759 | 0.7662 | | 0.1054 | 26.9952 | 3533 | 0.2958 | 0.7656 | | 0.105 | 27.9962 | 3664 | 0.3109 | 0.7663 | | 0.1066 | 28.9971 | 3795 | 0.3062 | 0.7660 | | 0.1048 | 29.9981 | 3926 | 0.2714 | 0.7660 | | 0.1043 | 30.9990 | 4057 | 0.2821 | 0.7662 | | 0.1039 | 32.0 | 4188 | 0.2961 | 0.7661 | | 0.1055 | 32.9933 | 4318 | 0.2942 | 0.7662 | | 0.1045 | 33.9943 | 4449 | 0.3152 | 0.7659 | | 0.1045 | 34.9952 | 4580 | 0.2828 | 0.7666 | | 0.1038 | 35.9962 | 4711 | 0.2355 | 0.7662 | | 0.102 | 36.9971 | 4842 | 0.2926 | 0.7664 | | 0.103 | 37.9981 | 4973 | 0.2825 | 0.7660 | | 0.1061 | 38.9990 | 5104 | 0.2899 | 0.7663 | | 0.1064 | 40.0 | 5235 | 0.2930 | 0.7660 | | 0.105 | 40.9933 | 5365 | 0.2806 | 0.7657 | | 0.1038 | 41.9943 | 5496 | 0.2973 | 0.7664 | | 0.1016 | 42.9952 | 5627 | 0.3379 | 0.7662 | | 0.1046 | 43.9962 | 5758 | 0.3200 | 0.7655 | | 0.1039 | 44.9971 | 5889 | 0.3151 | 0.7652 | | 0.107 | 45.9981 | 6020 | 0.2969 | 
0.7658 | | 0.1059 | 46.9990 | 6151 | 0.3146 | 0.7659 | | 0.1058 | 48.0 | 6282 | 0.3070 | 0.7656 | | 0.103 | 48.9933 | 6412 | 0.3060 | 0.7660 | | 0.1012 | 49.6657 | 6500 | 0.3150 | 0.7659 | ### Framework versions - PEFT 0.5.0 - Transformers 4.40.2 - Pytorch 2.3.0 - Datasets 2.19.1 - Tokenizers 0.19.1
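For readers who want to reproduce the setup, the hyperparameters listed above map onto the standard `transformers` `TrainingArguments` roughly as follows. This is a sketch: the card does not include the actual training script, and `output_dir` is an illustrative name.

```python
from transformers import TrainingArguments

# Sketch mirroring the reported hyperparameters; the reconstruction is approximate.
args = TrainingArguments(
    output_dir="qwen1.5-4b-find-marker-lora",  # assumed name
    learning_rate=3e-4,
    per_device_train_batch_size=1,  # x4 GPUs x8 accumulation = total train batch 32
    per_device_eval_batch_size=2,   # x4 GPUs = total eval batch 8
    gradient_accumulation_steps=8,
    seed=42,
    lr_scheduler_type="constant",
    warmup_ratio=0.05,  # reported, though a strictly constant schedule does not warm up
    num_train_epochs=50,
)
```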
Masioki/gttbsc_phi-freezed-best
Masioki
2024-06-04T20:03:51Z
34
0
transformers
[ "transformers", "tensorboard", "safetensors", "single-embedding-sentence-classifier", "generated_from_trainer", "en", "dataset:asapp/slue-phase-2", "model-index", "endpoints_compatible", "region:us" ]
null
2024-05-31T10:59:14Z
--- tags: - generated_from_trainer model-index: - name: gttbsc_phi-freezed-best results: - task: type: dialogue act classification dataset: name: asapp/slue-phase-2 type: hvb metrics: - name: F1 macro E2E type: F1 macro value: 65.66 - name: F1 macro GT type: F1 macro value: 69.97 datasets: - asapp/slue-phase-2 language: - en metrics: - f1-macro --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gttbsc_phi-freezed-best Ground-truth-text-based multi-label dialogue act classification (DAC) ## Model description Backbone: [Phi 3 mini](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) Pooling: Weighted mean pooling Multi-label classification head: 2 dense layers with two dropouts (0.3) and a Tanh activation in between (see the sketch after this card) ## Training and evaluation data Trained on ground truth. Evaluated on ground truth (GT) and normalized [Whisper small](https://huggingface.co/openai/whisper-small) transcripts (E2E). ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
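The head described above is only sketched in prose; a minimal PyTorch reconstruction follows. The hidden size (3072 for Phi 3 mini), the label count, the learnable-scoring form of weighted mean pooling, and all names are assumptions for illustration, not taken from the training code.

```python
import torch
import torch.nn as nn

class WeightedMeanPoolingHead(nn.Module):
    """Sketch: weighted mean pooling over backbone hidden states, then two
    dense layers with dropout 0.3 and a Tanh activation in between. The
    exact pooling scheme and sizes are assumptions, not from the card."""

    def __init__(self, hidden_size: int = 3072, num_labels: int = 18):
        super().__init__()
        # One common way to implement weighted mean pooling: learn a scalar
        # score per position and softmax over the unmasked sequence.
        self.score = nn.Linear(hidden_size, 1)
        self.head = nn.Sequential(
            nn.Dropout(0.3),
            nn.Linear(hidden_size, hidden_size),
            nn.Tanh(),
            nn.Dropout(0.3),
            nn.Linear(hidden_size, num_labels),  # multi-label logits
        )

    def forward(self, hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq, hidden); attention_mask: (batch, seq)
        scores = self.score(hidden_states).squeeze(-1)
        scores = scores.masked_fill(attention_mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1)                   # (batch, seq)
        pooled = torch.einsum("bs,bsh->bh", weights, hidden_states)
        return self.head(pooled)  # train with BCEWithLogitsLoss for multi-label
```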
rmalik95/lora_model_TEST
rmalik95
2024-06-04T20:02:41Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-04T20:02:17Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** rmalik95 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
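A minimal loading sketch for this checkpoint, assuming Unsloth is installed; `max_seq_length` and `load_in_4bit` are assumptions not stated in the card.

```python
from unsloth import FastLanguageModel

# max_seq_length and load_in_4bit are assumed; adjust to your setup.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="rmalik95/lora_model_TEST",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path
```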
aymanboufarhi/chat-bot2B-fstt
aymanboufarhi
2024-06-04T20:02:21Z
141
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-04T19:59:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
candrews1971/ppo-LunarLander-v2
candrews1971
2024-06-04T19:54:05Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-06-04T19:53:47Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 287.79 +/- 15.14 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is assumed from the repo name; check the repo's files):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename is an assumption based on common naming; verify it in the repo.
checkpoint = load_from_hub("candrews1971/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
Reihaneh/wav2vec2_fy_common_voice_28
Reihaneh
2024-06-04T19:52:46Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-06-04T09:45:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LucasSantiago257/gemma-2b-8bits-gptq
LucasSantiago257
2024-06-04T19:48:54Z
78
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "gptq", "region:us" ]
text-generation
2024-06-04T19:39:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
r1208/Llama-3-Open-Ko-8B-Instruct-preview_8bit_128r
r1208
2024-06-04T19:48:09Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2024-06-04T19:34:15Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Hevagog/a2c-PandaReachDense-v3
Hevagog
2024-06-04T19:45:11Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-06-04T19:40:50Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v3 type: PandaReachDense-v3 metrics: - type: mean_reward value: -0.17 +/- 0.07 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v3** This is a trained model of an **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is an assumption based on the usual huggingface_sb3 naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub; the filename is assumed to follow
# the standard huggingface_sb3 naming scheme for this repo.
checkpoint = load_from_hub("Hevagog/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
bartowski/Awanllm-Llama-3-8B-Dolfin-v1.0-GGUF
bartowski
2024-06-04T19:43:11Z
81
0
null
[ "gguf", "text-generation", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-06-04T19:24:32Z
--- license: llama3 quantized_by: bartowski pipeline_tag: text-generation --- ## Llamacpp imatrix Quantizations of Awanllm-Llama-3-8B-Dolfin-v1.0 Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3070">b3070</a> for quantization. Original model: https://huggingface.co/AwanLLM/Awanllm-Llama-3-8B-Dolfin-v1.0 All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) ## Prompt format ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|> <|eot_id|> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [Awanllm-Llama-3-8B-Dolfin-v1.0-Q8_0.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Dolfin-v1.0-GGUF/blob/main/Awanllm-Llama-3-8B-Dolfin-v1.0-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. | | [Awanllm-Llama-3-8B-Dolfin-v1.0-Q6_K.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Dolfin-v1.0-GGUF/blob/main/Awanllm-Llama-3-8B-Dolfin-v1.0-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. | | [Awanllm-Llama-3-8B-Dolfin-v1.0-Q5_K_M.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Dolfin-v1.0-GGUF/blob/main/Awanllm-Llama-3-8B-Dolfin-v1.0-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. | | [Awanllm-Llama-3-8B-Dolfin-v1.0-Q5_K_S.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Dolfin-v1.0-GGUF/blob/main/Awanllm-Llama-3-8B-Dolfin-v1.0-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. | | [Awanllm-Llama-3-8B-Dolfin-v1.0-Q4_K_M.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Dolfin-v1.0-GGUF/blob/main/Awanllm-Llama-3-8B-Dolfin-v1.0-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [Awanllm-Llama-3-8B-Dolfin-v1.0-Q4_K_S.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Dolfin-v1.0-GGUF/blob/main/Awanllm-Llama-3-8B-Dolfin-v1.0-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. | | [Awanllm-Llama-3-8B-Dolfin-v1.0-IQ4_XS.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Dolfin-v1.0-GGUF/blob/main/Awanllm-Llama-3-8B-Dolfin-v1.0-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Awanllm-Llama-3-8B-Dolfin-v1.0-Q3_K_L.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Dolfin-v1.0-GGUF/blob/main/Awanllm-Llama-3-8B-Dolfin-v1.0-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. | | [Awanllm-Llama-3-8B-Dolfin-v1.0-Q3_K_M.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Dolfin-v1.0-GGUF/blob/main/Awanllm-Llama-3-8B-Dolfin-v1.0-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. | | [Awanllm-Llama-3-8B-Dolfin-v1.0-IQ3_M.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Dolfin-v1.0-GGUF/blob/main/Awanllm-Llama-3-8B-Dolfin-v1.0-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. 
| | [Awanllm-Llama-3-8B-Dolfin-v1.0-Q3_K_S.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Dolfin-v1.0-GGUF/blob/main/Awanllm-Llama-3-8B-Dolfin-v1.0-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. | | [Awanllm-Llama-3-8B-Dolfin-v1.0-IQ3_XS.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Dolfin-v1.0-GGUF/blob/main/Awanllm-Llama-3-8B-Dolfin-v1.0-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [Awanllm-Llama-3-8B-Dolfin-v1.0-IQ3_XXS.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Dolfin-v1.0-GGUF/blob/main/Awanllm-Llama-3-8B-Dolfin-v1.0-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [Awanllm-Llama-3-8B-Dolfin-v1.0-Q2_K.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Dolfin-v1.0-GGUF/blob/main/Awanllm-Llama-3-8B-Dolfin-v1.0-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. | | [Awanllm-Llama-3-8B-Dolfin-v1.0-IQ2_M.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Dolfin-v1.0-GGUF/blob/main/Awanllm-Llama-3-8B-Dolfin-v1.0-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [Awanllm-Llama-3-8B-Dolfin-v1.0-IQ2_S.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Dolfin-v1.0-GGUF/blob/main/Awanllm-Llama-3-8B-Dolfin-v1.0-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. | | [Awanllm-Llama-3-8B-Dolfin-v1.0-IQ2_XS.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Dolfin-v1.0-GGUF/blob/main/Awanllm-Llama-3-8B-Dolfin-v1.0-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. | ## Downloading using huggingface-cli First, make sure you have huggingface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/Awanllm-Llama-3-8B-Dolfin-v1.0-GGUF --include "Awanllm-Llama-3-8B-Dolfin-v1.0-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/Awanllm-Llama-3-8B-Dolfin-v1.0-GGUF --include "Awanllm-Llama-3-8B-Dolfin-v1.0-Q8_0.gguf/*" --local-dir Awanllm-Llama-3-8B-Dolfin-v1.0-Q8_0 ``` You can either specify a new local-dir (Awanllm-Llama-3-8B-Dolfin-v1.0-Q8_0) or download them all in place (./) ## Which file should I choose? A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs. quality is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
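As a quick sanity check once you have picked and downloaded a file, here is a minimal llama-cpp-python sketch (the model path and context size are placeholders, not part of this repo; the chat template embedded in the GGUF handles the prompt format shown above):

```python
from llama_cpp import Llama

# Point model_path at whichever quant you downloaded (placeholder path).
llm = Llama(model_path="./Awanllm-Llama-3-8B-Dolfin-v1.0-Q4_K_M.gguf", n_ctx=8192)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```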
welsachy/bert-base-uncased-finetuned-depression
welsachy
2024-06-04T19:39:57Z
110
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-04T19:39:38Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: model_checkpoints results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_checkpoints This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.6941 - Precision: 0.6667 - Recall: 0.6667 - F1: 0.6667 - Accuracy: 0.6667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 225 | 0.7271 | 0.6589 | 0.6589 | 0.6589 | 0.6589 | | No log | 2.0 | 450 | 0.6941 | 0.6667 | 0.6667 | 0.6667 | 0.6667 | | 0.7284 | 3.0 | 675 | 0.7404 | 0.6656 | 0.6656 | 0.6656 | 0.6656 | | 0.7284 | 4.0 | 900 | 0.8450 | 0.6622 | 0.6622 | 0.6622 | 0.6622 | | 0.4293 | 5.0 | 1125 | 0.9263 | 0.6567 | 0.6567 | 0.6567 | 0.6567 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
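For quick inference with this classifier, a minimal sketch using the 🤗 pipeline API (the label names returned depend on how the fine-tune mapped its classes):

```python
from transformers import pipeline

# Text-classification pipeline backed by this checkpoint.
classifier = pipeline("text-classification", model="welsachy/bert-base-uncased-finetuned-depression")
print(classifier("I haven't been able to get out of bed all week."))
```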
Enpas/small-trsc-3
Enpas
2024-06-04T19:37:34Z
15
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-base", "base_model:finetune:openai/whisper-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-06-03T19:18:11Z
--- license: apache-2.0 base_model: openai/whisper-base tags: - generated_from_trainer model-index: - name: small-Cotrsc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small-Cotrsc This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0487 - eval_wer: 39.6655 - eval_runtime: 516.4929 - eval_samples_per_second: 0.67 - eval_steps_per_second: 0.085 - epoch: 0.4231 - step: 1200 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 1200 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.41.2 - Pytorch 2.1.2 - Datasets 2.19.2 - Tokenizers 0.19.1
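A minimal transcription sketch with the 🤗 pipeline API (the audio file path is a placeholder):

```python
from transformers import pipeline

# Automatic speech recognition with this fine-tuned Whisper checkpoint.
asr = pipeline("automatic-speech-recognition", model="Enpas/small-trsc-3")
print(asr("sample.wav")["text"])
```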
mradermacher/Mixtral_AI_ARCHIVE-GGUF
mradermacher
2024-06-04T19:36:18Z
23
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "endpoints_compatible", "region:us" ]
null
2024-06-04T19:10:44Z
--- base_model: LeroyDyer/Mixtral_AI_ARCHIVE language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/LeroyDyer/Mixtral_AI_ARCHIVE <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE-GGUF/resolve/main/Mixtral_AI_ARCHIVE.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE-GGUF/resolve/main/Mixtral_AI_ARCHIVE.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE-GGUF/resolve/main/Mixtral_AI_ARCHIVE.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE-GGUF/resolve/main/Mixtral_AI_ARCHIVE.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE-GGUF/resolve/main/Mixtral_AI_ARCHIVE.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE-GGUF/resolve/main/Mixtral_AI_ARCHIVE.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE-GGUF/resolve/main/Mixtral_AI_ARCHIVE.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE-GGUF/resolve/main/Mixtral_AI_ARCHIVE.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE-GGUF/resolve/main/Mixtral_AI_ARCHIVE.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE-GGUF/resolve/main/Mixtral_AI_ARCHIVE.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE-GGUF/resolve/main/Mixtral_AI_ARCHIVE.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE-GGUF/resolve/main/Mixtral_AI_ARCHIVE.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE-GGUF/resolve/main/Mixtral_AI_ARCHIVE.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE-GGUF/resolve/main/Mixtral_AI_ARCHIVE.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE-GGUF/resolve/main/Mixtral_AI_ARCHIVE.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some 
other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
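For a concrete starting point with these files, a minimal llama-cpp-python sketch (the file name matches the Q4_K_M entry in the table above; the path is wherever you downloaded it):

```python
from llama_cpp import Llama

# Plain completion with the downloaded quant (placeholder local path).
llm = Llama(model_path="Mixtral_AI_ARCHIVE.Q4_K_M.gguf")
print(llm("Hello,", max_tokens=64)["choices"][0]["text"])
```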
mradermacher/FusionNotus-Gk-MoE-13b-slerp-GGUF
mradermacher
2024-06-04T19:34:11Z
107
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B", "powermove72/GK-inv-MoE-0.1", "en", "base_model:powermove72/FusionNotus-Gk-MoE-13b-slerp", "base_model:quantized:powermove72/FusionNotus-Gk-MoE-13b-slerp", "endpoints_compatible", "region:us" ]
null
2024-06-04T18:47:55Z
--- base_model: powermove72/FusionNotus-Gk-MoE-13b-slerp language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B - powermove72/GK-inv-MoE-0.1 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/powermove72/FusionNotus-Gk-MoE-13b-slerp <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/FusionNotus-Gk-MoE-13b-slerp-GGUF/resolve/main/FusionNotus-Gk-MoE-13b-slerp.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/FusionNotus-Gk-MoE-13b-slerp-GGUF/resolve/main/FusionNotus-Gk-MoE-13b-slerp.IQ3_XS.gguf) | IQ3_XS | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/FusionNotus-Gk-MoE-13b-slerp-GGUF/resolve/main/FusionNotus-Gk-MoE-13b-slerp.Q3_K_S.gguf) | Q3_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/FusionNotus-Gk-MoE-13b-slerp-GGUF/resolve/main/FusionNotus-Gk-MoE-13b-slerp.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/FusionNotus-Gk-MoE-13b-slerp-GGUF/resolve/main/FusionNotus-Gk-MoE-13b-slerp.IQ3_M.gguf) | IQ3_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/FusionNotus-Gk-MoE-13b-slerp-GGUF/resolve/main/FusionNotus-Gk-MoE-13b-slerp.Q3_K_M.gguf) | Q3_K_M | 6.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/FusionNotus-Gk-MoE-13b-slerp-GGUF/resolve/main/FusionNotus-Gk-MoE-13b-slerp.Q3_K_L.gguf) | Q3_K_L | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/FusionNotus-Gk-MoE-13b-slerp-GGUF/resolve/main/FusionNotus-Gk-MoE-13b-slerp.IQ4_XS.gguf) | IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/FusionNotus-Gk-MoE-13b-slerp-GGUF/resolve/main/FusionNotus-Gk-MoE-13b-slerp.Q4_K_S.gguf) | Q4_K_S | 7.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/FusionNotus-Gk-MoE-13b-slerp-GGUF/resolve/main/FusionNotus-Gk-MoE-13b-slerp.Q4_K_M.gguf) | Q4_K_M | 7.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/FusionNotus-Gk-MoE-13b-slerp-GGUF/resolve/main/FusionNotus-Gk-MoE-13b-slerp.Q5_K_S.gguf) | Q5_K_S | 9.0 | | | [GGUF](https://huggingface.co/mradermacher/FusionNotus-Gk-MoE-13b-slerp-GGUF/resolve/main/FusionNotus-Gk-MoE-13b-slerp.Q5_K_M.gguf) | Q5_K_M | 9.2 | | | [GGUF](https://huggingface.co/mradermacher/FusionNotus-Gk-MoE-13b-slerp-GGUF/resolve/main/FusionNotus-Gk-MoE-13b-slerp.Q6_K.gguf) | Q6_K | 10.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/FusionNotus-Gk-MoE-13b-slerp-GGUF/resolve/main/FusionNotus-Gk-MoE-13b-slerp.Q8_0.gguf) | Q8_0 | 13.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) 
And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
sally9805/bert-base-uncased-finetuned-news-1915
sally9805
2024-06-04T19:33:53Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-03-01T05:46:05Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: bert-base-uncased model-index: - name: bert-base-uncased-finetuned-news-1915 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-news-1915 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 3.2291 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.5271 | 1.0 | 12485 | 3.3132 | | 3.4436 | 2.0 | 24970 | 3.2593 | | 3.4489 | 3.0 | 37455 | 3.2353 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
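A minimal masked-language-modeling sketch with the 🤗 pipeline API:

```python
from transformers import pipeline

# Fill-mask pipeline backed by this checkpoint; BERT uses the [MASK] token.
fill = pipeline("fill-mask", model="sally9805/bert-base-uncased-finetuned-news-1915")
print(fill("The president announced a new [MASK] policy."))
```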
Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-4_0bpw_exl2
Zoyd
2024-06-04T19:32:49Z
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "roleplay", "en", "arxiv:2212.04089", "base_model:KatyTheCutie/LemonadeRP-4.5.3", "base_model:merge:KatyTheCutie/LemonadeRP-4.5.3", "base_model:SanjiWatsuki/Kunoichi-7B", "base_model:merge:SanjiWatsuki/Kunoichi-7B", "base_model:SanjiWatsuki/Silicon-Maid-7B", "base_model:merge:SanjiWatsuki/Silicon-Maid-7B", "base_model:Sao10K/Fimbulvetr-11B-v2", "base_model:merge:Sao10K/Fimbulvetr-11B-v2", "base_model:mistralai/Mistral-7B-v0.1", "base_model:merge:mistralai/Mistral-7B-v0.1", "license:cc-by-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "exl2", "region:us" ]
text-generation
2024-06-04T18:30:09Z
--- license: cc-by-4.0 language: - en base_model: - mistralai/Mistral-7B-v0.1 - SanjiWatsuki/Kunoichi-7B - SanjiWatsuki/Silicon-Maid-7B - KatyTheCutie/LemonadeRP-4.5.3 - Sao10K/Fimbulvetr-11B-v2 library_name: transformers tags: - mergekit - merge - mistral - text-generation - roleplay model-index: - name: Smart-Lemon-Cookie-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 69.62 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 86.55 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.35 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 61.59 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 79.79 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 58.45 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard --- **Exllamav2** quant (**exl2** / **4.0 bpw**) made with ExLlamaV2 v0.1.3 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-2_2bpw_exl2)**</center> | <center>3126 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-2_5bpw_exl2)**</center> | <center>3469 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-3_0bpw_exl2)**</center> | <center>4092 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-3_5bpw_exl2)**</center> | <center>4717 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-3_75bpw_exl2)**</center> | <center>5029 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-4_0bpw_exl2)**</center> | <center>5341 
MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-4_25bpw_exl2)**</center> | <center>5653 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-5_0bpw_exl2)**</center> | <center>6589 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-6_0bpw_exl2)**</center> | <center>7862 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-6_5bpw_exl2)**</center> | <center>8467 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-8_0bpw_exl2)**</center> | <center>9713 MB</center> | <center>8</center> | ![cute](https://huggingface.co/FallenMerick/Chunky-Lemon-Cookie-11B/resolve/main/Chunky-Lemon-Cookie.png) # Chunky-Lemon-Cookie-11B This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). GGUF quants: * https://huggingface.co/backyardai/Chunky-Lemon-Cookie-11B-GGUF * https://huggingface.co/mradermacher/Chunky-Lemon-Cookie-11B-GGUF ## Merge Details ### Merge Method This model was merged using the following methods: * passthrough * [task arithmetic](https://arxiv.org/abs/2212.04089) ### Models Merged The following models were included in the merge: * [SanjiWatsuki/Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B) * [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B) * [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3) * [Sao10K/Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2) * [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) ### Configuration The following YAML configurations were used to produce this model: ```yaml slices: - sources: - model: mistralai/Mistral-7B-v0.1 layer_range: [0, 24] - sources: - model: mistralai/Mistral-7B-v0.1 layer_range: [8, 32] merge_method: passthrough dtype: float16 name: Mistral-11B --- slices: - sources: - model: SanjiWatsuki/Kunoichi-7B layer_range: [0, 24] - sources: - model: SanjiWatsuki/Silicon-Maid-7B layer_range: [8, 24] - sources: - model: KatyTheCutie/LemonadeRP-4.5.3 layer_range: [24, 32] merge_method: passthrough dtype: float16 name: Big-Lemon-Cookie-11B --- models: - model: Big-Lemon-Cookie-11B parameters: weight: 0.85 - model: Sao10K/Fimbulvetr-11B-v2 parameters: weight: 0.15 merge_method: task_arithmetic base_model: Mistral-11B dtype: float16 name: Chunky-Lemon-Cookie-11B ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_FallenMerick__Chunky-Lemon-Cookie-11B) | Metric |Value| |---------------------------------|----:| |Avg. |70.23| |AI2 Reasoning Challenge (25-Shot)|69.62| |HellaSwag (10-Shot) |86.55| |MMLU (5-Shot) |65.35| |TruthfulQA (0-shot) |61.59| |Winogrande (5-shot) |79.79| |GSM8k (5-shot) |58.45|
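For reference, a minimal ExLlamaV2 loading sketch (the local directory path is a placeholder pointing at a downloaded quant; class and method names follow the ExLlamaV2 v0.1.x example scripts):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config("Chunky-Lemon-Cookie-11B-4_0bpw_exl2")  # placeholder local dir
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # load weights, splitting across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
print(generator.generate_simple("Once upon a time,", settings, 64))
```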
Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-2_5bpw_exl2
Zoyd
2024-06-04T19:32:40Z
7
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "roleplay", "en", "arxiv:2212.04089", "base_model:KatyTheCutie/LemonadeRP-4.5.3", "base_model:merge:KatyTheCutie/LemonadeRP-4.5.3", "base_model:SanjiWatsuki/Kunoichi-7B", "base_model:merge:SanjiWatsuki/Kunoichi-7B", "base_model:SanjiWatsuki/Silicon-Maid-7B", "base_model:merge:SanjiWatsuki/Silicon-Maid-7B", "base_model:Sao10K/Fimbulvetr-11B-v2", "base_model:merge:Sao10K/Fimbulvetr-11B-v2", "base_model:mistralai/Mistral-7B-v0.1", "base_model:merge:mistralai/Mistral-7B-v0.1", "license:cc-by-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-06-04T17:36:40Z
--- license: cc-by-4.0 language: - en base_model: - mistralai/Mistral-7B-v0.1 - SanjiWatsuki/Kunoichi-7B - SanjiWatsuki/Silicon-Maid-7B - KatyTheCutie/LemonadeRP-4.5.3 - Sao10K/Fimbulvetr-11B-v2 library_name: transformers tags: - mergekit - merge - mistral - text-generation - roleplay model-index: - name: Smart-Lemon-Cookie-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 69.62 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 86.55 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.35 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 61.59 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 79.79 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 58.45 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard --- **Exllamav2** quant (**exl2** / **2.5 bpw**) made with ExLlamaV2 v0.1.3 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-2_2bpw_exl2)**</center> | <center>3126 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-2_5bpw_exl2)**</center> | <center>3469 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-3_0bpw_exl2)**</center> | <center>4092 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-3_5bpw_exl2)**</center> | <center>4717 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-3_75bpw_exl2)**</center> | <center>5029 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-4_0bpw_exl2)**</center> | <center>5341 
MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-4_25bpw_exl2)**</center> | <center>5653 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-5_0bpw_exl2)**</center> | <center>6589 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-6_0bpw_exl2)**</center> | <center>7862 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-6_5bpw_exl2)**</center> | <center>8467 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-8_0bpw_exl2)**</center> | <center>9713 MB</center> | <center>8</center> | ![cute](https://huggingface.co/FallenMerick/Chunky-Lemon-Cookie-11B/resolve/main/Chunky-Lemon-Cookie.png) # Chunky-Lemon-Cookie-11B This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). GGUF quants: * https://huggingface.co/backyardai/Chunky-Lemon-Cookie-11B-GGUF * https://huggingface.co/mradermacher/Chunky-Lemon-Cookie-11B-GGUF ## Merge Details ### Merge Method This model was merged using the following methods: * passthrough * [task arithmetic](https://arxiv.org/abs/2212.04089) ### Models Merged The following models were included in the merge: * [SanjiWatsuki/Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B) * [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B) * [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3) * [Sao10K/Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2) * [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) ### Configuration The following YAML configurations were used to produce this model: ```yaml slices: - sources: - model: mistralai/Mistral-7B-v0.1 layer_range: [0, 24] - sources: - model: mistralai/Mistral-7B-v0.1 layer_range: [8, 32] merge_method: passthrough dtype: float16 name: Mistral-11B --- slices: - sources: - model: SanjiWatsuki/Kunoichi-7B layer_range: [0, 24] - sources: - model: SanjiWatsuki/Silicon-Maid-7B layer_range: [8, 24] - sources: - model: KatyTheCutie/LemonadeRP-4.5.3 layer_range: [24, 32] merge_method: passthrough dtype: float16 name: Big-Lemon-Cookie-11B --- models: - model: Big-Lemon-Cookie-11B parameters: weight: 0.85 - model: Sao10K/Fimbulvetr-11B-v2 parameters: weight: 0.15 merge_method: task_arithmetic base_model: Mistral-11B dtype: float16 name: Chunky-Lemon-Cookie-11B ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_FallenMerick__Chunky-Lemon-Cookie-11B) | Metric |Value| |---------------------------------|----:| |Avg. |70.23| |AI2 Reasoning Challenge (25-Shot)|69.62| |HellaSwag (10-Shot) |86.55| |MMLU (5-Shot) |65.35| |TruthfulQA (0-shot) |61.59| |Winogrande (5-shot) |79.79| |GSM8k (5-shot) |58.45|
Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-8_0bpw_exl2
Zoyd
2024-06-04T19:32:13Z
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "roleplay", "en", "arxiv:2212.04089", "base_model:KatyTheCutie/LemonadeRP-4.5.3", "base_model:merge:KatyTheCutie/LemonadeRP-4.5.3", "base_model:SanjiWatsuki/Kunoichi-7B", "base_model:merge:SanjiWatsuki/Kunoichi-7B", "base_model:SanjiWatsuki/Silicon-Maid-7B", "base_model:merge:SanjiWatsuki/Silicon-Maid-7B", "base_model:Sao10K/Fimbulvetr-11B-v2", "base_model:merge:Sao10K/Fimbulvetr-11B-v2", "base_model:mistralai/Mistral-7B-v0.1", "base_model:merge:mistralai/Mistral-7B-v0.1", "license:cc-by-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "exl2", "region:us" ]
text-generation
2024-06-04T19:25:27Z
--- license: cc-by-4.0 language: - en base_model: - mistralai/Mistral-7B-v0.1 - SanjiWatsuki/Kunoichi-7B - SanjiWatsuki/Silicon-Maid-7B - KatyTheCutie/LemonadeRP-4.5.3 - Sao10K/Fimbulvetr-11B-v2 library_name: transformers tags: - mergekit - merge - mistral - text-generation - roleplay model-index: - name: Smart-Lemon-Cookie-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 69.62 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 86.55 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.35 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 61.59 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 79.79 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 58.45 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard --- **Exllamav2** quant (**exl2** / **8.0 bpw**) made with ExLlamaV2 v0.1.3 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-2_2bpw_exl2)**</center> | <center>3126 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-2_5bpw_exl2)**</center> | <center>3469 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-3_0bpw_exl2)**</center> | <center>4092 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-3_5bpw_exl2)**</center> | <center>4717 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-3_75bpw_exl2)**</center> | <center>5029 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-4_0bpw_exl2)**</center> | <center>5341 
MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-4_25bpw_exl2)**</center> | <center>5653 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-5_0bpw_exl2)**</center> | <center>6589 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-6_0bpw_exl2)**</center> | <center>7862 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-6_5bpw_exl2)**</center> | <center>8467 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-8_0bpw_exl2)**</center> | <center>9713 MB</center> | <center>8</center> | ![cute](https://huggingface.co/FallenMerick/Chunky-Lemon-Cookie-11B/resolve/main/Chunky-Lemon-Cookie.png) # Chunky-Lemon-Cookie-11B This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). GGUF quants: * https://huggingface.co/backyardai/Chunky-Lemon-Cookie-11B-GGUF * https://huggingface.co/mradermacher/Chunky-Lemon-Cookie-11B-GGUF ## Merge Details ### Merge Method This model was merged using the following methods: * passthrough * [task arithmetic](https://arxiv.org/abs/2212.04089) ### Models Merged The following models were included in the merge: * [SanjiWatsuki/Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B) * [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B) * [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3) * [Sao10K/Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2) * [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) ### Configuration The following YAML configurations were used to produce this model: ```yaml slices: - sources: - model: mistralai/Mistral-7B-v0.1 layer_range: [0, 24] - sources: - model: mistralai/Mistral-7B-v0.1 layer_range: [8, 32] merge_method: passthrough dtype: float16 name: Mistral-11B --- slices: - sources: - model: SanjiWatsuki/Kunoichi-7B layer_range: [0, 24] - sources: - model: SanjiWatsuki/Silicon-Maid-7B layer_range: [8, 24] - sources: - model: KatyTheCutie/LemonadeRP-4.5.3 layer_range: [24, 32] merge_method: passthrough dtype: float16 name: Big-Lemon-Cookie-11B --- models: - model: Big-Lemon-Cookie-11B parameters: weight: 0.85 - model: Sao10K/Fimbulvetr-11B-v2 parameters: weight: 0.15 merge_method: task_arithmetic base_model: Mistral-11B dtype: float16 name: Chunky-Lemon-Cookie-11B ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_FallenMerick__Chunky-Lemon-Cookie-11B) | Metric |Value| |---------------------------------|----:| |Avg. |70.23| |AI2 Reasoning Challenge (25-Shot)|69.62| |HellaSwag (10-Shot) |86.55| |MMLU (5-Shot) |65.35| |TruthfulQA (0-shot) |61.59| |Winogrande (5-shot) |79.79| |GSM8k (5-shot) |58.45|
Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-3_0bpw_exl2
Zoyd
2024-06-04T19:31:34Z
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "roleplay", "en", "arxiv:2212.04089", "base_model:KatyTheCutie/LemonadeRP-4.5.3", "base_model:merge:KatyTheCutie/LemonadeRP-4.5.3", "base_model:SanjiWatsuki/Kunoichi-7B", "base_model:merge:SanjiWatsuki/Kunoichi-7B", "base_model:SanjiWatsuki/Silicon-Maid-7B", "base_model:merge:SanjiWatsuki/Silicon-Maid-7B", "base_model:Sao10K/Fimbulvetr-11B-v2", "base_model:merge:Sao10K/Fimbulvetr-11B-v2", "base_model:mistralai/Mistral-7B-v0.1", "base_model:merge:mistralai/Mistral-7B-v0.1", "license:cc-by-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "3-bit", "exl2", "region:us" ]
text-generation
2024-06-04T17:46:22Z
--- license: cc-by-4.0 language: - en base_model: - mistralai/Mistral-7B-v0.1 - SanjiWatsuki/Kunoichi-7B - SanjiWatsuki/Silicon-Maid-7B - KatyTheCutie/LemonadeRP-4.5.3 - Sao10K/Fimbulvetr-11B-v2 library_name: transformers tags: - mergekit - merge - mistral - text-generation - roleplay model-index: - name: Smart-Lemon-Cookie-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 69.62 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 86.55 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.35 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 61.59 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 79.79 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 58.45 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B name: Open LLM Leaderboard --- **Exllamav2** quant (**exl2** / **3.0 bpw**) made with ExLlamaV2 v0.1.3 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-2_2bpw_exl2)**</center> | <center>3126 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-2_5bpw_exl2)**</center> | <center>3469 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-3_0bpw_exl2)**</center> | <center>4092 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-3_5bpw_exl2)**</center> | <center>4717 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-3_75bpw_exl2)**</center> | <center>5029 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-4_0bpw_exl2)**</center> | <center>5341 
MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-4_25bpw_exl2)**</center> | <center>5653 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-5_0bpw_exl2)**</center> | <center>6589 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-6_0bpw_exl2)**</center> | <center>7862 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-6_5bpw_exl2)**</center> | <center>8467 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-8_0bpw_exl2)**</center> | <center>9713 MB</center> | <center>8</center> | ![cute](https://huggingface.co/FallenMerick/Chunky-Lemon-Cookie-11B/resolve/main/Chunky-Lemon-Cookie.png) # Chunky-Lemon-Cookie-11B This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). GGUF quants: * https://huggingface.co/backyardai/Chunky-Lemon-Cookie-11B-GGUF * https://huggingface.co/mradermacher/Chunky-Lemon-Cookie-11B-GGUF ## Merge Details ### Merge Method This model was merged using the following methods: * passthrough * [task arithmetic](https://arxiv.org/abs/2212.04089) ### Models Merged The following models were included in the merge: * [SanjiWatsuki/Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B) * [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B) * [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3) * [Sao10K/Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2) * [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) ### Configuration The following YAML configurations were used to produce this model: ```yaml slices: - sources: - model: mistralai/Mistral-7B-v0.1 layer_range: [0, 24] - sources: - model: mistralai/Mistral-7B-v0.1 layer_range: [8, 32] merge_method: passthrough dtype: float16 name: Mistral-11B --- slices: - sources: - model: SanjiWatsuki/Kunoichi-7B layer_range: [0, 24] - sources: - model: SanjiWatsuki/Silicon-Maid-7B layer_range: [8, 24] - sources: - model: KatyTheCutie/LemonadeRP-4.5.3 layer_range: [24, 32] merge_method: passthrough dtype: float16 name: Big-Lemon-Cookie-11B --- models: - model: Big-Lemon-Cookie-11B parameters: weight: 0.85 - model: Sao10K/Fimbulvetr-11B-v2 parameters: weight: 0.15 merge_method: task_arithmetic base_model: Mistral-11B dtype: float16 name: Chunky-Lemon-Cookie-11B ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_FallenMerick__Chunky-Lemon-Cookie-11B) | Metric |Value| |---------------------------------|----:| |Avg. |70.23| |AI2 Reasoning Challenge (25-Shot)|69.62| |HellaSwag (10-Shot) |86.55| |MMLU (5-Shot) |65.35| |TruthfulQA (0-shot) |61.59| |Winogrande (5-shot) |79.79| |GSM8k (5-shot) |58.45|
andrejikica/paligemma_vqav2
andrejikica
2024-06-04T19:19:03Z
3
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:google/paligemma-3b-pt-224", "base_model:adapter:google/paligemma-3b-pt-224", "license:gemma", "region:us" ]
null
2024-06-02T23:16:41Z
--- license: gemma library_name: peft tags: - generated_from_trainer base_model: google/paligemma-3b-pt-224 model-index: - name: paligemma_vqav2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # paligemma_vqav2 This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.11.1 - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
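A minimal loading sketch for this PEFT adapter (the base model is gated on the Hub; the image path and question are placeholders):

```python
from PIL import Image
from peft import PeftModel
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

# Load the frozen base model, then attach this repo's fine-tuned adapter.
base = PaliGemmaForConditionalGeneration.from_pretrained("google/paligemma-3b-pt-224")
model = PeftModel.from_pretrained(base, "andrejikica/paligemma_vqav2")
processor = AutoProcessor.from_pretrained("google/paligemma-3b-pt-224")

# "image.jpg" is a placeholder input; "answer en ..." is the PaliGemma VQA prompt style.
inputs = processor(images=Image.open("image.jpg"), text="answer en What is shown?", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```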
LucasSantiago257/gemma-2b-4bits-gptq
LucasSantiago257
2024-06-04T19:16:02Z
78
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-06-04T19:09:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/Stheno-1.1-L2-13B-GGUF
mradermacher
2024-06-04T19:10:38Z
2
1
transformers
[ "transformers", "gguf", "en", "base_model:Sao10K/Stheno-1.1-L2-13B", "base_model:quantized:Sao10K/Stheno-1.1-L2-13B", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-06-04T13:08:04Z
--- base_model: Sao10K/Stheno-1.1-L2-13B language: - en library_name: transformers license: llama2 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Sao10K/Stheno-1.1-L2-13B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Stheno-1.1-L2-13B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Stheno-1.1-L2-13B-GGUF/resolve/main/Stheno-1.1-L2-13B.Q2_K.gguf) | Q2_K | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.1-L2-13B-GGUF/resolve/main/Stheno-1.1-L2-13B.IQ3_XS.gguf) | IQ3_XS | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.1-L2-13B-GGUF/resolve/main/Stheno-1.1-L2-13B.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.1-L2-13B-GGUF/resolve/main/Stheno-1.1-L2-13B.Q3_K_S.gguf) | Q3_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.1-L2-13B-GGUF/resolve/main/Stheno-1.1-L2-13B.IQ3_M.gguf) | IQ3_M | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.1-L2-13B-GGUF/resolve/main/Stheno-1.1-L2-13B.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.1-L2-13B-GGUF/resolve/main/Stheno-1.1-L2-13B.Q3_K_L.gguf) | Q3_K_L | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.1-L2-13B-GGUF/resolve/main/Stheno-1.1-L2-13B.IQ4_XS.gguf) | IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.1-L2-13B-GGUF/resolve/main/Stheno-1.1-L2-13B.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.1-L2-13B-GGUF/resolve/main/Stheno-1.1-L2-13B.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.1-L2-13B-GGUF/resolve/main/Stheno-1.1-L2-13B.Q5_K_S.gguf) | Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.1-L2-13B-GGUF/resolve/main/Stheno-1.1-L2-13B.Q5_K_M.gguf) | Q5_K_M | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.1-L2-13B-GGUF/resolve/main/Stheno-1.1-L2-13B.Q6_K.gguf) | Q6_K | 10.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.1-L2-13B-GGUF/resolve/main/Stheno-1.1-L2-13B.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
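As a concrete starting point (a sketch, not part of the original card: it assumes the `llama-cpp-python` package with `huggingface_hub` installed for the download, and the Q4_K_M file is just the "fast, recommended" row from the table above), a quant can be pulled from this repo and queried directly:

```python
from llama_cpp import Llama

# download the Q4_K_M quant from this repo and load it (CPU inference by default)
llm = Llama.from_pretrained(
    repo_id="mradermacher/Stheno-1.1-L2-13B-GGUF",
    filename="Stheno-1.1-L2-13B.Q4_K_M.gguf",
    n_ctx=4096,
)

# plain text completion; the prompt is an arbitrary placeholder
out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])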
rong4ivy/mentalhealth_LM
rong4ivy
2024-06-04T19:10:26Z
104
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-04T18:46:10Z
--- license: apache-2.0 --- This large language model is primarily designed to assess the severity of **mental health** issues by analyzing text or speech inputs from users (speakers, writers, patients, etc.). The training dataset consists of diagnoses made by psychiatrists based on the text or speech of patients experiencing various degrees of mental health problems. The model serves multiple purposes: for instance, it can assist doctors in diagnosing mental health conditions in patients, facilitate self-diagnosis for individuals seeking to understand their own mental health, or analyze the psychological characteristics of characters in fictional narratives. The performance of this model on the test dataset (30,477 rows) is as follows: 'accuracy': 0.78, 'f1': 0.77. This model is one part of my project on fine-tuning open-source LLMs to predict various human cognitive abilities (e.g., personality, attitude, mental status, etc.). The following test examples can be used in the API bar: 1) "I was okay just a moment ago. I will learn how to be okay again." 2) "There were days when she was unhappy; she did not know why, when it did not seem worthwhile to be glad or sorry, to be alive or dead; when life appeared to her like a grotesque pandemonium and humanity like worms struggling blindly toward inevitable annihilation." 3) "I hope to one day see a sea of people all wearing silver ribbons as a sign that they understand the secret battle and as a celebration of the victories made each day as we individually pull ourselves up out of our foxholes to see our scars heal and to remember what the sun looks like." The **output** assigns a label with values from **0 to 5** to classify the **severity** of mental health issues. A label of **0** signifies **minimal severity**, suggesting few or no symptoms of mental health problems. Conversely, a label of **5** denotes **maximal severity**, reflecting serious mental health conditions that may require immediate and comprehensive intervention. **A larger value means that the situation is likely to be more serious**. Take care! Please run the following code to test a new text:
```
import torch
from transformers import BertTokenizer, BertForSequenceClassification, AutoConfig

# Define the model path
model_path = "Kevintu/mentalhealth_LM"

# Load configuration, tokenizer, and model
config = AutoConfig.from_pretrained(model_path, num_labels=6, problem_type="single_label_classification")
tokenizer = BertTokenizer.from_pretrained(model_path, use_fast=True)
model = BertForSequenceClassification.from_pretrained(model_path, config=config, ignore_mismatched_sizes=True)

def predict_text(text, model, tokenizer):
    # Encode the text using the tokenizer
    inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=512)
    # Forward pass, get logits
    with torch.no_grad():
        outputs = model(**inputs)
    # Extract logits
    logits = outputs.logits
    # Convert logits to probabilities
    probabilities = torch.softmax(logits, dim=-1)
    max_probability, predicted_class_index = torch.max(probabilities, dim=-1)
    return predicted_class_index.item(), max_probability.item(), probabilities.numpy()

# Example usage
text = "I was okay just a moment ago. I will learn how to be okay again."
predicted_class, max_prob, probs = predict_text(text, model, tokenizer)
print(f"Predicted class: {predicted_class}, Probability: {max_prob:.4f}")

# Output: "Predicted class: 2, Probability: 0.5194"
```
fabian-cmu/butterflies-diffusion
fabian-cmu
2024-06-04T19:00:30Z
44
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2024-06-04T18:59:47Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage
```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('fabian-cmu/butterflies-diffusion')
image = pipeline().images[0]
image
```
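A small extension of the snippet above (a sketch; the batch size and filenames are arbitrary choices, not part of the original card): `DDPMPipeline` accepts a `batch_size` argument, so several butterflies can be sampled in one call and written to disk.

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('fabian-cmu/butterflies-diffusion')

# sample four images in a single denoising run and save each one
images = pipeline(batch_size=4).images
for i, image in enumerate(images):
    image.save(f"butterfly_{i}.png")
```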
tsavage68/UTI_M2_1000steps_1e8rate_03beta_CSFTDPO
tsavage68
2024-06-04T18:58:48Z
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "base_model:tsavage68/UTI_M2_1000steps_1e5rate_SFT", "base_model:finetune:tsavage68/UTI_M2_1000steps_1e5rate_SFT", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-04T18:54:51Z
--- license: apache-2.0 base_model: tsavage68/UTI_M2_1000steps_1e5rate_SFT tags: - trl - dpo - generated_from_trainer model-index: - name: UTI_M2_1000steps_1e8rate_03beta_CSFTDPO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # UTI_M2_1000steps_1e8rate_03beta_CSFTDPO This model is a fine-tuned version of [tsavage68/UTI_M2_1000steps_1e5rate_SFT](https://huggingface.co/tsavage68/UTI_M2_1000steps_1e5rate_SFT) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6733 - Rewards/chosen: 0.0076 - Rewards/rejected: -0.0329 - Rewards/accuracies: 0.8100 - Rewards/margins: 0.0405 - Logps/rejected: -44.2758 - Logps/chosen: -20.2691 - Logits/rejected: -3.8168 - Logits/chosen: -3.7448 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-08 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6936 | 0.3333 | 25 | 0.6942 | 0.0013 | 0.0032 | 0.4000 | -0.0020 | -44.1553 | -20.2903 | -3.8168 | -3.7448 | | 0.6921 | 0.6667 | 50 | 0.6936 | 0.0023 | 0.0030 | 0.5 | -0.0007 | -44.1562 | -20.2868 | -3.8168 | -3.7448 | | 0.6943 | 1.0 | 75 | 0.6955 | 0.0005 | 0.0049 | 0.4200 | -0.0044 | -44.1498 | -20.2931 | -3.8169 | -3.7449 | | 0.6933 | 1.3333 | 100 | 0.6933 | 0.0014 | 0.0014 | 0.4200 | -0.0001 | -44.1614 | -20.2900 | -3.8168 | -3.7448 | | 0.6886 | 1.6667 | 125 | 0.6920 | 0.0002 | -0.0022 | 0.4800 | 0.0024 | -44.1735 | -20.2938 | -3.8168 | -3.7448 | | 0.6896 | 2.0 | 150 | 0.6887 | 0.0040 | -0.0053 | 0.5700 | 0.0092 | -44.1837 | -20.2814 | -3.8168 | -3.7448 | | 0.6879 | 2.3333 | 175 | 0.6864 | 0.0033 | -0.0104 | 0.6200 | 0.0138 | -44.2009 | -20.2835 | -3.8168 | -3.7447 | | 0.683 | 2.6667 | 200 | 0.6824 | 0.0048 | -0.0171 | 0.6700 | 0.0218 | -44.2230 | -20.2786 | -3.8168 | -3.7447 | | 0.6815 | 3.0 | 225 | 0.6825 | 0.0042 | -0.0174 | 0.7100 | 0.0217 | -44.2242 | -20.2805 | -3.8169 | -3.7449 | | 0.6791 | 3.3333 | 250 | 0.6800 | 0.0039 | -0.0229 | 0.7300 | 0.0268 | -44.2426 | -20.2815 | -3.8168 | -3.7448 | | 0.6772 | 3.6667 | 275 | 0.6798 | 0.0062 | -0.0210 | 0.6900 | 0.0273 | -44.2362 | -20.2738 | -3.8167 | -3.7447 | | 0.6753 | 4.0 | 300 | 0.6784 | 0.0059 | -0.0242 | 0.7400 | 0.0301 | -44.2468 | -20.2747 | -3.8169 | -3.7448 | | 0.6821 | 4.3333 | 325 | 0.6767 | 0.0069 | -0.0268 | 0.7700 | 0.0336 | -44.2554 | -20.2717 | -3.8167 | -3.7447 | | 0.6744 | 4.6667 | 350 | 0.6770 | 0.0060 | -0.0270 | 0.7100 | 0.0330 | -44.2561 | -20.2747 | -3.8169 | -3.7448 | | 0.6741 | 5.0 | 375 | 0.6750 | 0.0088 | -0.0281 | 0.7300 | 0.0370 | -44.2598 | -20.2651 | -3.8168 | -3.7448 | | 0.6738 | 5.3333 | 400 | 0.6753 | 0.0084 | -0.0281 | 0.7700 | 0.0365 | -44.2599 | -20.2667 | -3.8168 | -3.7447 | | 0.6731 | 5.6667 | 425 | 0.6746 | 0.0074 | -0.0306 | 0.75 | 0.0379 | -44.2681 | -20.2701 | -3.8169 | -3.7448 | | 0.6756 | 6.0 | 450 | 0.6755 | 0.0071 | -0.0289 | 0.7700 | 0.0360 | -44.2625 | -20.2710 | -3.8168 | -3.7448 | | 0.6703 | 6.3333 | 475 | 0.6750 | 0.0093 | -0.0279 | 0.75 | 0.0371 | -44.2590 | -20.2637 | -3.8168 | -3.7448 | | 0.6796 | 6.6667 | 500 | 0.6744 | 0.0074 | -0.0308 | 0.7800 | 0.0383 | -44.2689 | -20.2698 | -3.8168 | -3.7448 | | 0.6676 | 7.0 | 525 | 0.6735 | 0.0091 | -0.0311 | 0.7800 | 0.0402 | -44.2699 | -20.2642 | -3.8168 | -3.7447 | | 0.6744 | 7.3333 | 550 | 0.6738 | 0.0067 | -0.0330 | 0.7600 | 0.0397 | -44.2760 | -20.2721 | -3.8168 | -3.7448 | | 0.6725 | 7.6667 | 575 | 0.6729 | 0.0083 | -0.0330 | 0.8000 | 0.0413 | -44.2761 | -20.2668 | -3.8168 | -3.7447 | | 0.6739 | 8.0 | 600 | 0.6732 | 0.0080 | -0.0327 | 0.8100 | 0.0407 | -44.2751 | -20.2679 | -3.8168 | -3.7447 | | 0.6675 | 8.3333 | 625 | 0.6748 | 0.0084 | -0.0291 | 0.7800 | 0.0375 | -44.2632 | -20.2665 | -3.8169 | -3.7448 | | 0.6706 | 8.6667 | 650 | 0.6732 | 0.0087 | -0.0320 | 0.8100 | 0.0407 | -44.2728 | -20.2656 | -3.8168 | -3.7447 | | 0.6718 | 9.0 | 675 | 0.6741 | 0.0086 | -0.0303 | 0.7800 | 0.0389 | -44.2671 | -20.2658 | -3.8168 | -3.7448 | | 0.6715 | 9.3333 | 700 | 0.6743 | 0.0085 | -0.0300 | 0.8000 | 0.0385 | -44.2662 | -20.2662 | -3.8168 | -3.7447 | | 0.6723 | 9.6667 | 725 | 0.6727 | 0.0066 | -0.0352 | 0.7700 | 0.0417 | -44.2834 | -20.2727 | -3.8168 | -3.7448 | | 0.6715 | 10.0 | 750 | 0.6729 | 0.0067 | -0.0348 | 0.7700 | 0.0415 | -44.2822 | -20.2723 | -3.8168 | -3.7448 | | 0.669 | 10.3333 | 775 | 0.6743 | 0.0074 | -0.0310 | 0.7600 | 0.0384 | -44.2694 | -20.2698 | -3.8168 | -3.7447 | | 0.6738 | 10.6667 | 800 | 0.6729 | 0.0079 | -0.0336 | 0.8000 | 0.0415 | -44.2780 | -20.2682 | -3.8168 | -3.7448 | | 0.6738 | 11.0 | 825 | 0.6735 | 0.0088 | -0.0312 | 0.8100 | 0.0400 | -44.2703 | -20.2653 | -3.8169 | -3.7448 | | 0.6682 | 11.3333 | 850 | 0.6736 | 0.0079 | -0.0321 | 0.8100 | 0.0400 | -44.2730 | -20.2681 | -3.8168 | -3.7448 | | 0.6787 | 11.6667 | 875 | 0.6733 | 0.0076 | -0.0329 | 0.8100 | 0.0405 | -44.2758 | -20.2691 | -3.8168 | -3.7448 | | 0.6771 | 12.0 | 900 | 0.6733 | 0.0076 | -0.0329 | 0.8100 | 0.0405 | -44.2758 | -20.2691 | -3.8168 | -3.7448 | | 0.6705 | 12.3333 | 925 | 0.6733 | 0.0076 | -0.0329 | 0.8100 | 0.0405 | -44.2758 | -20.2691 | -3.8168 | -3.7448 | | 0.6727 | 12.6667 | 950 | 0.6733 | 0.0076 | -0.0329 | 0.8100 | 0.0405 | -44.2758 | -20.2691 | -3.8168 | -3.7448 | | 0.6748 | 13.0 | 975 | 0.6733 | 0.0076 | -0.0329 | 0.8100 | 0.0405 | -44.2758 | -20.2691 | -3.8168 | -3.7448 | | 0.6809 | 13.3333 | 1000 | 0.6733 | 0.0076 | -0.0329 | 0.8100 | 0.0405 | -44.2758 | -20.2691 | -3.8168 | -3.7448 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.0.0+cu117 - Datasets 2.19.2 - Tokenizers 0.19.1
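The card gives no usage snippet; the following is a minimal inference sketch (assumptions on my part: the repo ships a Mistral-style chat template, as the `conversational` tag suggests, and the question is an invented placeholder):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tsavage68/UTI_M2_1000steps_1e8rate_03beta_CSFTDPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# placeholder question, formatted through the tokenizer's chat template
messages = [{"role": "user", "content": "What are common symptoms of a urinary tract infection?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
# decode only the newly generated tokens
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```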
r1208/Llama-3-Open-Ko-8B-Instruct-preview_4bit_64r
r1208
2024-06-04T18:55:22Z
77
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-06-04T18:24:09Z
--- library_name: transformers tags: - trl - sft --- -- 4 bit # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
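Beyond the "4 bit" note, the card is the blank template; a loading sketch under the assumption (from the repo tags) that a bitsandbytes 4-bit quantization config is stored with the checkpoint:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "r1208/Llama-3-Open-Ko-8B-Instruct-preview_4bit_64r"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# a quantization_config saved in the repo is picked up automatically at load time
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```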
r1208/llama_3_ml_beam_all_11_1
r1208
2024-06-04T18:55:06Z
77
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-06-04T17:57:54Z
--- library_name: transformers tags: - trl - sft --- --4bit # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
JeswinMS4/scam-alert-distil-roberta
JeswinMS4
2024-06-04T18:51:40Z
24
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:distilbert/distilroberta-base", "base_model:finetune:distilbert/distilroberta-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-04T18:16:24Z
--- license: apache-2.0 base_model: distilbert/distilroberta-base tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: scam-alert-distil-roberta results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # scam-alert-distil-roberta This model is a fine-tuned version of [distilbert/distilroberta-base](https://huggingface.co/distilbert/distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1213 - Accuracy: 0.9861 - F1: 0.9860 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:------:|:----:|:---------------:|:--------:|:------:| | No log | 0.1577 | 100 | 0.0852 | 0.9861 | 0.9860 | | No log | 0.3155 | 200 | 0.0690 | 0.9861 | 0.9858 | | No log | 0.4732 | 300 | 0.0965 | 0.9841 | 0.9842 | | No log | 0.6309 | 400 | 0.1015 | 0.9821 | 0.9818 | | No log | 0.7886 | 500 | 0.0629 | 0.9861 | 0.9859 | | No log | 0.9464 | 600 | 0.0788 | 0.9841 | 0.9839 | | No log | 1.1041 | 700 | 0.0500 | 0.9880 | 0.9880 | | No log | 1.2618 | 800 | 0.0778 | 0.9880 | 0.9879 | | No log | 1.4196 | 900 | 0.0657 | 0.9880 | 0.9879 | | No log | 1.5773 | 1000 | 0.1129 | 0.9841 | 0.9837 | | No log | 1.7350 | 1100 | 0.1038 | 0.9880 | 0.9879 | | No log | 1.8927 | 1200 | 0.0861 | 0.9880 | 0.9879 | | No log | 2.0505 | 1300 | 0.1047 | 0.9841 | 0.9841 | | No log | 2.2082 | 1400 | 0.0858 | 0.9900 | 0.9899 | | No log | 2.3659 | 1500 | 0.0936 | 0.9880 | 0.9879 | | No log | 2.5237 | 1600 | 0.0936 | 0.9861 | 0.9859 | | No log | 2.6814 | 1700 | 0.0909 | 0.9861 | 0.9859 | | No log | 2.8391 | 1800 | 0.1143 | 0.9841 | 0.9842 | | No log | 2.9968 | 1900 | 0.0902 | 0.9880 | 0.9881 | | No log | 3.1546 | 2000 | 0.0979 | 0.9841 | 0.9840 | | No log | 3.3123 | 2100 | 0.0977 | 0.9861 | 0.9860 | | No log | 3.4700 | 2200 | 0.0987 | 0.9861 | 0.9860 | | No log | 3.6278 | 2300 | 0.1016 | 0.9861 | 0.9860 | | No log | 3.7855 | 2400 | 0.1170 | 0.9861 | 0.9858 | | No log | 3.9432 | 2500 | 0.1106 | 0.9861 | 0.9859 | | 0.0267 | 4.1009 | 2600 | 0.1202 | 0.9861 | 0.9861 | | 0.0267 | 4.2587 | 2700 | 0.1207 | 0.9841 | 0.9841 | | 0.0267 | 4.4164 | 2800 | 0.1208 | 0.9841 | 0.9841 | | 0.0267 | 4.5741 | 2900 | 0.1215 | 0.9841 | 0.9841 | | 0.0267 | 4.7319 | 3000 | 0.1216 | 0.9841 | 0.9841 | | 0.0267 | 4.8896 | 3100 | 0.1215 | 0.9841 | 0.9841 | | 0.0267 | 5.0473 | 3200 | 0.1350 | 0.9861 | 0.9861 | | 0.0267 | 5.2050 | 3300 | 0.1165 | 0.9880 | 0.9880 | | 0.0267 | 5.3628 | 3400 | 0.1166 | 0.9880 | 0.9880 | | 0.0267 | 5.5205 | 3500 | 0.1167 | 0.9880 | 0.9880 | | 0.0267 | 5.6782 | 3600 | 0.1168 | 0.9880 | 0.9880 | | 0.0267 | 5.8360 | 3700 | 0.1212 | 0.9861 | 0.9860 | | 0.0267 | 5.9937 | 3800 | 0.1213 | 0.9861 | 0.9860 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
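The card lists metrics but no inference code; a minimal sketch (the example message is invented for illustration, and the output label names depend on the repo's config):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="JeswinMS4/scam-alert-distil-roberta")
# invented example message; returns a list of {label, score} dicts
print(classifier("Congratulations! You've won a free prize. Click here to claim it now."))
```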
mradermacher/Stheno-1.2-L2-13B-GGUF
mradermacher
2024-06-04T18:51:32Z
1
1
transformers
[ "transformers", "gguf", "en", "base_model:Sao10K/Stheno-1.2-L2-13B", "base_model:quantized:Sao10K/Stheno-1.2-L2-13B", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-06-04T13:38:23Z
--- base_model: Sao10K/Stheno-1.2-L2-13B language: - en library_name: transformers license: llama2 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Sao10K/Stheno-1.2-L2-13B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-GGUF/resolve/main/Stheno-1.2-L2-13B.Q2_K.gguf) | Q2_K | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-GGUF/resolve/main/Stheno-1.2-L2-13B.IQ3_XS.gguf) | IQ3_XS | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-GGUF/resolve/main/Stheno-1.2-L2-13B.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-GGUF/resolve/main/Stheno-1.2-L2-13B.Q3_K_S.gguf) | Q3_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-GGUF/resolve/main/Stheno-1.2-L2-13B.IQ3_M.gguf) | IQ3_M | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-GGUF/resolve/main/Stheno-1.2-L2-13B.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-GGUF/resolve/main/Stheno-1.2-L2-13B.Q3_K_L.gguf) | Q3_K_L | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-GGUF/resolve/main/Stheno-1.2-L2-13B.IQ4_XS.gguf) | IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-GGUF/resolve/main/Stheno-1.2-L2-13B.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-GGUF/resolve/main/Stheno-1.2-L2-13B.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-GGUF/resolve/main/Stheno-1.2-L2-13B.Q5_K_S.gguf) | Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-GGUF/resolve/main/Stheno-1.2-L2-13B.Q5_K_M.gguf) | Q5_K_M | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-GGUF/resolve/main/Stheno-1.2-L2-13B.Q6_K.gguf) | Q6_K | 10.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-GGUF/resolve/main/Stheno-1.2-L2-13B.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Zephyrus-L1-33B-GGUF
mradermacher
2024-06-04T18:47:37Z
3
0
transformers
[ "transformers", "gguf", "en", "base_model:Sao10K/Zephyrus-L1-33B", "base_model:quantized:Sao10K/Zephyrus-L1-33B", "license:other", "endpoints_compatible", "region:us" ]
null
2024-06-04T14:19:28Z
--- base_model: Sao10K/Zephyrus-L1-33B language: - en library_name: transformers license: other quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Sao10K/Zephyrus-L1-33B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Zephyrus-L1-33B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.Q2_K.gguf) | Q2_K | 12.1 | | | [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.IQ3_XS.gguf) | IQ3_XS | 13.4 | | | [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.IQ3_S.gguf) | IQ3_S | 14.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.Q3_K_S.gguf) | Q3_K_S | 14.2 | | | [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.IQ3_M.gguf) | IQ3_M | 15.0 | | | [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.Q3_K_M.gguf) | Q3_K_M | 15.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.Q3_K_L.gguf) | Q3_K_L | 17.4 | | | [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.IQ4_XS.gguf) | IQ4_XS | 17.6 | | | [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.Q4_K_S.gguf) | Q4_K_S | 18.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.Q4_K_M.gguf) | Q4_K_M | 19.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.Q5_K_S.gguf) | Q5_K_S | 22.5 | | | [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.Q5_K_M.gguf) | Q5_K_M | 23.1 | | | [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.Q6_K.gguf) | Q6_K | 26.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.Q8_0.gguf) | Q8_0 | 34.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
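At 33B, even the Q4_K_S quant above is ~18.6 GB, so partial or full GPU offload usually makes the difference between usable and unusable speeds; a sketch with `llama-cpp-python` (assuming a CUDA-enabled build; the layer count is hardware-dependent):

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Zephyrus-L1-33B-GGUF",
    filename="Zephyrus-L1-33B.Q4_K_S.gguf",
    n_ctx=2048,
    n_gpu_layers=-1,  # offload every layer; lower this value if VRAM runs out
)
print(llm("The quick brown fox", max_tokens=32)["choices"][0]["text"])
```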
tyzhu/lmind_nq_train6000_eval6489_v1_reciteonly_qa__home_aiops_zhuty_lm_indexer_data_tyzhu_lmi
tyzhu
2024-06-04T18:46:21Z
0
0
null
[ "generated_from_trainer", "region:us" ]
null
2024-06-04T18:19:16Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: lmind_nq_train6000_eval6489_v1_reciteonly_qa__home_aiops_zhuty_lm_indexer_data_tyzhu_lmi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lmind_nq_train6000_eval6489_v1_reciteonly_qa__home_aiops_zhuty_lm_indexer_data_tyzhu_lmi This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1255 - Accuracy: 0.7637 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9058 | 1.0 | 187 | 0.6990 | 0.7896 | | 0.65 | 2.0 | 375 | 0.6702 | 0.7947 | | 0.5668 | 3.0 | 562 | 0.6754 | 0.7936 | | 0.4786 | 4.0 | 750 | 0.7029 | 0.7898 | | 0.4035 | 5.0 | 937 | 0.7543 | 0.7836 | | 0.3288 | 6.0 | 1125 | 0.8310 | 0.7775 | | 0.2672 | 7.0 | 1312 | 0.9077 | 0.7724 | | 0.2107 | 8.0 | 1500 | 0.9633 | 0.7690 | | 0.157 | 9.0 | 1687 | 1.0464 | 0.7666 | | 0.1262 | 9.97 | 1870 | 1.1255 | 0.7637 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.14.1
LucasSantiago257/gemma-2b-2bits-gptq
LucasSantiago257
2024-06-04T18:45:03Z
77
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "2-bit", "gptq", "region:us" ]
text-generation
2024-05-27T17:54:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
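The template's "How to Get Started" section is empty; a loading sketch under the assumption that the GPTQ quantization config is stored in the repo (requires `optimum` plus a GPTQ backend such as `auto-gptq`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LucasSantiago257/gemma-2b-2bits-gptq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# transformers reads the repo's quantization_config and dispatches to the GPTQ kernels
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```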
DimkKozhemyako/dummy-model
DimkKozhemyako
2024-06-04T18:35:55Z
129
0
transformers
[ "transformers", "safetensors", "bert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-05-18T14:26:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
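Given the `fill-mask` pipeline tag and BERT architecture, a minimal usage sketch (an assumption: the example sentence and the `[MASK]` token follow the standard BERT convention):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="DimkKozhemyako/dummy-model")
# top predictions for the masked token
print(unmasker("Paris is the [MASK] of France."))
```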
moczard/rl_course_vizdoom_health_gathering_supreme
moczard
2024-06-04T18:34:53Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-06-04T18:34:44Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 11.39 +/- 5.05 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r moczard/rl_course_vizdoom_health_gathering_supreme
```
## Using the model To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details. ## Training with this model To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps it concluded at.
arnavgrg/phi-2-codealpaca-5K-medusa-lora
arnavgrg
2024-06-04T18:31:03Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "region:us" ]
null
2024-06-04T18:30:56Z
--- library_name: peft base_model: microsoft/phi-2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
tringuyen-uit/MRC_ER_mdeberta-v3-base_syl_ViWikiFC
tringuyen-uit
2024-06-04T18:30:04Z
22
0
transformers
[ "transformers", "tensorboard", "safetensors", "deberta-v2", "question-answering", "generated_from_trainer", "base_model:microsoft/mdeberta-v3-base", "base_model:finetune:microsoft/mdeberta-v3-base", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2024-05-28T17:12:33Z
--- license: mit base_model: microsoft/mdeberta-v3-base tags: - generated_from_trainer metrics: - f1 model-index: - name: MRC_ER_mdeberta-v3-base_syl_ViWikiFC results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MRC_ER_mdeberta-v3-base_syl_ViWikiFC This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.5168 - Exact Match: 0.7703 - F1: 0.7925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Exact Match | F1 | |:-------------:|:-----:|:-----:|:---------------:|:-----------:|:------:| | 0.6752 | 1.0 | 4185 | 2.1250 | 0.7421 | 0.7696 | | 0.5728 | 2.0 | 8370 | 1.9436 | 0.7660 | 0.7865 | | 0.4473 | 3.0 | 12555 | 2.1698 | 0.7569 | 0.7796 | | 0.3166 | 4.0 | 16740 | 2.3835 | 0.7708 | 0.7945 | | 0.2525 | 5.0 | 20925 | 2.5168 | 0.7703 | 0.7925 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
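Since this checkpoint is an extractive QA model, a minimal inference sketch with the `transformers` pipeline may be useful; it is not from the original card, and the Vietnamese question/context strings below are illustrative placeholders only.

```python
from transformers import pipeline

# Hedged usage sketch for the fine-tuned extractive QA checkpoint.
qa = pipeline(
    "question-answering",
    model="tringuyen-uit/MRC_ER_mdeberta-v3-base_syl_ViWikiFC",
)

# Placeholder Vietnamese example; substitute a real ViWikiFC-style claim/context.
result = qa(
    question="Truyện Kiều do ai sáng tác?",
    context="Truyện Kiều là truyện thơ nổi tiếng do đại thi hào Nguyễn Du sáng tác.",
)
print(result["answer"], round(result["score"], 3))
```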
James520/Grammar_Check
James520
2024-06-04T18:29:25Z
110
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "autotrain", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-04T18:17:45Z
--- tags: - autotrain - text-classification base_model: google-bert/bert-base-uncased widget: - text: "I love AutoTrain" --- # Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.4740733206272125 f1: 0.8560558021559924 precision: 0.7885514018691588 recall: 0.9361997226074896 auc: 0.8029694782091815 accuracy: 0.7823585810162992
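The card reports validation metrics but no usage snippet; a hedged inference sketch with the `transformers` pipeline follows. The default AutoTrain label names (`LABEL_0`/`LABEL_1`) and their grammatical/ungrammatical mapping are assumptions, not documented in the card.

```python
from transformers import pipeline

# Hedged usage sketch for the AutoTrain text classifier.
classifier = pipeline("text-classification", model="James520/Grammar_Check")

# Which label denotes "grammatical" is undocumented; probe a few known
# sentences to establish the mapping before relying on it.
print(classifier("She go to school every day."))
print(classifier("She goes to school every day."))
```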
EstebanKora/Voz-Bot
EstebanKora
2024-06-04T18:16:01Z
146
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-04T18:07:16Z
--- license: apache-2.0 ---
ShenaoZ/SELM-Zephyr-7Bz-iter-1
ShenaoZ
2024-06-04T18:13:59Z
10
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZ/DPO-Zephyr-7B", "base_model:finetune:ShenaoZ/DPO-Zephyr-7B", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-04T17:25:59Z
--- license: mit base_model: ShenaoZ/DPO-Zephyr-7B tags: - alignment-handbook - trl - dpo - generated_from_trainer - trl - dpo - generated_from_trainer datasets: - updated - original model-index: - name: SELM-Zephyr-7Bz-iter-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SELM-Zephyr-7Bz-iter-1 This model is a fine-tuned version of [ShenaoZ/DPO-Zephyr-7B](https://huggingface.co/ShenaoZ/DPO-Zephyr-7B) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
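For completeness, a hedged generation sketch for this DPO-tuned chat model; it assumes a Zephyr-style chat template ships with the tokenizer, which the card does not confirm, and the dtype/device settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged usage sketch; settings are illustrative, not from the card.
model_id = "ShenaoZ/SELM-Zephyr-7Bz-iter-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```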
Firemedic15/dqn-SpaceInvadersNoFrameskip-V4-DE
Firemedic15
2024-06-04T18:12:08Z
3
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-06-04T18:11:37Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 270.50 +/- 21.15 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Firemedic15 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Firemedic15 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Firemedic15 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
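Outside the RL Zoo scripts, the checkpoint can also be loaded directly with SB3; this is a hedged sketch, and the zip filename follows the usual RL Zoo naming convention, which is an assumption about this repo's contents.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Hedged loading sketch; the filename is assumed from RL Zoo conventions.
checkpoint = load_from_hub(
    repo_id="Firemedic15/dqn-SpaceInvadersNoFrameskip-V4-DE",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```

Note that evaluating the loaded policy requires the same preprocessing used in training (AtariWrapper, 4-frame stacking), as listed in the hyperparameters above.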
lhallee/ProteinVec
lhallee
2024-06-04T18:09:49Z
7
1
transformers
[ "transformers", "safetensors", "endpoints_compatible", "region:us" ]
null
2024-04-03T01:54:01Z
--- library_name: transformers tags: [] --- ## THIS PROTEIN VEC ADAPTATION WAS MODIFIED FROM https://github.com/tymor22/protein-vec All credit for the original work goes to Tymor Hamamsy and the following authors of this paper https://www.biorxiv.org/content/10.1101/2023.11.26.568742v1 We have added a Huggingface-compatible wrapper for the model in protvec.py Please consider liking the model page and starring the github repo if you are going to use it :) ``` https://huggingface.co/lhallee/ProteinVec https://github.com/lhallee/ProteinVecHuggingface ``` Clone and install ``` git clone https://github.com/lhallee/ProteinVecHuggingface.git pip install torch pytorch_lightning transformers ``` To use from huggingface ``` from transformers import T5Tokenizer from protvec import ProteinVec, ProteinVecConfig tokenizer = T5Tokenizer.from_pretrained('lhallee/ProteinVec') model = ProteinVec.from_pretrained('lhallee/ProteinVec', config=ProteinVecConfig()) ``` Embed a single sequence with ```embed``` ``` model.to_eval() model = model.cuda() # remove if cpu inference embedding = model.embed('SEQWENCE').detach().cpu() # torch.tensor(1, 512) ``` Use a particular AspectVec by setting the ```inference_mask``` ``` model.aspect_to_keys_dict # dictionary showing the aspects ### The model is set to ALL by default to use full ProteinVec model.inference_mask = model.get_mask('EC') # for Enzyme Commission AspectVec embedding = model.embed(...) ``` Convert the weights to half precision ``` model.to_half() ``` ## The license for the protein vec code ### BSD 3-Clause License Copyright (c) 2023, Tymor Hamamsy Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
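As a hedged follow-up to the card's examples, cosine similarity between two embeddings can be computed directly from the documented `embed()` call; the amino-acid strings below are placeholders.

```python
import torch

# Hedged sketch building on the card's embed() API; sequences are placeholders.
emb_a = model.embed('MKTAYIAKQR').detach().cpu()  # shape (1, 512)
emb_b = model.embed('MKTAYIAKQL').detach().cpu()
similarity = torch.nn.functional.cosine_similarity(emb_a, emb_b)
print(similarity.item())
```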
davelotito/donut_experiment_bayesian_trial_7
davelotito
2024-06-04T18:06:27Z
50
0
transformers
[ "transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "base_model:naver-clova-ix/donut-base", "base_model:finetune:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-06-04T16:45:42Z
--- license: mit base_model: naver-clova-ix/donut-base tags: - generated_from_trainer metrics: - bleu - wer model-index: - name: donut_experiment_bayesian_trial_7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut_experiment_bayesian_trial_7 This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3786 - Bleu: 0.0669 - Precisions: [0.8477801268498943, 0.7836538461538461, 0.7465181058495822, 0.7052980132450332] - Brevity Penalty: 0.0870 - Length Ratio: 0.2905 - Translation Length: 473 - Reference Length: 1628 - Cer: 0.7532 - Wer: 0.8192 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3.540464175534869e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Cer | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:|:--------------------------------------------------------------------------------:|:---------------:|:------------:|:------------------:|:----------------:|:------:|:------:| | 0.5323 | 1.0 | 253 | 0.4204 | 0.0580 | [0.7710084033613446, 0.6778042959427207, 0.6132596685082873, 0.5639344262295082] | 0.0889 | 0.2924 | 476 | 1628 | 0.7617 | 0.8431 | | 0.2487 | 2.0 | 506 | 0.3788 | 0.0609 | [0.8123667377398721, 0.7402912621359223, 0.6929577464788732, 0.6476510067114094] | 0.0845 | 0.2881 | 469 | 1628 | 0.7561 | 0.8279 | | 0.1746 | 3.0 | 759 | 0.3551 | 0.0652 | [0.836864406779661, 0.7759036144578313, 0.729050279329609, 0.6843853820598007] | 0.0864 | 0.2899 | 472 | 1628 | 0.7541 | 0.8213 | | 0.1191 | 4.0 | 1012 | 0.3690 | 0.0680 | [0.8547368421052631, 0.784688995215311, 0.7451523545706371, 0.7039473684210527] | 0.0883 | 0.2918 | 475 | 1628 | 0.7514 | 0.8192 | | 0.1072 | 5.0 | 1265 | 0.3786 | 0.0669 | [0.8477801268498943, 0.7836538461538461, 0.7465181058495822, 0.7052980132450332] | 0.0870 | 0.2905 | 473 | 1628 | 0.7532 | 0.8192 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.1.0 - Datasets 2.18.0 - Tokenizers 0.19.1
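The card omits an inference snippet; a hedged sketch using the standard Donut pattern follows. The task prompt token (`<s_cord-v2>` here) depends on how the fine-tune was configured and is an assumption, as is the presence of processor files in this repo.

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "davelotito/donut_experiment_bayesian_trial_7"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("document.png").convert("RGB")  # placeholder input
pixel_values = processor(image, return_tensors="pt").pixel_values

# The task prompt token is an assumption; match it to the fine-tuning setup.
task_prompt = "<s_cord-v2>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(
    pixel_values, decoder_input_ids=decoder_input_ids, max_length=512
)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```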
mascfree/llama-3-8b-Instruct-bnb-4bit-lora-64-64_grupo_elite_tesis-lora
mascfree
2024-06-04T18:03:57Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-04T18:03:36Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-Instruct-bnb-4bit --- # Uploaded model - **Developed by:** mascfree - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
martinsinnona/visdecode_vega_5
martinsinnona
2024-06-04T17:52:11Z
49
0
transformers
[ "transformers", "safetensors", "pix2struct", "image-text-to-text", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-06-04T17:29:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aymanboufarhi/2B-chat-bot-fstt
aymanboufarhi
2024-06-04T17:51:29Z
142
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-04T12:23:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Shalie/SukoyaKanaPonyXL
Shalie
2024-06-04T17:48:05Z
2
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:AstraliteHeart/pony-diffusion-v6", "base_model:adapter:AstraliteHeart/pony-diffusion-v6", "license:other", "region:us" ]
text-to-image
2024-06-04T17:46:01Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: >- score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:spsukoyaKanaXLPony-000003:1> sukoya1st, ahoge, mole under eye, x hair ornament, nurse cap, bandaged arm, bandages, white apron, white wrist cuffs, white gloves, nurse dress, blush, hands on own cheeks, hands on own face, looking at viewer, mouth hold, solo, cat cutout, couch, indoors, coy parameters: negative_prompt: 3d, monochrome, greyscale output: url: >- images/06382-2950229766-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_spsukoyaKanaXLPony-000003_1_ sukoya1st, ahoge, mole un.png - text: >- score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:spsukoyaKanaXLPony-000003:1> sukoya5th, mole under eye, x hair ornament, black choker, beret, grey jacket, white dress, open jacket, blush, clenched teeth, crossed arms, hands up, looking at viewer, nose blush, smile, solo, standing, tears, trembling, floating, full body, simple background, white background, determined parameters: negative_prompt: 3d, monochrome, greyscale output: url: >- images/06403-1784443102-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_spsukoyaKanaXLPony-000003_1_ sukoya5th, mole under eye.png - text: >- score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:spsukoyaKanaXLPony-000003:1> sukoya5th, mole under eye, x hair ornament, black choker, beret, grey jacket, white dress, open jacket, closed mouth, holding, holding cup, holding plate, sitting, solo, colored pencil (medium), indoors, painting (medium), slug, traditional media, water, watercolor (medium), wet, relieved parameters: negative_prompt: 3d, monochrome, greyscale output: url: >- images/06401-3811754092-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_spsukoyaKanaXLPony-000003_1_ sukoya5th, mole under eye.png - text: >- score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:spsukoyaKanaXLPony-000003:1> sukoya5th, mole under eye, x hair ornament, black choker, beret, grey jacket, white dress, open jacket, closed mouth, cropped legs, hands in pockets, looking at viewer, solo, standing, autumn, leaf, radiation symbol, star (symbol), tree, white background, wind, depressed parameters: negative_prompt: 3d, monochrome, greyscale output: url: >- images/06399-4103591333-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_spsukoyaKanaXLPony-000003_1_ sukoya5th, mole under eye.png - text: >- score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:spsukoyaKanaXLPony-000003:1> sukoya4rd, mole under eye, x hair ornament, black choker, purple dress, cleavage cutout, heart cutout, maid headdress, purple gloves, corset, wrist cuffs blush, closed mouth, food in mouth, half-closed eyes, looking at viewer, mouth hold, solo, v arms, bird, floral background, flower, personification, rose, striped, happy parameters: negative_prompt: 3d, monochrome, greyscale output: url: >- images/06398-2804258537-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_spsukoyaKanaXLPony-000003_1_ sukoya4rd, mole under eye.png - text: >- score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:spsukoyaKanaXLPony-000003:1> sukoya4rd, mole under eye, x hair ornament, black choker, purple dress, cleavage cutout, heart cutout, maid headdress, purple gloves, corset, wrist cuffs :d, blush, chibi, looking at viewer, open mouth, smile, solo, 
standing, flower, stuffed animal, stuffed rabbit, stuffed toy, white flower, hopeful parameters: negative_prompt: 3d, monochrome, greyscale output: url: >- images/06397-1381099730-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_spsukoyaKanaXLPony-000003_1_ sukoya4rd, mole under eye.png - text: >- score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:spsukoyaKanaXLPony-000003:1> sukoya4rd, mole under eye, x hair ornament, black choker, purple dress, cleavage cutout, heart cutout, maid headdress, purple gloves, corset, wrist cuffs closed mouth, looking away, looking to the side, solo, argyle, argyle background, bat (animal), heart, confident parameters: negative_prompt: 3d, monochrome, greyscale output: url: >- images/06396-1998245858-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_spsukoyaKanaXLPony-000003_1_ sukoya4rd, mole under eye.png - text: >- score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:spsukoyaKanaXLPony-000003:1> sukoya3rd, mole under eye, x hair ornament, hairclip, school uniform, white shirt, black sailor collar, serafuku, black skirt, glasses, bent over, cellphone, expressionless, holding, holding bowl, holding cup, legs apart, solo, floating hair, from above, full body, reflection, reflective water, sky, star (sky), starry sky, striped, shocked parameters: negative_prompt: 3d, monochrome, greyscale output: url: >- images/06395-3109491790-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_spsukoyaKanaXLPony-000003_1_ sukoya3rd, mole under eye.png - text: >- score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:spsukoyaKanaXLPony-000003:1> sukoya3rd, mole under eye, x hair ornament, hairclip, school uniform, white shirt, black sailor collar, serafuku, black skirt, glasses, closed mouth, finger to mouth, hand up, looking at viewer, shushing, smile, solo, + +, drinking straw, grey background, milk carton, polka dot background, signature, upper body, thoughtful parameters: negative_prompt: 3d, monochrome, greyscale output: url: >- images/06394-1290925342-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_spsukoyaKanaXLPony-000003_1_ sukoya3rd, mole under eye.png - text: >- score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:spsukoyaKanaXLPony-000003:1> sukoya3rd, mole under eye, x hair ornament, hairclip, school uniform, white shirt, black sailor collar, serafuku, black skirt, glasses, bathing, solo, day, from side, indoors, photo (object), sleepy parameters: negative_prompt: 3d, monochrome, greyscale output: url: >- images/06393-1706537189-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_spsukoyaKanaXLPony-000003_1_ sukoya3rd, mole under eye.png - text: >- score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:spsukoyaKanaXLPony-000003:1> sukoya2nd, mole under eye, x hair ornament, black choker, black nails, hair flower, blue rose, eyepatch, mini top hat, see-through, black dress, gothic lolita, lolita fashion, frills, frilled dress, :/, looking at viewer, solo, blue sky, cloud, scenery, sky, envious parameters: negative_prompt: 3d, monochrome, greyscale output: url: >- images/06391-2435279704-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_spsukoyaKanaXLPony-000003_1_ sukoya2nd, mole under eye.png - text: >- score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:spsukoyaKanaXLPony-000003:1> sukoya2nd, mole under eye, x hair 
ornament, black choker, black nails, hair flower, blue rose, eyepatch, mini top hat, see-through, black dress, gothic lolita, lolita fashion, frills, frilled dress, flower in mouth, holding, holding jewelry, looking away, lying, mouth hold, on back, parted lips, profile, sitting, solo, afloat, blurry, depth of field, food, fruit, grapes, leaf, partially submerged, upside-down, water, wet, hungry parameters: negative_prompt: 3d, monochrome, greyscale output: url: >- images/06390-3121340074-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_spsukoyaKanaXLPony-000003_1_ sukoya2nd, mole under eye.png - text: >- score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, <lora:spsukoyaKanaXLPony-000003:1> sukoya2nd, mole under eye, x hair ornament, black choker, black nails, hair flower, blue rose, eyepatch, mini top hat, see-through, black dress, gothic lolita, lolita fashion, frills, frilled dress, blush, d:, looking at viewer, one eye closed, open mouth, sitting, solo, wariza, wavy mouth, box, gift, gift box, indoors, :q parameters: negative_prompt: 3d, monochrome, greyscale output: url: >- images/06389-2836207797-score_9, score_8_up, score_7_up, uncensored, source_anime, 1girl, _lora_spsukoyaKanaXLPony-000003_1_ sukoya2nd, mole under eye.png base_model: AstraliteHeart/pony-diffusion-v6 instance_prompt: null license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ --- # Sukoya Kana - Nijisanji <Gallery /> ## Model description Sukoya Kana - Nijisanji! Trained on 5 outfits; each has a trigger word corresponding to the character's appearance, plus suggested prompts that summon the related clothes and accessories. Works well at 0.7-1.0 LoRA weight. ## Trigger words Debut Outfit: `sukoya1st, ahoge, mole under eye, x hair ornament, nurse cap, bandaged arm, bandages, white apron, white wrist cuffs, white gloves, nurse dress` Second Outfit: `sukoya2nd, mole under eye, x hair ornament, black choker, black nails, hair flower, blue rose, eyepatch, mini top hat, see-through, black dress, gothic lolita, lolita fashion, frills, frilled dress` Third Outfit: `sukoya3rd, mole under eye, x hair ornament, hairclip, school uniform, white shirt, black sailor collar, serafuku, black skirt, glasses` Fourth Outfit: `sukoya4rd, mole under eye, x hair ornament, black choker, purple dress, cleavage cutout, heart cutout, maid headdress, purple gloves, corset, wrist cuffs` Fifth Outfit: `sukoya5th, mole under eye, x hair ornament, black choker, beret, grey jacket, white dress, open jacket` ## Download model Weights for this model are available in Safetensors format. [Download](/Shalie/SukoyaKanaPonyXL/tree/main) them in the Files & versions tab. ### License This LoRA model is provided under the [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/) license. ## Restrictions: - **Usage in Generation Services**: You are not allowed to use the model in any generation services without proper permission from the original creator. - **Commercial Usage**: The sale of the model or any commercial usage is strictly prohibited without explicit written permission from the original creator.
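A hedged diffusers usage sketch for this LoRA follows; whether the linked base repo ships diffusers-format SDXL weights is an assumption, so substitute whichever Pony Diffusion V6 XL checkpoint you actually use.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Hedged sketch; the base checkpoint id is an assumption.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "AstraliteHeart/pony-diffusion-v6", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Shalie/SukoyaKanaPonyXL")

# Prompt built from the card's score tags and the debut-outfit trigger words.
prompt = (
    "score_9, score_8_up, score_7_up, source_anime, 1girl, sukoya1st, ahoge, "
    "mole under eye, x hair ornament, nurse cap, white apron, nurse dress"
)
image = pipe(prompt, negative_prompt="3d, monochrome, greyscale").images[0]
image.save("sukoya_kana.png")
```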
MuhammadRozaq2001/idefics-9b-radVQA-smallEpoch
MuhammadRozaq2001
2024-06-04T17:48:04Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:HuggingFaceM4/idefics-9b", "base_model:adapter:HuggingFaceM4/idefics-9b", "region:us" ]
null
2024-06-04T17:08:55Z
--- library_name: peft base_model: HuggingFaceM4/idefics-9b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.2.dev0
malteos/scincl
malteos
2024-06-04T17:45:02Z
20,157
34
sentence-transformers
[ "sentence-transformers", "pytorch", "safetensors", "bert", "feature-extraction", "transformers", "en", "dataset:SciDocs", "dataset:s2orc", "arxiv:2202.06671", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- tags: - feature-extraction - sentence-transformers - transformers library_name: sentence-transformers language: en datasets: - SciDocs - s2orc metrics: - F1 - accuracy - map - ndcg license: mit --- ## SciNCL SciNCL is a pre-trained BERT language model to generate document-level embeddings of research papers. It uses the citation graph neighborhood to generate samples for contrastive learning. Prior to the contrastive training, the model is initialized with weights from [scibert-scivocab-uncased](https://huggingface.co/allenai/scibert_scivocab_uncased). The underlying citation embeddings are trained on the [S2ORC citation graph](https://github.com/allenai/s2orc). Paper: [Neighborhood Contrastive Learning for Scientific Document Representations with Citation Embeddings (EMNLP 2022 paper)](https://arxiv.org/abs/2202.06671). Code: https://github.com/malteos/scincl PubMedNCL: Working with biomedical papers? Try [PubMedNCL](https://huggingface.co/malteos/PubMedNCL). ## How to use the pretrained model ### Sentence Transformers ```python from sentence_transformers import SentenceTransformer # Load the model model = SentenceTransformer("malteos/scincl") # Concatenate the title and abstract with the [SEP] token papers = [ "BERT [SEP] We introduce a new language representation model called BERT", "Attention is all you need [SEP] The dominant sequence transduction models are based on complex recurrent or convolutional neural networks", ] # Inference embeddings = model.encode(papers) # Compute the (cosine) similarity between embeddings similarity = model.similarity(embeddings[0], embeddings[1]) print(similarity.item()) # => 0.8440517783164978 ``` ### Transformers ```python import torch from transformers import AutoTokenizer, AutoModel # load model and tokenizer tokenizer = AutoTokenizer.from_pretrained('malteos/scincl') model = AutoModel.from_pretrained('malteos/scincl') papers = [{'title': 'BERT', 'abstract': 'We introduce a new language representation model called BERT'}, {'title': 'Attention is all you need', 'abstract': ' The dominant sequence transduction models are based on complex recurrent or convolutional neural networks'}] # concatenate title and abstract with [SEP] token title_abs = [d['title'] + tokenizer.sep_token + (d.get('abstract') or '') for d in papers] # preprocess the input inputs = tokenizer(title_abs, padding=True, truncation=True, return_tensors="pt", max_length=512) # inference result = model(**inputs) # take the first token ([CLS] token) in the batch as the embedding embeddings = result.last_hidden_state[:, 0, :] # calculate the similarity embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1) similarity = (embeddings[0] @ embeddings[1].T) print(similarity.item()) # => 0.8440518379211426 ``` ## Triplet Mining Parameters | **Setting** | **Value** | |-------------------------|--------------------| | seed | 4 | | triples_per_query | 5 | | easy_positives_count | 5 | | easy_positives_strategy | 5 | | easy_positives_k | 20-25 | | easy_negatives_count | 3 | | easy_negatives_strategy | random_without_knn | | hard_negatives_count | 2 | | hard_negatives_strategy | knn | | hard_negatives_k | 3998-4000 | ## SciDocs Results These model weights are the ones that yielded the best results on SciDocs (`seed=4`). In the paper we report the SciDocs results as mean over ten seeds.
| **model** | **mag-f1** | **mesh-f1** | **co-view-map** | **co-view-ndcg** | **co-read-map** | **co-read-ndcg** | **cite-map** | **cite-ndcg** | **cocite-map** | **cocite-ndcg** | **recomm-ndcg** | **recomm-P@1** | **Avg** | |-------------------|-----------:|------------:|----------------:|-----------------:|----------------:|-----------------:|-------------:|--------------:|---------------:|----------------:|----------------:|---------------:|--------:| | Doc2Vec | 66.2 | 69.2 | 67.8 | 82.9 | 64.9 | 81.6 | 65.3 | 82.2 | 67.1 | 83.4 | 51.7 | 16.9 | 66.6 | | fasttext-sum | 78.1 | 84.1 | 76.5 | 87.9 | 75.3 | 87.4 | 74.6 | 88.1 | 77.8 | 89.6 | 52.5 | 18 | 74.1 | | SGC | 76.8 | 82.7 | 77.2 | 88 | 75.7 | 87.5 | 91.6 | 96.2 | 84.1 | 92.5 | 52.7 | 18.2 | 76.9 | | SciBERT | 79.7 | 80.7 | 50.7 | 73.1 | 47.7 | 71.1 | 48.3 | 71.7 | 49.7 | 72.6 | 52.1 | 17.9 | 59.6 | | SPECTER | 82 | 86.4 | 83.6 | 91.5 | 84.5 | 92.4 | 88.3 | 94.9 | 88.1 | 94.8 | 53.9 | 20 | 80 | | SciNCL (10 seeds) | 81.4 | 88.7 | 85.3 | 92.3 | 87.5 | 93.9 | 93.6 | 97.3 | 91.6 | 96.4 | 53.9 | 19.3 | 81.8 | | **SciNCL (seed=4)** | 81.2 | 89.0 | 85.3 | 92.2 | 87.7 | 94.0 | 93.6 | 97.4 | 91.7 | 96.5 | 54.3 | 19.6 | 81.9 | Additional evaluations are available in the paper. ## License MIT
Andhikuys/emotion_recog
Andhikuys
2024-06-04T17:44:03Z
218
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-06-04T17:43:44Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: emotion_recog results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.5375 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # emotion_recog This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2377 - Accuracy: 0.5375 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 10 | 1.6772 | 0.4062 | | No log | 2.0 | 20 | 1.5802 | 0.4437 | | No log | 3.0 | 30 | 1.4877 | 0.4875 | | No log | 4.0 | 40 | 1.4649 | 0.475 | | No log | 5.0 | 50 | 1.4092 | 0.5 | | No log | 6.0 | 60 | 1.3454 | 0.5188 | | No log | 7.0 | 70 | 1.3469 | 0.5312 | | No log | 8.0 | 80 | 1.3010 | 0.5375 | | No log | 9.0 | 90 | 1.2688 | 0.5563 | | No log | 10.0 | 100 | 1.2854 | 0.5563 | | No log | 11.0 | 110 | 1.2516 | 0.5437 | | No log | 12.0 | 120 | 1.2819 | 0.5312 | | No log | 13.0 | 130 | 1.2228 | 0.5875 | | No log | 14.0 | 140 | 1.2250 | 0.5813 | | No log | 15.0 | 150 | 1.2177 | 0.5563 | | No log | 16.0 | 160 | 1.2172 | 0.55 | | No log | 17.0 | 170 | 1.2198 | 0.6 | | No log | 18.0 | 180 | 1.2341 | 0.5563 | | No log | 19.0 | 190 | 1.2206 | 0.6 | | No log | 20.0 | 200 | 1.1635 | 0.5813 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
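A minimal inference sketch (not from the card); the image path is a placeholder, and the emotion label set depends on the `imagefolder` classes used in training.

```python
from PIL import Image
from transformers import pipeline

# Hedged usage sketch for the fine-tuned ViT classifier.
classifier = pipeline("image-classification", model="Andhikuys/emotion_recog")
print(classifier(Image.open("face.jpg")))  # placeholder input image
```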
m1b/2024_06_04_act_reachy2_teleop_remi_aug_60000
m1b
2024-06-04T17:42:51Z
51
0
transformers
[ "transformers", "safetensors", "pytorch_model_hub_mixin", "model_hub_mixin", "endpoints_compatible", "region:us" ]
null
2024-06-04T17:42:50Z
--- tags: - pytorch_model_hub_mixin - model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
Hrishikesh11/llama-3-8b-Instruct-bnb-4bit-medical
Hrishikesh11
2024-06-04T17:41:17Z
3
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-06-04T16:44:08Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft base_model: unsloth/llama-3-8b-Instruct-bnb-4bit --- # Uploaded model - **Developed by:** Hrishikesh11 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
rajtest/tinyllama-v1
rajtest
2024-06-04T17:33:54Z
107
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "unsloth", "generated_from_trainer", "base_model:unsloth/tinyllama-bnb-4bit", "base_model:adapter:unsloth/tinyllama-bnb-4bit", "license:apache-2.0", "region:us" ]
null
2024-05-30T13:06:08Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - unsloth - generated_from_trainer base_model: unsloth/tinyllama-bnb-4bit model-index: - name: tinyllama-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tinyllama-v1 This model is a fine-tuned version of [unsloth/tinyllama-bnb-4bit](https://huggingface.co/unsloth/tinyllama-bnb-4bit) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 3407 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.11.1 - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
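Since this repo holds a PEFT adapter rather than full weights, a hedged loading sketch with `AutoPeftModelForCausalLM` follows; the prompt is a placeholder, and if the adapter repo lacks tokenizer files, load the tokenizer from the base model `unsloth/tinyllama-bnb-4bit` instead.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Hedged sketch: loads the base model and applies the adapter in one call.
model = AutoPeftModelForCausalLM.from_pretrained("rajtest/tinyllama-v1")
tokenizer = AutoTokenizer.from_pretrained("rajtest/tinyllama-v1")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```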
dhruvk29/Airavata-Q4_K_M-GGUF
dhruvk29
2024-06-04T17:30:52Z
0
0
null
[ "gguf", "multilingual", "instruction-tuning", "llama2", "llama-cpp", "gguf-my-repo", "en", "hi", "dataset:ai4bharat/indic-instruct-data-v0.1", "base_model:ai4bharat/Airavata", "base_model:quantized:ai4bharat/Airavata", "license:llama2", "model-index", "endpoints_compatible", "region:us" ]
null
2024-06-04T17:30:41Z
--- language: - en - hi license: llama2 tags: - multilingual - instruction-tuning - llama2 - llama-cpp - gguf-my-repo base_model: ai4bharat/Airavata datasets: - ai4bharat/indic-instruct-data-v0.1 model-index: - name: Airavata results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 46.5 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ai4bharat/Airavata name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 69.26 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ai4bharat/Airavata name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 43.9 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ai4bharat/Airavata name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 40.62 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ai4bharat/Airavata name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 68.82 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ai4bharat/Airavata name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 4.02 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ai4bharat/Airavata name: Open LLM Leaderboard --- # dhruvk29/Airavata-Q4_K_M-GGUF This model was converted to GGUF format from [`ai4bharat/Airavata`](https://huggingface.co/ai4bharat/Airavata) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ai4bharat/Airavata) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama --hf-repo dhruvk29/Airavata-Q4_K_M-GGUF --hf-file airavata-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo dhruvk29/Airavata-Q4_K_M-GGUF --hf-file airavata-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). 
``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./main --hf-repo dhruvk29/Airavata-Q4_K_M-GGUF --hf-file airavata-q4_k_m.gguf -p "The meaning of life and the universe is" ``` or ``` ./server --hf-repo dhruvk29/Airavata-Q4_K_M-GGUF --hf-file airavata-q4_k_m.gguf -c 2048 ```
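For programmatic use, a minimal llama-cpp-python sketch (an editorial addition, not part of the original card; assumes `pip install llama-cpp-python` and the repo/filename shown above):

```python
# Minimal sketch: fetch the GGUF file straight from the Hub and run a completion.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="dhruvk29/Airavata-Q4_K_M-GGUF",
    filename="airavata-q4_k_m.gguf",
    n_ctx=2048,  # same context size as the server example above
)

out = llm("The meaning of life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```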
tyzhu/squad_qa_baseline_v5_full_Qwen_Qwen1.5-4B_3e-5_lora
tyzhu
2024-06-04T17:28:13Z
4
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:Qwen/Qwen1.5-4B", "base_model:adapter:Qwen/Qwen1.5-4B", "license:other", "region:us" ]
null
2024-06-04T14:54:16Z
--- license: other base_model: Qwen/Qwen1.5-4B tags: - generated_from_trainer metrics: - accuracy model-index: - name: squad_qa_baseline_v5_full_Qwen_Qwen1.5-4B_3e-5_lora results: [] library_name: peft --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # squad_qa_baseline_v5_full_Qwen_Qwen1.5-4B_3e-5_lora This model is a fine-tuned version of [Qwen/Qwen1.5-4B](https://huggingface.co/Qwen/Qwen1.5-4B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.8632 - Accuracy: 0.5660 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 1 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 50.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.9916 | 74 | 2.0550 | 0.5952 | | 2.3403 | 1.9966 | 149 | 2.0411 | 0.5933 | | 2.0198 | 2.9883 | 223 | 2.0403 | 0.5932 | | 2.0198 | 3.9933 | 298 | 2.0647 | 0.5922 | | 1.9239 | 4.9983 | 373 | 2.0999 | 0.5921 | | 1.7309 | 5.9899 | 447 | 2.1973 | 0.5879 | | 1.5254 | 6.9950 | 522 | 2.2753 | 0.5861 | | 1.5254 | 8.0 | 597 | 2.4079 | 0.5819 | | 1.2937 | 8.9916 | 671 | 2.5096 | 0.5775 | | 1.0409 | 9.9966 | 746 | 2.6079 | 0.5739 | | 0.8766 | 10.9883 | 820 | 2.7579 | 0.5718 | | 0.8766 | 11.9933 | 895 | 2.8722 | 0.5688 | | 0.721 | 12.9983 | 970 | 2.9797 | 0.5672 | | 0.6011 | 13.9899 | 1044 | 3.0708 | 0.5662 | | 0.5455 | 14.9950 | 1119 | 3.1660 | 0.5648 | | 0.5455 | 16.0 | 1194 | 3.2479 | 0.5650 | | 0.5003 | 16.9916 | 1268 | 3.2445 | 0.5655 | | 0.4683 | 17.9966 | 1343 | 3.2800 | 0.5638 | | 0.457 | 18.9883 | 1417 | 3.4280 | 0.5640 | | 0.457 | 19.9933 | 1492 | 3.4113 | 0.5662 | | 0.4441 | 20.9983 | 1567 | 3.4731 | 0.5637 | | 0.4327 | 21.9899 | 1641 | 3.5407 | 0.5639 | | 0.4308 | 22.9950 | 1716 | 3.4811 | 0.5640 | | 0.4308 | 24.0 | 1791 | 3.5854 | 0.5642 | | 0.4245 | 24.9916 | 1865 | 3.5206 | 0.5640 | | 0.416 | 25.9966 | 1940 | 3.6091 | 0.5638 | | 0.4173 | 26.9883 | 2014 | 3.5707 | 0.5643 | | 0.4173 | 27.9933 | 2089 | 3.6671 | 0.5648 | | 0.4117 | 28.9983 | 2164 | 3.6267 | 0.5631 | | 0.409 | 29.9899 | 2238 | 3.6658 | 0.5604 | | 0.4085 | 30.9950 | 2313 | 3.6984 | 0.5621 | | 0.4085 | 32.0 | 2388 | 3.6584 | 0.5660 | | 0.403 | 32.9916 | 2462 | 3.5848 | 0.5626 | | 0.404 | 33.9966 | 2537 | 3.6365 | 0.5631 | | 0.4013 | 34.9883 | 2611 | 3.7047 | 0.5647 | | 0.4013 | 35.9933 | 2686 | 3.7735 | 0.5643 | | 0.3987 | 36.9983 | 2761 | 3.6867 | 0.5657 | | 0.3951 | 37.9899 | 2835 | 3.7349 | 0.5662 | | 0.3971 | 38.9950 | 2910 | 3.7173 | 0.5643 | | 0.3971 | 40.0 | 2985 | 3.8004 | 0.5643 | | 0.3939 | 40.9916 | 3059 | 3.8041 | 0.5636 | | 0.3912 | 41.9966 | 3134 | 3.8263 | 0.5648 | | 0.3941 | 42.9883 | 3208 | 3.7954 | 0.5646 | | 0.3941 | 43.9933 | 3283 | 3.8001 | 0.5637 | | 0.3878 | 44.9983 | 3358 | 3.8438 | 0.5634 | | 0.3879 | 45.9899 | 3432 | 3.8626 | 0.5631 | | 0.3907 | 46.9950 | 3507 | 3.7882 | 0.5645 | | 0.3907 | 48.0 
| 3582 | 3.8001 | 0.5622 | | 0.3864 | 48.9916 | 3656 | 3.7201 | 0.5609 | | 0.3871 | 49.5812 | 3700 | 3.8632 | 0.5660 | ### Framework versions - PEFT 0.5.0 - Transformers 4.40.2 - Pytorch 2.3.0 - Datasets 2.19.1 - Tokenizers 0.19.1
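Since this repo stores only the LoRA adapter, a minimal evaluation-time sketch (an assumption, not from the card) is to attach it to its Qwen/Qwen1.5-4B base with PEFT:

```python
# Minimal sketch: load the base model, attach the LoRA adapter, and generate.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-4B", torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, "tyzhu/squad_qa_baseline_v5_full_Qwen_Qwen1.5-4B_3e-5_lora")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-4B")

# Illustrative SQuAD-style prompt; the exact training format is not documented above.
inputs = tokenizer("Question: In what country is Normandy located?\nAnswer:", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```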
zzy2524/french_double_negation_model
zzy2524
2024-06-04T17:22:51Z
194
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-04T16:58:14Z
--- license: apache-2.0 base_model: distilbert/distilbert-base-uncased tags: - generated_from_trainer metrics: - f1 - accuracy model-index: - name: french_double_negation_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # french_double_negation_model This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0315 - F1: 1.0 - Roc Auc: 1.0 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---:|:-------:|:--------:| | No log | 1.0 | 153 | 0.0539 | 1.0 | 1.0 | 1.0 | | No log | 2.0 | 306 | 0.0315 | 1.0 | 1.0 | 1.0 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
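A minimal inference sketch for the classifier above (an editorial addition; it assumes the checkpoint ships its fine-tuned head and label mapping, which the auto-generated card does not document):

```python
# Minimal sketch: score a sentence with the fine-tuned DistilBERT classifier.
from transformers import pipeline

clf = pipeline("text-classification", model="zzy2524/french_double_negation_model")
# Illustrative input; the card does not specify the expected input format.
print(clf("Je ne dis pas que ce n'est pas vrai."))
```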
Niggendar/bluephoenixponymix_v10
Niggendar
2024-06-04T17:18:10Z
122
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-06-04T13:35:11Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
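The repo tags identify this as a `StableDiffusionXLPipeline` checkpoint, so a minimal text-to-image sketch (an assumption based on those tags, since the card itself is an empty template) looks like:

```python
# Minimal sketch: load the SDXL pipeline and render one image on a CUDA GPU.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "Niggendar/bluephoenixponymix_v10", torch_dtype=torch.float16
).to("cuda")

image = pipe("a blue phoenix over a night city, digital art").images[0]  # illustrative prompt
image.save("phoenix.png")
```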
coderbojack/google-gemma-2b-1717520428
coderbojack
2024-06-04T17:10:55Z
143
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-04T17:00:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
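Since the card above is an empty template, here is a minimal generation sketch inferred from the repo tags (`gemma`, `text-generation`); the prompt is illustrative only:

```python
# Minimal sketch: standard transformers causal-LM loading and generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "coderbojack/google-gemma-2b-1717520428"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```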
omikhailovskii/ppo-Huggy
omikhailovskii
2024-06-04T17:05:51Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2024-06-04T17:04:23Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: omikhailovskii/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
shirleyah/q30_explicit
shirleyah
2024-06-04T17:02:03Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
null
2024-06-04T16:44:56Z
--- license: llama3 library_name: peft tags: - trl - sft - generated_from_trainer base_model: meta-llama/Meta-Llama-3-8B-Instruct model-index: - name: q30_explicit results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # q30_explicit This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.11.2.dev0 - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
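This repo holds a LoRA adapter for the gated Meta-Llama-3-8B-Instruct base, so a minimal deployment sketch (an assumption, not from the card; requires access to the base model) is to merge the adapter into the base weights:

```python
# Minimal sketch: load base + adapter in one call, then fold the LoRA weights in.
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained("shirleyah/q30_explicit")
merged = model.merge_and_unload()  # produces a plain transformers model
merged.save_pretrained("q30_explicit-merged")
```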
geeknix/llama-simple-7B-4-june-adapters
geeknix
2024-06-04T16:57:35Z
0
0
transformers
[ "transformers", "safetensors", "facebook", "meta", "pytorch", "llama", "llama-2", "text-generation", "en", "arxiv:2307.09288", "license:llama2", "endpoints_compatible", "region:us" ]
text-generation
2024-06-04T16:14:36Z
--- extra_gated_heading: You need to share contact information with Meta to access this model extra_gated_prompt: >- ### LLAMA 2 COMMUNITY LICENSE AGREEMENT "Agreement" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. "Documentation" means the specifications, manuals and documentation accompanying Llama 2 distributed by Meta at https://ai.meta.com/resources/models-and-libraries/llama-downloads/. "Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity's behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. "Llama 2" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/. "Llama Materials" means, collectively, Meta's proprietary Llama 2 and documentation (and any portion thereof) made available under this Agreement. "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking "I Accept" below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non- transferable and royalty-free limited license under Meta's intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make the Llama Materials, or any derivative works thereof, available to a third party, you shall provide a copy of this Agreement to such third party. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a "Notice" text file distributed as a part of such copies: "Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved." iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://ai.meta.com/llama/use-policy), which is hereby incorporated by reference into this Agreement. v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof). 2. Additional Commercial Terms. 
If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee's affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials. b. Subject to Meta's ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. 
The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Llama 2 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 2. If you access or use Llama 2, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at [ai.meta.com/llama/use-policy](http://ai.meta.com/llama/use-policy). #### Prohibited Uses We want everyone to use Llama 2 safely and responsibly. You agree you will not use, or allow others to use, Llama 2 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 2 related to the following: 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State 2. Guns and illegal weapons (including weapon development) 3. Illegal drugs and regulated/controlled substances 4. Operation of critical infrastructure, transportation technologies, or heavy machinery 5. Self-harm or harm to others, including suicide, cutting, and eating disorders 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 2 related to the following: 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 2. 
Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 3. Generating, promoting, or further distributing spam 4. Impersonating another individual without consent, authorization, or legal right 5. Representing that the use of Llama 2 or outputs are human-generated 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement 4. Fail to appropriately disclose to end users any known dangers of your AI system Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama: [LlamaUseReport@meta.com](mailto:LlamaUseReport@meta.com) extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: >- The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit language: - en pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 license: llama2 library_name: transformers --- # **Llama 2** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. 
||Training Data|Params|Content Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>| *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. The larger 70B model uses Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288) ## Intended Use **Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. To get the expected features and performance for the chat versions, a specific format must be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double spaces); a minimal example appears after this card. See our reference code on GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212). **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program. ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)| |---|---|---|---| |Llama 2 7B|184320|400|31.22| |Llama 2 13B|368640|400|62.44| |Llama 2 70B|1720320|400|291.42| |Total|3311616||539.00| **CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used, adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. 
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. ## Evaluation Results In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library. |Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *Math:* We report the average of the GSM8K (8-shot) and MATH (4-shot) benchmarks at top 1. |||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). |||TruthfulQA|Toxigen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. ## Ethical Considerations and Limitations Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. 
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide) ## Reporting Issues Please report any software “bug,” or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) - Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Llama Model Index |Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf| |---|---|---|---|---| |7B| [Link](https://huggingface.co/meta-llama/Llama-2-7b) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)| |13B| [Link](https://huggingface.co/meta-llama/Llama-2-13b) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)| |70B| [Link](https://huggingface.co/meta-llama/Llama-2-70b) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)|
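As a concrete companion to the `[INST]`/`<<SYS>>` formatting requirements described under Intended Use above, here is a minimal sketch (an editorial addition, not Meta's reference code; assumes access to the gated chat checkpoint) that renders the format via the tokenizer's own chat template rather than hand-built strings:

```python
# Minimal sketch: render the Llama-2 chat format from the tokenizer's template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize grouped-query attention in one sentence."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)  # emits the <s>[INST] <<SYS>> ... <</SYS>> ... [/INST] string
```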
0xWe11es/camel-llama2-h256-w1
0xWe11es
2024-06-04T16:56:35Z
77
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-04T16:43:16Z
--- license: apache-2.0 ---
flammenai/Mahou-1.2b-mistral-7B-GGUF
flammenai
2024-06-04T16:56:29Z
1
0
transformers
[ "transformers", "gguf", "dataset:flammenai/MahouMix-v1", "dataset:flammenai/FlameMix-DPO-v1", "base_model:flammenai/Mahou-1.2b-mistral-7B", "base_model:quantized:flammenai/Mahou-1.2b-mistral-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-06-04T14:43:24Z
--- library_name: transformers license: apache-2.0 base_model: - nbeerbower/Mahou-1.2b-mistral-7B datasets: - flammenai/MahouMix-v1 - flammenai/FlameMix-DPO-v1 --- ![image/png](https://huggingface.co/flammenai/Mahou-1.0-mistral-7B/resolve/main/mahou1.png) # Mahou-1.2b-mistral-7B Mahou is designed to provide short messages in a conversational context. It is capable of casual conversation and character roleplay. ### Chat Format This model has been trained to use the ChatML format. ``` <|im_start|>system {{system}}<|im_end|> <|im_start|>{{char}} {{message}}<|im_end|> <|im_start|>{{user}} {{message}}<|im_end|> ``` ### Roleplay Format - Speech without quotes. - Actions in `*asterisks*` ``` *leans against wall cooly* so like, i just casted a super strong spell at magician academy today, not gonna lie, felt badass. ``` ### SillyTavern Settings 1. Use ChatML for the Context Template. 2. Enable Instruct Mode. 3. Use the [Mahou preset](https://huggingface.co/datasets/flammenai/Mahou-ST-ChatML-Instruct/raw/main/Mahou.json). 4. *Recommended:* additional stopping strings: `["\n", "<|", "</"]` ### Method DPO-finetuned using an A100 on Google Colab. [Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne) ### Configuration LoRA, model, and training settings (imports added for completeness; `model_name`, `new_model`, `dataset`, and `tokenizer` are defined elsewhere in the notebook): ```python import torch from transformers import AutoModelForCausalLM, TrainingArguments from peft import LoraConfig from trl import DPOTrainer # LoRA configuration peft_config = LoraConfig( r=16, lora_alpha=16, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM", target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj'] ) # Model to fine-tune model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, load_in_4bit=True ) model.config.use_cache = False # Reference model ref_model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, load_in_4bit=True ) # Training arguments training_args = TrainingArguments( per_device_train_batch_size=4, gradient_accumulation_steps=4, gradient_checkpointing=True, learning_rate=5e-5, lr_scheduler_type="cosine", max_steps=200, save_strategy="no", logging_steps=1, output_dir=new_model, optim="paged_adamw_32bit", warmup_steps=100, bf16=True, report_to="wandb", ) # Create DPO trainer dpo_trainer = DPOTrainer( model, ref_model, args=training_args, train_dataset=dataset, tokenizer=tokenizer, peft_config=peft_config, beta=0.1, force_use_ref_model=True ) # Fine-tune model with DPO dpo_trainer.train() ```
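For local use of the GGUF files in this repo, a minimal llama-cpp-python sketch (an editorial addition; the quant filename pattern is an assumption, so pick a file actually present in the repo). llama-cpp-python's built-in `chatml` handler matches the template above:

```python
# Minimal sketch: chat with a GGUF quant of Mahou using the ChatML template.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="flammenai/Mahou-1.2b-mistral-7B-GGUF",
    filename="*Q4_K_M.gguf",  # glob; assumes a Q4_K_M quant exists in the repo
    chat_format="chatml",
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Mahou, a friendly companion."},
        {"role": "user", "content": "hey, how was your day?"},
    ],
    stop=["\n", "<|", "</"],  # the card's recommended stopping strings
)
print(out["choices"][0]["message"]["content"])
```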
hdve/google-gemma-7b-1717519870
hdve
2024-06-04T16:54:03Z
7
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-04T16:51:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Jingmei/PMC_LLAMA2_7B_trainer_lora
Jingmei
2024-06-04T16:52:07Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:chaoyi-wu/PMC_LLAMA_7B", "base_model:adapter:chaoyi-wu/PMC_LLAMA_7B", "license:apache-2.0", "region:us" ]
null
2024-05-31T20:38:05Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: chaoyi-wu/PMC_LLAMA_7B model-index: - name: PMC_LLAMA2_7B_trainer_lora results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/noc-lab/PMC_LLAMA2_7B_trainer_lora/runs/qm80zll8) # PMC_LLAMA2_7B_trainer_lora This model is a fine-tuned version of [chaoyi-wu/PMC_LLAMA_7B](https://huggingface.co/chaoyi-wu/PMC_LLAMA_7B) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 123 - distributed_type: multi-GPU - num_devices: 3 - gradient_accumulation_steps: 8 - total_train_batch_size: 1152 - total_eval_batch_size: 144 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1.0 ### Training results ### Framework versions - PEFT 0.11.2.dev0 - Transformers 4.42.0.dev0 - Pytorch 2.3.0 - Datasets 2.19.1 - Tokenizers 0.19.1
chreh/math-lora-llama-3-8B
chreh
2024-06-04T16:51:40Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-28T20:33:51Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-Instruct-bnb-4bit --- # Uploaded model - **Developed by:** chreh - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
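A minimal Unsloth loading sketch for this checkpoint (an editorial addition, not from the card; assumes a CUDA GPU and `pip install unsloth`):

```python
# Minimal sketch: load the 4-bit model with Unsloth and switch to inference mode.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="chreh/math-lora-llama-3-8B",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enables Unsloth's fast generation path
```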
Zoyd/failspy_Phi-3-mini-4k-geminified-5_0bpw_exl2
Zoyd
2024-06-04T16:49:54Z
5
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "nlp", "code", "conversational", "custom_code", "multilingual", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "5-bit", "exl2", "region:us" ]
text-generation
2024-06-04T16:36:05Z
--- license: mit license_link: https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/resolve/main/LICENSE language: - multilingual pipeline_tag: text-generation tags: - nlp - code inference: parameters: temperature: 0.7 widget: - messages: - role: user content: What's the difference between a banana and a strawberry? --- **Exllamav2** quant (**exl2** / **5.0 bpw**) made with ExLlamaV2 v0.1.3 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-2_2bpw_exl2)**</center> | <center>1217 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-2_5bpw_exl2)**</center> | <center>1342 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-3_0bpw_exl2)**</center> | <center>1558 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-3_5bpw_exl2)**</center> | <center>1774 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-3_75bpw_exl2)**</center> | <center>1882 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-4_0bpw_exl2)**</center> | <center>1990 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-4_25bpw_exl2)**</center> | <center>2099 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-5_0bpw_exl2)**</center> | <center>2423 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-6_0bpw_exl2)**</center> | <center>2870 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-6_5bpw_exl2)**</center> | <center>3089 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-8_0bpw_exl2)**</center> | <center>3620 MB</center> | <center>8</center> | # Phi-3-mini-128k-instruct- ~~abliterated-v3~~ -geminified Credit for the name goes to [u/Anduin1357](https://www.reddit.com/user/Anduin1357/) on Reddit, who [wrote this comment](https://www.reddit.com/r/LocalLLaMA/comments/1cmh6ru/comment/l31zkan/). [My Jupyter "cookbook" to replicate the methodology can be found here, refined library coming soon](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb) ## What's this? Well, after my abliterated models, I figured I should cover all the possible ground of such work and introduce a model that acts like the polar opposite of them. This is the result of that, and I feel it lines up in performance with a certain search engine's AI model series. ## Summary This is [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) with orthogonalized bfloat16 safetensor weights, generated with a refined methodology based on that which was described in the preview paper/blog post: '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)', which I encourage you to read to understand more. This model has been orthogonalized to act more like certain rhymes-with-Shmemini models.
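EXL2 weights load with the exllamav2 runtime rather than plain transformers, so a minimal first step (an added sketch; the choice of loader is up to you) is simply to pull the quant locally:

```python
# Minimal sketch: download this EXL2 quant, then point an exllamav2-based
# loader (e.g. text-generation-webui or TabbyAPI) at the resulting folder.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("Zoyd/failspy_Phi-3-mini-4k-geminified-5_0bpw_exl2")
print(local_dir)
```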
Zoyd/failspy_Phi-3-mini-4k-geminified-6_5bpw_exl2
Zoyd
2024-06-04T16:49:04Z
5
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "nlp", "code", "conversational", "custom_code", "multilingual", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-06-04T16:45:41Z
--- license: mit license_link: https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/resolve/main/LICENSE language: - multilingual pipeline_tag: text-generation tags: - nlp - code inference: parameters: temperature: 0.7 widget: - messages: - role: user content: What's the difference between a banana and a strawberry? --- **Exllamav2** quant (**exl2** / **6.5 bpw**) made with ExLlamaV2 v0.1.3 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-2_2bpw_exl2)**</center> | <center>1217 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-2_5bpw_exl2)**</center> | <center>1342 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-3_0bpw_exl2)**</center> | <center>1558 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-3_5bpw_exl2)**</center> | <center>1774 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-3_75bpw_exl2)**</center> | <center>1882 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-4_0bpw_exl2)**</center> | <center>1990 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-4_25bpw_exl2)**</center> | <center>2099 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-5_0bpw_exl2)**</center> | <center>2423 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-6_0bpw_exl2)**</center> | <center>2870 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-6_5bpw_exl2)**</center> | <center>3089 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-8_0bpw_exl2)**</center> | <center>3620 MB</center> | <center>8</center> | # Phi-3-mini-128k-instruct- ~~abliterated-v3~~ -geminified Credit for the name goes to [u/Anduin1357](https://www.reddit.com/user/Anduin1357/) on Reddit, who [wrote this comment](https://www.reddit.com/r/LocalLLaMA/comments/1cmh6ru/comment/l31zkan/). [My Jupyter "cookbook" to replicate the methodology can be found here; a refined library is coming soon](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb) ## What's this? Well, after my abliterated models, I figured I should cover all the possible ground of such work and introduce a model that acts like their polar opposite. This is the result, and I feel it lines up in performance with a certain search engine's AI model series. ## Summary This is [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) with orthogonalized bfloat16 safetensor weights, generated with a refined methodology based on the one described in the preview paper/blog post '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)', which I encourage you to read to learn more. This model has been orthogonalized to act more like certain rhymes-with-Shmemini models.
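A minimal loading sketch for any of the EXL2 quants in the table above, following the basic inference example from the exllamav2 repository; the local path is a placeholder (download the quant of your choice first):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./failspy_Phi-3-mini-4k-geminified-6_5bpw_exl2"  # placeholder local path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate the cache while auto-splitting across GPUs
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7  # matches the temperature suggested in the card's front matter
print(generator.generate_simple("What's the difference between a banana and a strawberry?", settings, 128))
```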
Zoyd/failspy_Phi-3-mini-4k-geminified-4_0bpw_exl2
Zoyd
2024-06-04T16:48:58Z
5
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "nlp", "code", "conversational", "custom_code", "multilingual", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "exl2", "region:us" ]
text-generation
2024-06-04T16:33:34Z
--- license: mit license_link: https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/resolve/main/LICENSE language: - multilingual pipeline_tag: text-generation tags: - nlp - code inference: parameters: temperature: 0.7 widget: - messages: - role: user content: What's the difference between a banana and a strawberry? --- **Exllamav2** quant (**exl2** / **4.0 bpw**) made with ExLlamaV2 v0.1.3 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-2_2bpw_exl2)**</center> | <center>1217 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-2_5bpw_exl2)**</center> | <center>1342 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-3_0bpw_exl2)**</center> | <center>1558 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-3_5bpw_exl2)**</center> | <center>1774 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-3_75bpw_exl2)**</center> | <center>1882 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-4_0bpw_exl2)**</center> | <center>1990 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-4_25bpw_exl2)**</center> | <center>2099 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-5_0bpw_exl2)**</center> | <center>2423 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-6_0bpw_exl2)**</center> | <center>2870 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-6_5bpw_exl2)**</center> | <center>3089 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-8_0bpw_exl2)**</center> | <center>3620 MB</center> | <center>8</center> | # Phi-3-mini-128k-instruct- ~~abliterated-v3~~ -geminified Credit for the name goes to [u/Anduin1357](https://www.reddit.com/user/Anduin1357/) on Reddit, who [wrote this comment](https://www.reddit.com/r/LocalLLaMA/comments/1cmh6ru/comment/l31zkan/). [My Jupyter "cookbook" to replicate the methodology can be found here; a refined library is coming soon](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb) ## What's this? Well, after my abliterated models, I figured I should cover all the possible ground of such work and introduce a model that acts like their polar opposite. This is the result, and I feel it lines up in performance with a certain search engine's AI model series. ## Summary This is [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) with orthogonalized bfloat16 safetensor weights, generated with a refined methodology based on the one described in the preview paper/blog post '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)', which I encourage you to read to learn more. This model has been orthogonalized to act more like certain rhymes-with-Shmemini models.
Zoyd/failspy_Phi-3-mini-4k-geminified-8_0bpw_exl2
Zoyd
2024-06-04T16:48:28Z
5
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "nlp", "code", "conversational", "custom_code", "multilingual", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "exl2", "region:us" ]
text-generation
2024-06-04T16:42:02Z
--- license: mit license_link: https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/resolve/main/LICENSE language: - multilingual pipeline_tag: text-generation tags: - nlp - code inference: parameters: temperature: 0.7 widget: - messages: - role: user content: What's the difference between a banana and a strawberry? --- **Exllamav2** quant (**exl2** / **8.0 bpw**) made with ExLlamaV2 v0.1.3 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-2_2bpw_exl2)**</center> | <center>1217 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-2_5bpw_exl2)**</center> | <center>1342 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-3_0bpw_exl2)**</center> | <center>1558 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-3_5bpw_exl2)**</center> | <center>1774 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-3_75bpw_exl2)**</center> | <center>1882 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-4_0bpw_exl2)**</center> | <center>1990 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-4_25bpw_exl2)**</center> | <center>2099 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-5_0bpw_exl2)**</center> | <center>2423 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-6_0bpw_exl2)**</center> | <center>2870 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-6_5bpw_exl2)**</center> | <center>3089 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-8_0bpw_exl2)**</center> | <center>3620 MB</center> | <center>8</center> | # Phi-3-mini-128k-instruct- ~~abliterated-v3~~ -geminified Credit for the name goes to [u/Anduin1357](https://www.reddit.com/user/Anduin1357/) on Reddit, who [wrote this comment](https://www.reddit.com/r/LocalLLaMA/comments/1cmh6ru/comment/l31zkan/). [My Jupyter "cookbook" to replicate the methodology can be found here; a refined library is coming soon](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb) ## What's this? Well, after my abliterated models, I figured I should cover all the possible ground of such work and introduce a model that acts like their polar opposite. This is the result, and I feel it lines up in performance with a certain search engine's AI model series. ## Summary This is [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) with orthogonalized bfloat16 safetensor weights, generated with a refined methodology based on the one described in the preview paper/blog post '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)', which I encourage you to read to learn more. This model has been orthogonalized to act more like certain rhymes-with-Shmemini models.
Zoyd/failspy_Phi-3-mini-4k-geminified-6_0bpw_exl2
Zoyd
2024-06-04T16:48:22Z
5
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "nlp", "code", "conversational", "custom_code", "multilingual", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "6-bit", "exl2", "region:us" ]
text-generation
2024-06-04T16:39:24Z
--- license: mit license_link: https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/resolve/main/LICENSE language: - multilingual pipeline_tag: text-generation tags: - nlp - code inference: parameters: temperature: 0.7 widget: - messages: - role: user content: What's the difference between a banana and a strawberry? --- **Exllamav2** quant (**exl2** / **6.0 bpw**) made with ExLlamaV2 v0.1.3 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-2_2bpw_exl2)**</center> | <center>1217 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-2_5bpw_exl2)**</center> | <center>1342 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-3_0bpw_exl2)**</center> | <center>1558 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-3_5bpw_exl2)**</center> | <center>1774 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-3_75bpw_exl2)**</center> | <center>1882 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-4_0bpw_exl2)**</center> | <center>1990 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-4_25bpw_exl2)**</center> | <center>2099 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-5_0bpw_exl2)**</center> | <center>2423 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-6_0bpw_exl2)**</center> | <center>2870 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-6_5bpw_exl2)**</center> | <center>3089 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-8_0bpw_exl2)**</center> | <center>3620 MB</center> | <center>8</center> | # Phi-3-mini-128k-instruct- ~~abliterated-v3~~ -geminified Credit for the name goes to [u/Anduin1357](https://www.reddit.com/user/Anduin1357/) on Reddit, who [wrote this comment](https://www.reddit.com/r/LocalLLaMA/comments/1cmh6ru/comment/l31zkan/). [My Jupyter "cookbook" to replicate the methodology can be found here; a refined library is coming soon](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb) ## What's this? Well, after my abliterated models, I figured I should cover all the possible ground of such work and introduce a model that acts like their polar opposite. This is the result, and I feel it lines up in performance with a certain search engine's AI model series. ## Summary This is [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) with orthogonalized bfloat16 safetensor weights, generated with a refined methodology based on the one described in the preview paper/blog post '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)', which I encourage you to read to learn more. This model has been orthogonalized to act more like certain rhymes-with-Shmemini models.
Zoyd/failspy_Phi-3-mini-4k-geminified-2_2bpw_exl2
Zoyd
2024-06-04T16:48:01Z
6
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "nlp", "code", "conversational", "custom_code", "multilingual", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-06-04T16:08:34Z
--- license: mit license_link: https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/resolve/main/LICENSE language: - multilingual pipeline_tag: text-generation tags: - nlp - code inference: parameters: temperature: 0.7 widget: - messages: - role: user content: What's the difference between a banana and a strawberry? --- **Exllamav2** quant (**exl2** / **2.2 bpw**) made with ExLlamaV2 v0.1.3 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-2_2bpw_exl2)**</center> | <center>1217 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-2_5bpw_exl2)**</center> | <center>1342 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-3_0bpw_exl2)**</center> | <center>1558 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-3_5bpw_exl2)**</center> | <center>1774 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-3_75bpw_exl2)**</center> | <center>1882 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-4_0bpw_exl2)**</center> | <center>1990 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-4_25bpw_exl2)**</center> | <center>2099 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-5_0bpw_exl2)**</center> | <center>2423 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-6_0bpw_exl2)**</center> | <center>2870 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-6_5bpw_exl2)**</center> | <center>3089 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-8_0bpw_exl2)**</center> | <center>3620 MB</center> | <center>8</center> | # Phi-3-mini-128k-instruct- ~~abliterated-v3~~ -geminified Credit for the name goes to [u/Anduin1357](https://www.reddit.com/user/Anduin1357/) on Reddit, who [wrote this comment](https://www.reddit.com/r/LocalLLaMA/comments/1cmh6ru/comment/l31zkan/). [My Jupyter "cookbook" to replicate the methodology can be found here; a refined library is coming soon](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb) ## What's this? Well, after my abliterated models, I figured I should cover all the possible ground of such work and introduce a model that acts like their polar opposite. This is the result, and I feel it lines up in performance with a certain search engine's AI model series. ## Summary This is [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) with orthogonalized bfloat16 safetensor weights, generated with a refined methodology based on the one described in the preview paper/blog post '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)', which I encourage you to read to learn more. This model has been orthogonalized to act more like certain rhymes-with-Shmemini models.
Likich/tinyllama-finetune-qualcoding_1000_prompt6
Likich
2024-06-04T16:40:44Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-06-04T16:40:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Firemedic15/Tsxi-V3
Firemedic15
2024-06-04T16:39:58Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-06-04T16:39:54Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Tsxi-V3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.44 +/- 2.68 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Firemedic15/Tsxi-V3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
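For completeness, a self-contained rollout sketch is below. It assumes the pickled file follows the Deep RL course format, i.e. a dict with `env_id` and `qtable` keys, and uses the classic `gym` step API; both are assumptions, not guaranteed by this card.

```python
import pickle

import gym
import numpy as np
from huggingface_hub import hf_hub_download

# Stand-in for the course's load_from_hub helper: fetch the file and unpickle it.
path = hf_hub_download(repo_id="Firemedic15/Tsxi-V3", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)  # assumed dict with "env_id" and "qtable"

env = gym.make(model["env_id"])  # "Taxi-v3"
state = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, done, _ = env.step(action)        # classic (pre-0.26) gym API
    total_reward += reward
print(f"episode return: {total_reward}")
```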
iloncka/exp_5_new_bg_simple-subs_1_v_5_vit_tiny_r_s16_p8_224.augreg_in21k_ft_in1k_ep_60
iloncka
2024-06-04T16:31:24Z
0
0
fastai
[ "fastai", "region:us" ]
null
2024-06-04T13:22:12Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
Likich/gemmainstruct-finetune-qualcoding_1000_prompt6
Likich
2024-06-04T16:29:14Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-06-04T16:29:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MohammadKhosravi/llama-3-8b-Instruct-bnb-4bit-Galilo-v.1.0
MohammadKhosravi
2024-06-04T16:28:30Z
6
0
peft
[ "peft", "tensorboard", "safetensors", "gguf", "llama", "trl", "sft", "unsloth", "generated_from_trainer", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:adapter:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-06-04T15:54:05Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - unsloth - generated_from_trainer base_model: unsloth/llama-3-8b-Instruct-bnb-4bit model-index: - name: llama-3-8b-Instruct-bnb-4bit-Galilo-v.1.0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama-3-8b-Instruct-bnb-4bit-Galilo-v.1.0 This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - training_steps: 0 ### Training results ### Framework versions - PEFT 0.11.1 - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
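Since this is a PEFT adapter on a 4-bit Llama-3 base, a plausible loading sketch with the `peft` library is shown below; the repo id comes from the card, while device placement and the presence of a tokenizer in the adapter repo are assumptions.

```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

repo = "MohammadKhosravi/llama-3-8b-Instruct-bnb-4bit-Galilo-v.1.0"
# AutoPeftModelForCausalLM reads the adapter config and pulls in the base model.
model = AutoPeftModelForCausalLM.from_pretrained(repo, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)  # assumes the tokenizer was pushed too

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```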
iloncka/exp_5_objects-subs_1_v_5_vit_tiny_r_s16_p8_224.augreg_in21k_ft_in1k_ep_60
iloncka
2024-06-04T16:25:35Z
0
0
fastai
[ "fastai", "region:us" ]
null
2024-06-04T12:46:29Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
yuzhe-123/zephyr-7b-sft-full
yuzhe-123
2024-06-04T16:25:05Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "sft", "generated_from_trainer", "conversational", "dataset:HuggingFaceH4/ultrachat_200k", "base_model:mistralai/Mistral-7B-v0.1", "base_model:finetune:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-02T09:09:19Z
--- license: apache-2.0 base_model: mistralai/Mistral-7B-v0.1 tags: - alignment-handbook - trl - sft - generated_from_trainer - trl - sft - generated_from_trainer datasets: - HuggingFaceH4/ultrachat_200k model-index: - name: zephyr-7b-sft-full results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-7b-sft-full This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/ultrachat_200k dataset. It achieves the following results on the evaluation set: - Loss: 0.9367 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.9203 | 1.0 | 1090 | 0.9367 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0 - Datasets 2.19.2 - Tokenizers 0.19.1
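A quick inference sketch for this checkpoint, assuming it ships the chat template the Zephyr SFT recipe normally applies:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "yuzhe-123/zephyr-7b-sft-full"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Explain supervised fine-tuning in one sentence."}]
# apply_chat_template formats the conversation the way the model was trained on
ids = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(ids, max_new_tokens=64)
print(tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True))
```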
magnifi/phi-3-mini-4k-instruct-attribute-output-4-0603-epoch7-v4mod4-0.002_fulldata
magnifi
2024-06-04T16:18:01Z
79
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-06-04T16:16:12Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit --- # Uploaded model - **Developed by:** magnifi - **License:** apache-2.0 - **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
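Since the card says the model was trained with Unsloth, loading it back through Unsloth's fast path might look like this; the sequence length and the 4-bit flag are assumptions based on the base model's name:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="magnifi/phi-3-mini-4k-instruct-attribute-output-4-0603-epoch7-v4mod4-0.002_fulldata",
    max_seq_length=4096,  # assumption: matches the 4k base context
    load_in_4bit=True,    # assumption: mirrors the bnb-4bit base
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's fast inference kernels
```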
manbull/Qwen-Qwen1.5-0.5B-1717517619
manbull
2024-06-04T16:14:25Z
143
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-04T16:13:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RefalMachine/ruadapt_llama3_bpe_extended_part1-2_vo_1e4_no_wd_bs256
RefalMachine
2024-06-04T16:12:28Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-03T18:40:39Z
--- library_name: transformers license: llama3 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PaDaS-Lab/Llama3-8B-SPARQL-annotated
PaDaS-Lab
2024-06-04T16:11:03Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-04T15:54:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
iloncka/exp_5_new_bg_simple-subs_1_v_5_eva02_tiny_patch14_224.mim_in22k_ep_60
iloncka
2024-06-04T16:10:54Z
0
0
fastai
[ "fastai", "region:us" ]
null
2024-06-03T12:46:48Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
Daytona-health-ml/dalai-llama-v1
Daytona-health-ml
2024-06-04T16:08:00Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "autotrain", "text-generation-inference", "text-generation", "peft", "conversational", "base_model:meta-llama/Llama-2-7b-hf", "base_model:finetune:meta-llama/Llama-2-7b-hf", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2024-06-04T13:48:23Z
--- tags: - autotrain - text-generation-inference - text-generation - peft library_name: transformers base_model: meta-llama/Llama-2-7b-hf widget: - messages: - role: user content: What is your favorite condiment? license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
m1b/2024_06_04_act_reachy2_teleop_remi_aug_35K
m1b
2024-06-04T16:07:45Z
51
0
transformers
[ "transformers", "safetensors", "pytorch_model_hub_mixin", "model_hub_mixin", "endpoints_compatible", "region:us" ]
null
2024-06-04T16:07:18Z
--- tags: - pytorch_model_hub_mixin - model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
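As a sketch of what the PyTorchModelHubMixin integration means in practice, the pattern below shows the save/load round-trip the mixin provides. The class, its constructor, and its arguments are hypothetical; loading this particular checkpoint requires the matching class from the training code.

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class TinyPolicy(nn.Module, PyTorchModelHubMixin):  # hypothetical example class
    def __init__(self, in_dim: int = 16, out_dim: int = 4):
        super().__init__()
        self.net = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        return self.net(x)

policy = TinyPolicy()
# The mixin adds these two methods to any nn.Module subclass:
# policy.push_to_hub("your-user/your-repo")            # serialize config + weights
# policy = TinyPolicy.from_pretrained("your-user/your-repo")
```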
lalacelik/BirdClef-wav2vec
lalacelik
2024-06-04T16:07:15Z
104
0
transformers
[ "transformers", "pytorch", "wav2vec2", "audio-classification", "generated_from_trainer", "base_model:facebook/wav2vec2-base-960h", "base_model:finetune:facebook/wav2vec2-base-960h", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2024-05-31T13:14:43Z
---
license: apache-2.0
base_model: facebook/wav2vec2-base-960h
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BirdClef-wav2vec
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# BirdClef-wav2vec

This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the BirdCLEF24 dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.0204
- Loss: 4.6406

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step  | Accuracy | Validation Loss |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|
| 4.6298        | 1.0   | 8153  | 0.0204   | 4.6469          |
| 4.649         | 2.0   | 16306 | 0.0204   | 4.6439          |
| 4.6759        | 3.0   | 24459 | 0.0204   | 4.6406          |

### Framework versions

- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
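### Inference sketch

A minimal, untested sketch of running this checkpoint through the `audio-classification` pipeline; the input file path is a placeholder:

```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="lalacelik/BirdClef-wav2vec")

# Classify a recording (hypothetical path); returns the top-scoring labels
predictions = classifier("path/to/bird_recording.wav", top_k=5)
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```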
exlleysantos/verball-question-answer
exlleysantos
2024-06-04T16:06:38Z
77
0
transformers
[ "transformers", "safetensors", "olmo", "text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-06-04T11:01:53Z
---
tags:
- generated_from_trainer
model-index:
- name: verball-question-answer
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# verball-question-answer

This model is a fine-tuned version of an unspecified base model on an unspecified dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch mirroring these settings follows this card):
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.37.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.15.1
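A hedged sketch (not the authors' actual training script) of how the hyperparameters listed above map onto `transformers.TrainingArguments`; the output directory is a hypothetical placeholder:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="verball-question-answer",  # hypothetical output path
    learning_rate=5e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,  # 8 x 8 = total train batch size of 64
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=1,
    fp16=True,  # "Native AMP" mixed precision
    # Adam betas=(0.9, 0.999) and epsilon=1e-8 are the TrainingArguments defaults
)
```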
RIyacoool/ssss
RIyacoool
2024-06-04T16:01:33Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-06-04T16:01:33Z
---
license: apache-2.0
---
Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-5_0bpw_exl2
Zoyd
2024-06-04T15:57:02Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "en", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "5-bit", "exl2", "region:us" ]
text-generation
2024-06-04T15:19:33Z
---
base_model:
- Nitral-AI/Poppy-1.35-Phase1
- Nitral-AI/Pp-72xra1
library_name: transformers
tags:
- mergekit
- merge
license: other
language:
- en
---

**Exllamav2** quant (**exl2** / **5.0 bpw**) made with ExLlamaV2 v0.1.3

Other EXL2 quants:

| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-2_2bpw_exl2)**</center> | <center>3250 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-2_5bpw_exl2)**</center> | <center>3478 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-3_0bpw_exl2)**</center> | <center>3894 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-3_5bpw_exl2)**</center> | <center>4311 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-3_75bpw_exl2)**</center> | <center>4518 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-4_0bpw_exl2)**</center> | <center>4727 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-4_25bpw_exl2)**</center> | <center>4935 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-5_0bpw_exl2)**</center> | <center>5559 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-6_0bpw_exl2)**</center> | <center>6489 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-6_5bpw_exl2)**</center> | <center>6909 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-8_0bpw_exl2)**</center> | <center>8123 MB</center> | <center>8</center> |

![image/png](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/Boje781GkTdYgORTYGI6r.png)

# "Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in interactive and engaging adventures, tailoring each one to their individual preferences.

# Note: This variant is an attempt to get something closer to 0.72 while maintaining the improvements of 1.30.

# Presets: [Presets in repo folder](https://huggingface.co/Nitral-AI/Poppy_Porpoise-1.0-L3-8B/tree/main/Porpoise_1.0-Presets).

# If you want to use vision functionality, you must use the latest version of [Koboldcpp](https://github.com/LostRuins/koboldcpp) and load the specified **mmproj** file: [Llava MMProj](https://huggingface.co/Nitral-AI/Llama-3-Update-2.0-mmproj-model-f16).

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
- sources:
  - model: Nitral-AI/Pp-72xra1
    layer_range: [0, 32]
  - model: Nitral-AI/Poppy-1.35-Phase1
    layer_range: [0, 32]
merge_method: slerp
base_model: Nitral-AI/Pp-72xra1
parameters:
  t:
  - filter: self_attn
    value: [0, 0.5, 0.3, 0.7, 1]
  - filter: mlp
    value: [1, 0.5, 0.7, 0.3, 0]
  - value: 0.5
dtype: bfloat16
```
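### Loading the quant

A minimal sketch, untested, loosely based on ExLlamaV2's bundled example scripts; exact class and method names may differ between ExLlamaV2 versions, and the model directory below is a placeholder for wherever you downloaded this 5.0 bpw quant:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./Poppy_Porpoise-1.4-L3-8B-5_0bpw_exl2"  # local download path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

print(generator.generate_simple("Once upon a time,", settings, 128))
```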
bblain/deberta-v3-large-ocean-clf
bblain
2024-06-04T15:56:59Z
107
0
transformers
[ "transformers", "safetensors", "deberta-v2", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-04T15:50:56Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-6_5bpw_exl2
Zoyd
2024-06-04T15:56:02Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "en", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-06-04T15:41:30Z
---
base_model:
- Nitral-AI/Poppy-1.35-Phase1
- Nitral-AI/Pp-72xra1
library_name: transformers
tags:
- mergekit
- merge
license: other
language:
- en
---

**Exllamav2** quant (**exl2** / **6.5 bpw**) made with ExLlamaV2 v0.1.3

Other EXL2 quants:

| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-2_2bpw_exl2)**</center> | <center>3250 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-2_5bpw_exl2)**</center> | <center>3478 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-3_0bpw_exl2)**</center> | <center>3894 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-3_5bpw_exl2)**</center> | <center>4311 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-3_75bpw_exl2)**</center> | <center>4518 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-4_0bpw_exl2)**</center> | <center>4727 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-4_25bpw_exl2)**</center> | <center>4935 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-5_0bpw_exl2)**</center> | <center>5559 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-6_0bpw_exl2)**</center> | <center>6489 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-6_5bpw_exl2)**</center> | <center>6909 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-8_0bpw_exl2)**</center> | <center>8123 MB</center> | <center>8</center> |

![image/png](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/Boje781GkTdYgORTYGI6r.png)

# "Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in interactive and engaging adventures, tailoring each one to their individual preferences.

# Note: This variant is an attempt to get something closer to 0.72 while maintaining the improvements of 1.30.

# Presets: [Presets in repo folder](https://huggingface.co/Nitral-AI/Poppy_Porpoise-1.0-L3-8B/tree/main/Porpoise_1.0-Presets).

# If you want to use vision functionality, you must use the latest version of [Koboldcpp](https://github.com/LostRuins/koboldcpp) and load the specified **mmproj** file: [Llava MMProj](https://huggingface.co/Nitral-AI/Llama-3-Update-2.0-mmproj-model-f16).

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
- sources:
  - model: Nitral-AI/Pp-72xra1
    layer_range: [0, 32]
  - model: Nitral-AI/Poppy-1.35-Phase1
    layer_range: [0, 32]
merge_method: slerp
base_model: Nitral-AI/Pp-72xra1
parameters:
  t:
  - filter: self_attn
    value: [0, 0.5, 0.3, 0.7, 1]
  - filter: mlp
    value: [1, 0.5, 0.7, 0.3, 0]
  - value: 0.5
dtype: bfloat16
```
Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-2_5bpw_exl2
Zoyd
2024-06-04T15:55:42Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "en", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-06-04T14:31:09Z
---
base_model:
- Nitral-AI/Poppy-1.35-Phase1
- Nitral-AI/Pp-72xra1
library_name: transformers
tags:
- mergekit
- merge
license: other
language:
- en
---

**Exllamav2** quant (**exl2** / **2.5 bpw**) made with ExLlamaV2 v0.1.3

Other EXL2 quants:

| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-2_2bpw_exl2)**</center> | <center>3250 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-2_5bpw_exl2)**</center> | <center>3478 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-3_0bpw_exl2)**</center> | <center>3894 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-3_5bpw_exl2)**</center> | <center>4311 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-3_75bpw_exl2)**</center> | <center>4518 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-4_0bpw_exl2)**</center> | <center>4727 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-4_25bpw_exl2)**</center> | <center>4935 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-5_0bpw_exl2)**</center> | <center>5559 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-6_0bpw_exl2)**</center> | <center>6489 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-6_5bpw_exl2)**</center> | <center>6909 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-8_0bpw_exl2)**</center> | <center>8123 MB</center> | <center>8</center> |

![image/png](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/Boje781GkTdYgORTYGI6r.png)

# "Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in interactive and engaging adventures, tailoring each one to their individual preferences.

# Note: This variant is an attempt to get something closer to 0.72 while maintaining the improvements of 1.30.

# Presets: [Presets in repo folder](https://huggingface.co/Nitral-AI/Poppy_Porpoise-1.0-L3-8B/tree/main/Porpoise_1.0-Presets).

# If you want to use vision functionality, you must use the latest version of [Koboldcpp](https://github.com/LostRuins/koboldcpp) and load the specified **mmproj** file: [Llava MMProj](https://huggingface.co/Nitral-AI/Llama-3-Update-2.0-mmproj-model-f16).

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
- sources:
  - model: Nitral-AI/Pp-72xra1
    layer_range: [0, 32]
  - model: Nitral-AI/Poppy-1.35-Phase1
    layer_range: [0, 32]
merge_method: slerp
base_model: Nitral-AI/Pp-72xra1
parameters:
  t:
  - filter: self_attn
    value: [0, 0.5, 0.3, 0.7, 1]
  - filter: mlp
    value: [1, 0.5, 0.7, 0.3, 0]
  - value: 0.5
dtype: bfloat16
```
hdve/google-gemma-2b-1717516351
hdve
2024-06-04T15:54:48Z
143
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-04T15:52:32Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-3_75bpw_exl2
Zoyd
2024-06-04T15:54:46Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "en", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-06-04T14:54:44Z
---
base_model:
- Nitral-AI/Poppy-1.35-Phase1
- Nitral-AI/Pp-72xra1
library_name: transformers
tags:
- mergekit
- merge
license: other
language:
- en
---

**Exllamav2** quant (**exl2** / **3.75 bpw**) made with ExLlamaV2 v0.1.3

Other EXL2 quants:

| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-2_2bpw_exl2)**</center> | <center>3250 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-2_5bpw_exl2)**</center> | <center>3478 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-3_0bpw_exl2)**</center> | <center>3894 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-3_5bpw_exl2)**</center> | <center>4311 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-3_75bpw_exl2)**</center> | <center>4518 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-4_0bpw_exl2)**</center> | <center>4727 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-4_25bpw_exl2)**</center> | <center>4935 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-5_0bpw_exl2)**</center> | <center>5559 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-6_0bpw_exl2)**</center> | <center>6489 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-6_5bpw_exl2)**</center> | <center>6909 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/Nitral-AI_Poppy_Porpoise-1.4-L3-8B-8_0bpw_exl2)**</center> | <center>8123 MB</center> | <center>8</center> |

![image/png](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/Boje781GkTdYgORTYGI6r.png)

# "Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in interactive and engaging adventures, tailoring each one to their individual preferences.

# Note: This variant is an attempt to get something closer to 0.72 while maintaining the improvements of 1.30.

# Presets: [Presets in repo folder](https://huggingface.co/Nitral-AI/Poppy_Porpoise-1.0-L3-8B/tree/main/Porpoise_1.0-Presets).

# If you want to use vision functionality, you must use the latest version of [Koboldcpp](https://github.com/LostRuins/koboldcpp) and load the specified **mmproj** file: [Llava MMProj](https://huggingface.co/Nitral-AI/Llama-3-Update-2.0-mmproj-model-f16).

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
- sources:
  - model: Nitral-AI/Pp-72xra1
    layer_range: [0, 32]
  - model: Nitral-AI/Poppy-1.35-Phase1
    layer_range: [0, 32]
merge_method: slerp
base_model: Nitral-AI/Pp-72xra1
parameters:
  t:
  - filter: self_attn
    value: [0, 0.5, 0.3, 0.7, 1]
  - filter: mlp
    value: [1, 0.5, 0.7, 0.3, 0]
  - value: 0.5
dtype: bfloat16
```
tanchcliff/openai-whisper-large-v2-LORA-colab
tanchcliff
2024-06-04T15:53:47Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-06-04T15:53:34Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
dasayantan/q-Taxi
dasayantan
2024-06-04T15:50:35Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-06-04T15:50:32Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.46 +/- 2.73
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="dasayantan/q-Taxi", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
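The `load_from_hub` helper used above is defined in the Deep RL course notebook rather than in a published library. A hedged sketch of that helper and a greedy rollout of the loaded Q-table follows; the `"qtable"` key name and the use of `gymnasium` are assumptions based on the course materials:

```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled model dict (Q-table, env_id, etc.) from the Hub."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="dasayantan/q-Taxi", filename="q-learning.pkl")
env = gym.make(model["env_id"])

state, _ = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action; key name assumed
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```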