| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
lmstudio-community/OpenCoder-1.5B-Instruct-GGUF
|
lmstudio-community
| 2024-11-11T02:00:32Z | 55 | 0 | null |
[
"gguf",
"text-generation",
"en",
"zh",
"dataset:OpenCoder-LLM/opencoder-sft-stage1",
"dataset:OpenCoder-LLM/opencoder-sft-stage2",
"base_model:infly/OpenCoder-1.5B-Instruct",
"base_model:quantized:infly/OpenCoder-1.5B-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-11T01:50:17Z |
---
quantized_by: bartowski
pipeline_tag: text-generation
license_name: inf
datasets:
- OpenCoder-LLM/opencoder-sft-stage1
- OpenCoder-LLM/opencoder-sft-stage2
language:
- en
- zh
license_link: https://huggingface.co/infly/OpenCoder-1.5B-Instruct/blob/main/LICENSE
base_model: infly/OpenCoder-1.5B-Instruct
license: other
---
## 💫 Community Model> OpenCoder 1.5B Instruct by Infly
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [infly](https://huggingface.co/infly)<br>
**Original model**: [OpenCoder-1.5B-Instruct](https://huggingface.co/infly/OpenCoder-1.5B-Instruct)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b4014](https://github.com/ggerganov/llama.cpp/releases/tag/b4014)<br>
## Technical Details
- Supports English and Chinese prompting
- Trained on 2.5 trillion tokens (90% raw code, 10% code-related web data), followed by SFT on 4.5 million high-quality examples
- Context length of 8k
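To pull one of the GGUF quantizations from this repo programmatically (outside of LM Studio), a minimal sketch with `huggingface_hub` is shown below; it is illustrative only and assumes the package is installed, listing the repo's actual `.gguf` files rather than hard-coding a filename.
```python
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "lmstudio-community/OpenCoder-1.5B-Instruct-GGUF"

# Enumerate the quantization files actually published in this repo,
# then download one of them into the local Hugging Face cache.
gguf_files = sorted(f for f in list_repo_files(repo_id) if f.endswith(".gguf"))
print(gguf_files)
local_path = hf_hub_download(repo_id=repo_id, filename=gguf_files[0])
print(local_path)
```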
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
Triangle104/Llama-3.2-1B-Q8_0-GGUF
|
Triangle104
| 2024-11-11T02:00:26Z | 7 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-3",
"llama",
"meta",
"facebook",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:unsloth/Llama-3.2-1B",
"base_model:quantized:unsloth/Llama-3.2-1B",
"license:llama3.2",
"endpoints_compatible",
"region:us"
] | null | 2024-11-11T01:59:45Z |
---
base_model: unsloth/Llama-3.2-1B
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama-3.2-1B-Q8_0-GGUF
This model was converted to GGUF format from [`unsloth/Llama-3.2-1B`](https://huggingface.co/unsloth/Llama-3.2-1B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/unsloth/Llama-3.2-1B) for more details on the model.
---
Model details:

Finetune Llama 3.2, Gemma 2, and Mistral 2-5x faster with 70% less memory via Unsloth!

Special Thanks: A huge thank you to the Meta and Llama team for creating and releasing these models.

Model Information:
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
Model developer: Meta
Model Architecture: Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
Llama 3.2 family of models: Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date: Sept 25, 2024
Status: This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
License: Use of Llama 3.2 is governed by the Llama 3.2 Community License (a custom, commercial license agreement).
Where to send questions or comments about the model: Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go here.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.2-1B-Q8_0-GGUF --hf-file llama-3.2-1b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.2-1B-Q8_0-GGUF --hf-file llama-3.2-1b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.2-1B-Q8_0-GGUF --hf-file llama-3.2-1b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.2-1B-Q8_0-GGUF --hf-file llama-3.2-1b-q8_0.gguf -c 2048
```
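If you would rather call the model from Python than the CLI, a minimal sketch using the third-party `llama-cpp-python` bindings is shown below; it assumes `llama-cpp-python` and `huggingface_hub` are installed and reuses the repo and file names from the commands above.
```python
from llama_cpp import Llama

# Download the GGUF from the Hub and load it; n_ctx mirrors the -c 2048 used above.
llm = Llama.from_pretrained(
    repo_id="Triangle104/Llama-3.2-1B-Q8_0-GGUF",
    filename="llama-3.2-1b-q8_0.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```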
|
Triangle104/Llama-3.2-1B-Q6_K-GGUF
|
Triangle104
| 2024-11-11T01:59:46Z | 8 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-3",
"llama",
"meta",
"facebook",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:unsloth/Llama-3.2-1B",
"base_model:quantized:unsloth/Llama-3.2-1B",
"license:llama3.2",
"endpoints_compatible",
"region:us"
] | null | 2024-11-11T01:59:07Z |
---
base_model: unsloth/Llama-3.2-1B
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama-3.2-1B-Q6_K-GGUF
This model was converted to GGUF format from [`unsloth/Llama-3.2-1B`](https://huggingface.co/unsloth/Llama-3.2-1B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/unsloth/Llama-3.2-1B) for more details on the model.
---
Model details:

Finetune Llama 3.2, Gemma 2, and Mistral 2-5x faster with 70% less memory via Unsloth!

Special Thanks: A huge thank you to the Meta and Llama team for creating and releasing these models.

Model Information:
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
Model developer: Meta
Model Architecture: Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
Llama 3.2 family of models: Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date: Sept 25, 2024
Status: This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
License: Use of Llama 3.2 is governed by the Llama 3.2 Community License (a custom, commercial license agreement).
Where to send questions or comments about the model: Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go here.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.2-1B-Q6_K-GGUF --hf-file llama-3.2-1b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.2-1B-Q6_K-GGUF --hf-file llama-3.2-1b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.2-1B-Q6_K-GGUF --hf-file llama-3.2-1b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.2-1B-Q6_K-GGUF --hf-file llama-3.2-1b-q6_k.gguf -c 2048
```
|
huwhitememes/timwalz-lora
|
huwhitememes
| 2024-11-11T01:59:04Z | 13 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-08-30T02:34:12Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
instance_prompt: Tim Walz
widget:
- text: >-
Tim Walz, black rimmed prescription glasses, ruffled business suite, loose
tie, unbuttoned shirt, dirty clothes, down on his luck, sad, drunkard,
wasted, sloppy drunk, neon street light, prostitutes in the background, city
nightlife and crime scenery
output:
url: images/example_szc8vydmv.png
---
# Timwalz Lora
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Tim Walz` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline, then attach this LoRA adapter.
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('huwhitememes/timwalz-lora', weight_name='lora.safetensors')

# Include the trigger words "Tim Walz" in the prompt (see above).
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
lmstudio-community/OpenCoder-8B-Instruct-GGUF
|
lmstudio-community
| 2024-11-11T01:57:54Z | 80 | 1 | null |
[
"gguf",
"text-generation",
"en",
"zh",
"dataset:OpenCoder-LLM/opencoder-sft-stage1",
"dataset:OpenCoder-LLM/opencoder-sft-stage2",
"base_model:infly/OpenCoder-8B-Instruct",
"base_model:quantized:infly/OpenCoder-8B-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-11T01:49:36Z |
---
quantized_by: bartowski
pipeline_tag: text-generation
license_name: inf
datasets:
- OpenCoder-LLM/opencoder-sft-stage1
- OpenCoder-LLM/opencoder-sft-stage2
language:
- en
- zh
license_link: https://huggingface.co/infly/OpenCoder-8B-Instruct/blob/main/LICENSE
base_model: infly/OpenCoder-8B-Instruct
license: other
---
## 💫 Community Model> OpenCoder 8B Instruct by Infly
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [infly](https://huggingface.co/infly)<br>
**Original model**: [OpenCoder-8B-Instruct](https://huggingface.co/infly/OpenCoder-8B-Instruct)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b4014](https://github.com/ggerganov/llama.cpp/releases/tag/b4014)<br>
## Technical Details
- Supports English and Chinese prompting
- Trained on 2.5 trillion tokens (90% raw code, 10% code-related web data), followed by SFT on 4.5 million high-quality examples
- Context length of 8k
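For a chat-style call from Python, a hedged sketch with the third-party `llama-cpp-python` bindings follows; the quantization filename is a glob-pattern assumption, so check the repo's file list for the exact name.
```python
from llama_cpp import Llama

# The filename pattern below is an assumption; pick an actual .gguf from the repo file list.
llm = Llama.from_pretrained(
    repo_id="lmstudio-community/OpenCoder-8B-Instruct-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=8192,  # matches the 8k context length noted above
)
resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}]
)
print(resp["choices"][0]["message"]["content"])
```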
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
featherless-ai-quants/YenJung-CPE_chatbot-GGUF
|
featherless-ai-quants
| 2024-11-11T01:57:44Z | 15 | 0 | null |
[
"gguf",
"text-generation",
"base_model:YenJung/CPE_chatbot",
"base_model:quantized:YenJung/CPE_chatbot",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-07T06:43:11Z |
---
base_model: YenJung/CPE_chatbot
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# YenJung/CPE_chatbot GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [YenJung-CPE_chatbot-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/YenJung-CPE_chatbot-GGUF/blob/main/YenJung-CPE_chatbot-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [YenJung-CPE_chatbot-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/YenJung-CPE_chatbot-GGUF/blob/main/YenJung-CPE_chatbot-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [YenJung-CPE_chatbot-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/YenJung-CPE_chatbot-GGUF/blob/main/YenJung-CPE_chatbot-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [YenJung-CPE_chatbot-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/YenJung-CPE_chatbot-GGUF/blob/main/YenJung-CPE_chatbot-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [YenJung-CPE_chatbot-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/YenJung-CPE_chatbot-GGUF/blob/main/YenJung-CPE_chatbot-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [YenJung-CPE_chatbot-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/YenJung-CPE_chatbot-GGUF/blob/main/YenJung-CPE_chatbot-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [YenJung-CPE_chatbot-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/YenJung-CPE_chatbot-GGUF/blob/main/YenJung-CPE_chatbot-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [YenJung-CPE_chatbot-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/YenJung-CPE_chatbot-GGUF/blob/main/YenJung-CPE_chatbot-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [YenJung-CPE_chatbot-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/YenJung-CPE_chatbot-GGUF/blob/main/YenJung-CPE_chatbot-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [YenJung-CPE_chatbot-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/YenJung-CPE_chatbot-GGUF/blob/main/YenJung-CPE_chatbot-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [YenJung-CPE_chatbot-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/YenJung-CPE_chatbot-GGUF/blob/main/YenJung-CPE_chatbot-Q8_0.gguf) | 8145.11 MB |
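As a hedged example of fetching one of the files listed above from Python (assuming `huggingface_hub` is installed), the Q4_K_M entry can be downloaded like this:
```python
from huggingface_hub import hf_hub_download

# Fetch the Q4_K_M quantization from the table above (~4.7 GB).
path = hf_hub_download(
    repo_id="featherless-ai-quants/YenJung-CPE_chatbot-GGUF",
    filename="YenJung-CPE_chatbot-Q4_K_M.gguf",
)
print(path)
```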
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
rawsh/mirrorqwen2.5-0.5b-SimPO-1
|
rawsh
| 2024-11-11T01:55:22Z | 140 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"cpo",
"unsloth",
"arxiv:2401.08417",
"base_model:rawsh/mirrorqwen2.5-0.5b-SimPO-0",
"base_model:finetune:rawsh/mirrorqwen2.5-0.5b-SimPO-0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-10T23:45:17Z |
---
base_model: rawsh/mirrorqwen2.5-0.5b-SimPO-0
library_name: transformers
model_name: mirrorqwen2.5-0.5b-SimPO-1
tags:
- generated_from_trainer
- trl
- cpo
- unsloth
licence: license
---
# Model Card for mirrorqwen2.5-0.5b-SimPO-1
This model is a fine-tuned version of [rawsh/mirrorqwen2.5-0.5b-SimPO-0](https://huggingface.co/rawsh/mirrorqwen2.5-0.5b-SimPO-0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rawsh/mirrorqwen2.5-0.5b-SimPO-1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dankgpt/simpo-training/runs/tq03rlku)
This model was trained with CPO, a method introduced in [Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation](https://huggingface.co/papers/2401.08417).
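The training data and hyperparameters for this run are not documented here, so the snippet below is only a minimal, hypothetical sketch of how a CPO run with the SimPO loss is wired up in TRL 0.12; the dataset contents and config values are placeholders, not the settings used for this checkpoint.
```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import CPOConfig, CPOTrainer

# Placeholder preference data; CPO expects prompt/chosen/rejected columns.
train_dataset = Dataset.from_dict({
    "prompt": ["What is 2 + 2?"],
    "chosen": ["2 + 2 equals 4."],
    "rejected": ["2 + 2 equals 5."],
})

base = "rawsh/mirrorqwen2.5-0.5b-SimPO-0"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# loss_type="simpo" with cpo_alpha=0.0 selects the SimPO variant of the CPO objective.
args = CPOConfig(
    output_dir="mirrorqwen2.5-0.5b-SimPO-1",
    loss_type="simpo",
    cpo_alpha=0.0,
    per_device_train_batch_size=1,
    num_train_epochs=1,
)
trainer = CPOTrainer(model=model, args=args, train_dataset=train_dataset, processing_class=tokenizer)
trainer.train()
```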
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.2
- Pytorch: 2.4.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite CPO as:
```bibtex
@inproceedings{xu2024contrastive,
title = {{Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation}},
author = {Haoran Xu and Amr Sharaf and Yunmo Chen and Weiting Tan and Lingfeng Shen and Benjamin Van Durme and Kenton Murray and Young Jin Kim},
year = 2024,
booktitle = {Forty-first International Conference on Machine Learning, {ICML} 2024, Vienna, Austria, July 21-27, 2024},
publisher = {OpenReview.net},
url = {https://openreview.net/forum?id=51iwkioZpn}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Triangle104/Llama-3.2-1B-Instruct-Q6_K-GGUF
|
Triangle104
| 2024-11-11T01:50:33Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-3",
"llama",
"meta",
"facebook",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:quantized:unsloth/Llama-3.2-1B-Instruct",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-11T01:50:02Z |
---
base_model: unsloth/Llama-3.2-1B-Instruct
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama-3.2-1B-Instruct-Q6_K-GGUF
This model was converted to GGUF format from [`unsloth/Llama-3.2-1B-Instruct`](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) for more details on the model.
---
Model details:

Finetune Llama 3.2, Gemma 2, and Mistral 2-5x faster with 70% less memory via Unsloth!

Special Thanks: A huge thank you to the Meta and Llama team for creating and releasing these models.

Model Information:
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
Model developer: Meta
Model Architecture: Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
Llama 3.2 family of models: Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date: Sept 25, 2024
Status: This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
License: Use of Llama 3.2 is governed by the Llama 3.2 Community License (a custom, commercial license agreement).
Where to send questions or comments about the model: Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go here.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.2-1B-Instruct-Q6_K-GGUF --hf-file llama-3.2-1b-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.2-1B-Instruct-Q6_K-GGUF --hf-file llama-3.2-1b-instruct-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.2-1B-Instruct-Q6_K-GGUF --hf-file llama-3.2-1b-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.2-1B-Instruct-Q6_K-GGUF --hf-file llama-3.2-1b-instruct-q6_k.gguf -c 2048
```
|
Triangle104/Llama-3.2-1B-Instruct-Q5_K_M-GGUF
|
Triangle104
| 2024-11-11T01:49:58Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-3",
"llama",
"meta",
"facebook",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:quantized:unsloth/Llama-3.2-1B-Instruct",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-11T01:49:22Z |
---
base_model: unsloth/Llama-3.2-1B-Instruct
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama-3.2-1B-Instruct-Q5_K_M-GGUF
This model was converted to GGUF format from [`unsloth/Llama-3.2-1B-Instruct`](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) for more details on the model.
---
Model details:

Finetune Llama 3.2, Gemma 2, and Mistral 2-5x faster with 70% less memory via Unsloth!

Special Thanks: A huge thank you to the Meta and Llama team for creating and releasing these models.

Model Information:
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
Model developer: Meta
Model Architecture: Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
Llama 3.2 family of models: Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date: Sept 25, 2024
Status: This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
License: Use of Llama 3.2 is governed by the Llama 3.2 Community License (a custom, commercial license agreement).
Where to send questions or comments about the model: Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go here.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.2-1B-Instruct-Q5_K_M-GGUF --hf-file llama-3.2-1b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.2-1B-Instruct-Q5_K_M-GGUF --hf-file llama-3.2-1b-instruct-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.2-1B-Instruct-Q5_K_M-GGUF --hf-file llama-3.2-1b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.2-1B-Instruct-Q5_K_M-GGUF --hf-file llama-3.2-1b-instruct-q5_k_m.gguf -c 2048
```
|
Triangle104/Llama-3.2-3B-Instruct-Q8_0-GGUF
|
Triangle104
| 2024-11-11T01:47:07Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-3",
"llama",
"meta",
"facebook",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:quantized:unsloth/Llama-3.2-3B-Instruct",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-11T01:46:12Z |
---
base_model: unsloth/Llama-3.2-3B-Instruct
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama-3.2-3B-Instruct-Q8_0-GGUF
This model was converted to GGUF format from [`unsloth/Llama-3.2-3B-Instruct`](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct) for more details on the model.
---
Model details:

Finetune Llama 3.2, Gemma 2, and Mistral 2-5x faster with 70% less memory via Unsloth!

Special Thanks: A huge thank you to the Meta and Llama team for creating and releasing these models.

Model Information:
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
Model developer: Meta
Model Architecture: Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
Llama 3.2 family of models: Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date: Sept 25, 2024
Status: This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
License: Use of Llama 3.2 is governed by the Llama 3.2 Community License (a custom, commercial license agreement).
Where to send questions or comments about the model: Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go here.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q8_0-GGUF --hf-file llama-3.2-3b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q8_0-GGUF --hf-file llama-3.2-3b-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q8_0-GGUF --hf-file llama-3.2-3b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q8_0-GGUF --hf-file llama-3.2-3b-instruct-q8_0.gguf -c 2048
```
|
Triangle104/Llama-3.2-3B-Instruct-Q6_K-GGUF
|
Triangle104
| 2024-11-11T01:45:54Z | 15 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-3",
"llama",
"meta",
"facebook",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:quantized:unsloth/Llama-3.2-3B-Instruct",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-11T01:45:12Z |
---
base_model: unsloth/Llama-3.2-3B-Instruct
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama-3.2-3B-Instruct-Q6_K-GGUF
This model was converted to GGUF format from [`unsloth/Llama-3.2-3B-Instruct`](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct) for more details on the model.
---
Model details:

Finetune Llama 3.2, Gemma 2, and Mistral 2-5x faster with 70% less memory via Unsloth!

Special Thanks: A huge thank you to the Meta and Llama team for creating and releasing these models.

Model Information:
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
Model developer: Meta
Model Architecture: Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
Llama 3.2 family of models: Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date: Sept 25, 2024
Status: This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
License: Use of Llama 3.2 is governed by the Llama 3.2 Community License (a custom, commercial license agreement).
Where to send questions or comments about the model: Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go here.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q6_K-GGUF --hf-file llama-3.2-3b-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q6_K-GGUF --hf-file llama-3.2-3b-instruct-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q6_K-GGUF --hf-file llama-3.2-3b-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q6_K-GGUF --hf-file llama-3.2-3b-instruct-q6_k.gguf -c 2048
```
|
Triangle104/Llama-3.2-3B-Instruct-Q5_K_M-GGUF
|
Triangle104
| 2024-11-11T01:44:44Z | 6 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-3",
"llama",
"meta",
"facebook",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:quantized:unsloth/Llama-3.2-3B-Instruct",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-11T01:44:06Z |
---
base_model: unsloth/Llama-3.2-3B-Instruct
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama-3.2-3B-Instruct-Q5_K_M-GGUF
This model was converted to GGUF format from [`unsloth/Llama-3.2-3B-Instruct`](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct) for more details on the model.
---
Model details:

Finetune Llama 3.2, Gemma 2, and Mistral 2-5x faster with 70% less memory via Unsloth!

Special Thanks: A huge thank you to the Meta and Llama team for creating and releasing these models.

Model Information:
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
Model developer: Meta
Model Architecture: Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
Llama 3.2 family of models: Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date: Sept 25, 2024
Status: This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
License: Use of Llama 3.2 is governed by the Llama 3.2 Community License (a custom, commercial license agreement).
Where to send questions or comments about the model: Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go here.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q5_K_M-GGUF --hf-file llama-3.2-3b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q5_K_M-GGUF --hf-file llama-3.2-3b-instruct-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q5_K_M-GGUF --hf-file llama-3.2-3b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q5_K_M-GGUF --hf-file llama-3.2-3b-instruct-q5_k_m.gguf -c 2048
```
|
Triangle104/Unsloth-Llama-3.2-3B-Instruct-Q5_K_S-GGUF
|
Triangle104
| 2024-11-11T01:43:28Z | 6 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-3",
"llama",
"meta",
"facebook",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:quantized:unsloth/Llama-3.2-3B-Instruct",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-11T01:42:40Z |
---
base_model: unsloth/Llama-3.2-3B-Instruct
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama-3.2-3B-Instruct-Q5_K_S-GGUF
This model was converted to GGUF format from [`unsloth/Llama-3.2-3B-Instruct`](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct) for more details on the model.
---
Model details:

Finetune Llama 3.2, Gemma 2, and Mistral 2-5x faster with 70% less memory via Unsloth!

Special Thanks: A huge thank you to the Meta and Llama team for creating and releasing these models.

Model Information:
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
Model developer: Meta
Model Architecture: Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
Llama 3.2 family of models: Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date: Sept 25, 2024
Status: This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
License: Use of Llama 3.2 is governed by the Llama 3.2 Community License (a custom, commercial license agreement).
Where to send questions or comments about the model: Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go here.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q5_K_S-GGUF --hf-file llama-3.2-3b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q5_K_S-GGUF --hf-file llama-3.2-3b-instruct-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q5_K_S-GGUF --hf-file llama-3.2-3b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q5_K_S-GGUF --hf-file llama-3.2-3b-instruct-q5_k_s.gguf -c 2048
```
|
cachirulo001/theresa
|
cachirulo001
| 2024-11-11T01:42:10Z | 20 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-11T00:18:39Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: th3r3sa
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# theresa
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `th3r3sa` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
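For diffusers users, a minimal sketch in the same style as the other Flux LoRA card in this dump might look like the following; the `weight_name` is an assumption, so use whichever `.safetensors` file the repo actually contains.
```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the FLUX.1-dev base pipeline and attach this LoRA adapter.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# weight_name is assumed; check the repo file list for the actual filename.
pipeline.load_lora_weights("cachirulo001/theresa", weight_name="theresa.safetensors")

# The trigger word th3r3sa must appear in the prompt.
image = pipeline("portrait photo of th3r3sa in a garden").images[0]
image.save("theresa.png")
```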
|
Triangle104/Unsloth-Llama-3.2-3B-Instruct-Q4_K_M-GGUF
|
Triangle104
| 2024-11-11T01:42:05Z | 14 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-3",
"llama",
"meta",
"facebook",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:quantized:unsloth/Llama-3.2-3B-Instruct",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-11T01:41:23Z |
---
base_model: unsloth/Llama-3.2-3B-Instruct
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama-3.2-3B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`unsloth/Llama-3.2-3B-Instruct`](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct) for more details on the model.
---
Model details:

Finetune Llama 3.2, Gemma 2, and Mistral 2-5x faster with 70% less memory via Unsloth!

Special Thanks: A huge thank you to the Meta and Llama team for creating and releasing these models.

Model Information:
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
Model developer: Meta
Model Architecture: Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
Llama 3.2 family of models: Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date: Sept 25, 2024
Status: This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
License: Use of Llama 3.2 is governed by the Llama 3.2 Community License (a custom, commercial license agreement).
Where to send questions or comments about the model: Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go here.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q4_K_M-GGUF --hf-file llama-3.2-3b-instruct-q4_k_m.gguf -c 2048
```
|
featherless-ai-quants/ContextualAI-archangel_sft_llama13b-GGUF
|
featherless-ai-quants
| 2024-11-11T01:39:51Z | 5 | 0 | null |
[
"gguf",
"text-generation",
"base_model:ContextualAI/archangel_sft_llama13b",
"base_model:quantized:ContextualAI/archangel_sft_llama13b",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-11T01:21:58Z |
---
base_model: ContextualAI/archangel_sft_llama13b
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# ContextualAI/archangel_sft_llama13b GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [ContextualAI-archangel_sft_llama13b-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/ContextualAI-archangel_sft_llama13b-GGUF/blob/main/ContextualAI-archangel_sft_llama13b-IQ4_XS.gguf) | 6694.33 MB |
| Q2_K | [ContextualAI-archangel_sft_llama13b-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/ContextualAI-archangel_sft_llama13b-GGUF/blob/main/ContextualAI-archangel_sft_llama13b-Q2_K.gguf) | 4629.39 MB |
| Q3_K_L | [ContextualAI-archangel_sft_llama13b-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/ContextualAI-archangel_sft_llama13b-GGUF/blob/main/ContextualAI-archangel_sft_llama13b-Q3_K_L.gguf) | 6608.54 MB |
| Q3_K_M | [ContextualAI-archangel_sft_llama13b-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/ContextualAI-archangel_sft_llama13b-GGUF/blob/main/ContextualAI-archangel_sft_llama13b-Q3_K_M.gguf) | 6044.17 MB |
| Q3_K_S | [ContextualAI-archangel_sft_llama13b-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/ContextualAI-archangel_sft_llama13b-GGUF/blob/main/ContextualAI-archangel_sft_llama13b-Q3_K_S.gguf) | 5396.82 MB |
| Q4_K_M | [ContextualAI-archangel_sft_llama13b-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/ContextualAI-archangel_sft_llama13b-GGUF/blob/main/ContextualAI-archangel_sft_llama13b-Q4_K_M.gguf) | 7501.56 MB |
| Q4_K_S | [ContextualAI-archangel_sft_llama13b-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/ContextualAI-archangel_sft_llama13b-GGUF/blob/main/ContextualAI-archangel_sft_llama13b-Q4_K_S.gguf) | 7079.30 MB |
| Q5_K_M | [ContextualAI-archangel_sft_llama13b-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/ContextualAI-archangel_sft_llama13b-GGUF/blob/main/ContextualAI-archangel_sft_llama13b-Q5_K_M.gguf) | 8802.34 MB |
| Q5_K_S | [ContextualAI-archangel_sft_llama13b-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/ContextualAI-archangel_sft_llama13b-GGUF/blob/main/ContextualAI-archangel_sft_llama13b-Q5_K_S.gguf) | 8556.64 MB |
| Q6_K | [ContextualAI-archangel_sft_llama13b-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/ContextualAI-archangel_sft_llama13b-GGUF/blob/main/ContextualAI-archangel_sft_llama13b-Q6_K.gguf) | 10184.42 MB |
| Q8_0 | [ContextualAI-archangel_sft_llama13b-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/ContextualAI-archangel_sft_llama13b-GGUF/blob/main/ContextualAI-archangel_sft_llama13b-Q8_0.gguf) | 13190.57 MB |
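As a usage sketch (not part of the original card), any file above can be run directly through llama.cpp's Hugging Face integration; the Q4_K_M quant is used here as an example and can be swapped for any other entry in the table:

```bash
llama-cli --hf-repo featherless-ai-quants/ContextualAI-archangel_sft_llama13b-GGUF \
  --hf-file ContextualAI-archangel_sft_llama13b-Q4_K_M.gguf \
  -p "The meaning to life and the universe is"
```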
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
Triangle104/Llama-3.2-3B-Q5_K_M-GGUF
|
Triangle104
| 2024-11-11T01:37:01Z | 6 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-3",
"llama",
"meta",
"facebook",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:unsloth/Llama-3.2-3B",
"base_model:quantized:unsloth/Llama-3.2-3B",
"license:llama3.2",
"endpoints_compatible",
"region:us"
] | null | 2024-11-11T01:36:18Z |
---
base_model: unsloth/Llama-3.2-3B
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama-3.2-3B-Q5_K_M-GGUF
This model was converted to GGUF format from [`unsloth/Llama-3.2-3B`](https://huggingface.co/unsloth/Llama-3.2-3B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/unsloth/Llama-3.2-3B) for more details on the model.
---
Model details:
-
Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
Special Thanks
A huge thank you to the Meta and Llama team for creating and releasing these models.
Model Information
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
Model developer: Meta
Model Architecture: Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
Llama 3.2 family of models Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date: Sept 25, 2024
Status: This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
License: Use of Llama 3.2 is governed by the Llama 3.2 Community License (a custom, commercial license agreement).
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go here.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.2-3B-Q5_K_M-GGUF --hf-file llama-3.2-3b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.2-3B-Q5_K_M-GGUF --hf-file llama-3.2-3b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.2-3B-Q5_K_M-GGUF --hf-file llama-3.2-3b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.2-3B-Q5_K_M-GGUF --hf-file llama-3.2-3b-q5_k_m.gguf -c 2048
```
|
Triangle104/Llama-3.2-3B-Q5_K_S-GGUF
|
Triangle104
| 2024-11-11T01:36:03Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-3",
"llama",
"meta",
"facebook",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:unsloth/Llama-3.2-3B",
"base_model:quantized:unsloth/Llama-3.2-3B",
"license:llama3.2",
"endpoints_compatible",
"region:us"
] | null | 2024-11-11T01:34:35Z |
---
base_model: unsloth/Llama-3.2-3B
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama-3.2-3B-Q5_K_S-GGUF
This model was converted to GGUF format from [`unsloth/Llama-3.2-3B`](https://huggingface.co/unsloth/Llama-3.2-3B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/unsloth/Llama-3.2-3B) for more details on the model.
---
Model details:
-
Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
Special Thanks
-
A huge thank you to the Meta and Llama team for creating and releasing these models.
Model Information
-
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
Model developer: Meta
Model Architecture: Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
Llama 3.2 family of models Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date: Sept 25, 2024
Status: This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
License: Use of Llama 3.2 is governed by the Llama 3.2 Community License (a custom, commercial license agreement).
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go here.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.2-3B-Q5_K_S-GGUF --hf-file llama-3.2-3b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.2-3B-Q5_K_S-GGUF --hf-file llama-3.2-3b-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.2-3B-Q5_K_S-GGUF --hf-file llama-3.2-3b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.2-3B-Q5_K_S-GGUF --hf-file llama-3.2-3b-q5_k_s.gguf -c 2048
```
|
Triangle104/Llama-3.2-3B-Q4_K_M-GGUF
|
Triangle104
| 2024-11-11T01:34:09Z | 39 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-3",
"llama",
"meta",
"facebook",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:unsloth/Llama-3.2-3B",
"base_model:quantized:unsloth/Llama-3.2-3B",
"license:llama3.2",
"endpoints_compatible",
"region:us"
] | null | 2024-11-11T01:32:17Z |
---
base_model: unsloth/Llama-3.2-3B
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama-3.2-3B-Q4_K_M-GGUF
This model was converted to GGUF format from [`unsloth/Llama-3.2-3B`](https://huggingface.co/unsloth/Llama-3.2-3B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/unsloth/Llama-3.2-3B) for more details on the model.
---
Model details:
-
Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
Special Thanks
A huge thank you to the Meta and Llama team for creating and releasing these models.
Model Information
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
Model developer: Meta
Model Architecture: Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
Llama 3.2 family of models Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date: Sept 25, 2024
Status: This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
License: Use of Llama 3.2 is governed by the Llama 3.2 Community License (a custom, commercial license agreement).
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go here.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.2-3B-Q4_K_M-GGUF --hf-file llama-3.2-3b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.2-3B-Q4_K_M-GGUF --hf-file llama-3.2-3b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.2-3B-Q4_K_M-GGUF --hf-file llama-3.2-3b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.2-3B-Q4_K_M-GGUF --hf-file llama-3.2-3b-q4_k_m.gguf -c 2048
```
|
mradermacher/Qwen2.5-Coder-12.3b-Instruct-GGUF
|
mradermacher
| 2024-11-11T01:33:09Z | 36 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:win10/Qwen2.5-Coder-12.3b-Instruct",
"base_model:quantized:win10/Qwen2.5-Coder-12.3b-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-10T21:17:05Z |
---
base_model: win10/Qwen2.5-Coder-12.3b-Instruct
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/win10/Qwen2.5-Coder-12.3b-Instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-Coder-12.3b-Instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
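As a quick sketch (not part of the original notes), a single quant from the table below can also be fetched and run locally; the filename here is the Q4_K_M entry and can be swapped for any other:

```bash
# Download one quant file from this repo and run it with llama.cpp
huggingface-cli download mradermacher/Qwen2.5-Coder-12.3b-Instruct-GGUF \
  Qwen2.5-Coder-12.3b-Instruct.Q4_K_M.gguf --local-dir .
llama-cli -m Qwen2.5-Coder-12.3b-Instruct.Q4_K_M.gguf \
  -p "Write a Python function that checks whether a string is a palindrome."
```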
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-12.3b-Instruct-GGUF/resolve/main/Qwen2.5-Coder-12.3b-Instruct.Q2_K.gguf) | Q2_K | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-12.3b-Instruct-GGUF/resolve/main/Qwen2.5-Coder-12.3b-Instruct.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-12.3b-Instruct-GGUF/resolve/main/Qwen2.5-Coder-12.3b-Instruct.Q3_K_M.gguf) | Q3_K_M | 6.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-12.3b-Instruct-GGUF/resolve/main/Qwen2.5-Coder-12.3b-Instruct.Q3_K_L.gguf) | Q3_K_L | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-12.3b-Instruct-GGUF/resolve/main/Qwen2.5-Coder-12.3b-Instruct.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-12.3b-Instruct-GGUF/resolve/main/Qwen2.5-Coder-12.3b-Instruct.Q4_0_4_4.gguf) | Q4_0_4_4 | 7.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-12.3b-Instruct-GGUF/resolve/main/Qwen2.5-Coder-12.3b-Instruct.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-12.3b-Instruct-GGUF/resolve/main/Qwen2.5-Coder-12.3b-Instruct.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-12.3b-Instruct-GGUF/resolve/main/Qwen2.5-Coder-12.3b-Instruct.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-12.3b-Instruct-GGUF/resolve/main/Qwen2.5-Coder-12.3b-Instruct.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-12.3b-Instruct-GGUF/resolve/main/Qwen2.5-Coder-12.3b-Instruct.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-12.3b-Instruct-GGUF/resolve/main/Qwen2.5-Coder-12.3b-Instruct.Q8_0.gguf) | Q8_0 | 13.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
huwhitememes/diddy-lora
|
huwhitememes
| 2024-11-11T01:31:52Z | 70 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-11T01:30:43Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/diddy-lora_002880_00_20241003175334.png
text: A photo of Diddy, Diddy, Sean Combs, Puff Daddy, P Diddy, Puffy,
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: A photo of Diddy, Diddy, Sean Combs, Puff Daddy, P Diddy, Puffy,
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# diddy-lora
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `A photo of Diddy, Diddy, Sean Combs, Puff Daddy, P Diddy, Puffy, ` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
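For illustration only, here is a minimal π€ diffusers sketch for loading this LoRA outside the UIs listed above. The `weight_name` is an assumption -- check the repository files for the actual `.safetensors` filename.

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model and attach the LoRA (filename assumed).
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("huwhitememes/diddy-lora", weight_name="diddy-lora.safetensors")
pipe.enable_model_cpu_offload()  # reduces VRAM use; use pipe.to("cuda") if memory allows

# Include the trigger words from above in the prompt.
image = pipe(
    "A photo of Diddy, Diddy, Sean Combs, Puff Daddy, P Diddy, Puffy, at a red carpet event",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("diddy-lora-sample.png")
```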
|
juierror/whisper-base-thai
|
juierror
| 2024-11-11T01:31:46Z | 84 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"whisper",
"automatic-speech-recognition",
"th",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-05-25T16:02:30Z |
---
license: apache-2.0
language:
- th
pipeline_tag: automatic-speech-recognition
---
# Whisper-base Thai finetuned
## 1) Environment Setup
```bash
# visit https://pytorch.org/get-started/locally/ to install pytorch
pip3 install transformers librosa
```
## 2) Usage
```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor
import librosa
device = "cuda" # cpu, cuda
model = WhisperForConditionalGeneration.from_pretrained("juierror/whisper-base-thai").to(device)
processor = WhisperProcessor.from_pretrained("juierror/whisper-base-thai", language="Thai", task="transcribe")
path = "/path/to/audio/file"
def inference(path: str) -> str:
"""
Get the transcription from audio path
Args:
path(str): path to audio file (can be load with librosa)
Returns:
str: transcription
"""
audio, sr = librosa.load(path, sr=16000)
input_features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features
generated_tokens = model.generate(
input_features=input_features.to(device),
max_new_tokens=255,
language="Thai"
).cpu()
transcriptions = processor.tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
return transcriptions[0]
print(inference(path=path))
```
## 3) Evaluate Result
This model has been trained and evaluated on three datasets:
- Common Voice 13
- The Common Voice dataset has been cleaned and divided into training, testing, and development sets. Care has been taken to ensure that the sentences in each set are unique and do not have any duplicates.
- [Gowajee Corpus](https://github.com/ekapolc/gowajee_corpus)
- The Gowajee dataset has already been pre-split into training, development, and testing sets, allowing for direct utilization.
```
@techreport{gowajee,
title = {{Gowajee Corpus}},
author = {Ekapol Chuangsuwanich and Atiwong Suchato and Korrawe Karunratanakul and Burin Naowarat and Chompakorn CChaichot
and Penpicha Sangsa-nga and Thunyathon Anutarases and Nitchakran Chaipojjana},
year = {2020},
institution = {Chulalongkorn University, Faculty of Engineering, Computer Engineering Department},
month = {12},
Date-Added = {2021-07-20},
url = {https://github.com/ekapolc/gowajee_corpus},
note = {Version 0.9.2}
}
```
- [Thai Elderly Speech](https://github.com/VISAI-DATAWOW/Thai-Elderly-Speech-dataset/releases/tag/v1.0.0)
- As for the Thai Elderly Speech dataset, I performed a random split.
The Character Error Rate (CER) is calculated by removing spaces in both the labels and predicted text, and then computing the CER.
The Word Error Rate (WER) is calculated using the PythaiNLP newmm tokenizer to tokenize both the labels and predicted text, and then computing the WER.
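For reference, here is a sketch of how these two metrics could be computed with the `evaluate` and `pythainlp` packages (an illustration of the procedure above, not the exact evaluation script; `predictions` and `labels` are assumed to be lists of strings):

```python
import evaluate
from pythainlp.tokenize import word_tokenize

cer_metric = evaluate.load("cer")
wer_metric = evaluate.load("wer")

predictions = ["..."]  # model transcriptions (placeholder)
labels = ["..."]       # reference transcriptions (placeholder)

# CER: remove spaces from both labels and predictions before scoring
cer = cer_metric.compute(
    predictions=[p.replace(" ", "") for p in predictions],
    references=[l.replace(" ", "") for l in labels],
)

# WER: tokenize with the PyThaiNLP newmm tokenizer, then rejoin with spaces
wer = wer_metric.compute(
    predictions=[" ".join(word_tokenize(p, engine="newmm")) for p in predictions],
    references=[" ".join(word_tokenize(l, engine="newmm")) for l in labels],
)
print(cer, wer)
```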
These are the results.
| Dataset | WER | CER |
|-----------------------------------|-------|------|
| Common Voice 13 | 15.89 | 4.32 |
| Gowajee | 19.58 | 9.01 |
| Thai Elderly Speech (Smart Home) | 7.13 | 2.21 |
| Thai Elderly Speech (Health Care) | 6.75 | 1.89 |
|
sbintuitions/sarashina2-8x70b
|
sbintuitions
| 2024-11-11T01:21:56Z | 13 | 32 | null |
[
"safetensors",
"mixtral",
"ja",
"en",
"arxiv:2212.05055",
"license:other",
"region:us"
] | null | 2024-11-05T04:23:39Z |
---
language:
- ja
- en
license: other
license_link: LICENSE
---
# Sarashina2-8x70B
This repository provides large language models trained by [SB Intuitions](https://www.sbintuitions.co.jp/).
## Required Hardware
BF16 Inference:
- 16x H100
- 16x A100 80GB
## Model Description
We constructed this Sarashina2-8x70B model, which consists of over 450 billion parameters, by applying the [sparse upcycling technique](https://arxiv.org/abs/2212.05055) to our [Sarashina2-70B](https://huggingface.co/sbintuitions/sarashina2-70b) model to efficiently build the Mixture-of-Experts model.
We trained the Sarashina2-8x70B model using a mix of Japanese and English corpora from web data.
## Tokenization
We use a [sentencepiece](https://github.com/google/sentencepiece) tokenizer with a unigram language model and byte-fallback.
We do not apply pre-tokenization with a Japanese tokenizer.
Thus, a user may directly feed raw sentences into the tokenizer.
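As a minimal loading sketch (not part of the original card -- see the hardware requirements above), raw Japanese text can be passed straight to the tokenizer:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "sbintuitions/sarashina2-8x70b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # spreads the experts across the available GPUs
)

# No Japanese pre-tokenization is needed; feed the raw sentence directly.
inputs = tokenizer("ζ₯ζ¬γ§δΈηͺι«γε±±γ―", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```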
## Ethical Considerations and Limitations
Sarashina2 has not been tuned to follow instructions yet.
Therefore, Sarashina2 might generate meaningless sequences, inaccurate outputs, or biased/objectionable content.
Before using Sarashina2, we would like developers to tune models based on human preferences and safety considerations.
## License
[Sarashina Model NonCommercial License Agreement](https://huggingface.co/sbintuitions/sarashina2-8x70B/blob/main/Sarashina%20Model%20NonCommercial%20License%20Agreement)
|
Triangle104/Phi-3.5-mini-TitanFusion-0.1-Q8_0-GGUF
|
Triangle104
| 2024-11-11T01:21:51Z | 9 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:bunnycore/Phi-3.5-mini-TitanFusion-0.1",
"base_model:quantized:bunnycore/Phi-3.5-mini-TitanFusion-0.1",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-11T01:20:52Z |
---
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
base_model: bunnycore/Phi-3.5-mini-TitanFusion-0.1
model-index:
- name: Phi-3.5-mini-TitanFusion-0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 52.28
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-3.5-mini-TitanFusion-0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 35.45
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-3.5-mini-TitanFusion-0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 6.19
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-3.5-mini-TitanFusion-0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 10.85
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-3.5-mini-TitanFusion-0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 15.8
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-3.5-mini-TitanFusion-0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 31.18
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-3.5-mini-TitanFusion-0.1
name: Open LLM Leaderboard
---
# Triangle104/Phi-3.5-mini-TitanFusion-0.1-Q8_0-GGUF
This model was converted to GGUF format from [`bunnycore/Phi-3.5-mini-TitanFusion-0.1`](https://huggingface.co/bunnycore/Phi-3.5-mini-TitanFusion-0.1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bunnycore/Phi-3.5-mini-TitanFusion-0.1) for more details on the model.
---
Model details:
-
This is a merged pre-trained language model created using the TIES merge method. It is based on the microsoft/Phi-3.5-mini-instruct model and incorporates the knowledge and capabilities of the nbeerbower/phi3.5-gutenberg-4B and ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1 models.
Capabilities:
Roleplay: The model can engage in role-playing scenarios, taking on different personas and responding to prompts in a character-appropriate manner.
Creative Writing: It can assist in creative writing tasks, such as brainstorming ideas, generating plotlines, or developing characters.
Reasoning: The model can reason about information and draw conclusions based on the data it has been trained on.
This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the TIES merge method using microsoft/Phi-3.5-mini-instruct as a base.
Models Merged
The following models were included in the merge:
nbeerbower/phi3.5-gutenberg-4B
ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1
Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1
    parameters:
      weight: 1
  - model: nbeerbower/phi3.5-gutenberg-4B
    parameters:
      weight: 1
merge_method: ties
base_model: microsoft/Phi-3.5-mini-instruct
parameters:
  density: 1
  normalize: true
  int8_mask: true
dtype: bfloat16
```
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Phi-3.5-mini-TitanFusion-0.1-Q8_0-GGUF --hf-file phi-3.5-mini-titanfusion-0.1-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Phi-3.5-mini-TitanFusion-0.1-Q8_0-GGUF --hf-file phi-3.5-mini-titanfusion-0.1-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Phi-3.5-mini-TitanFusion-0.1-Q8_0-GGUF --hf-file phi-3.5-mini-titanfusion-0.1-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Phi-3.5-mini-TitanFusion-0.1-Q8_0-GGUF --hf-file phi-3.5-mini-titanfusion-0.1-q8_0.gguf -c 2048
```
|
mav23/natural-sql-7b-GGUF
|
mav23
| 2024-11-11T01:21:25Z | 40 | 0 |
transformers
|
[
"transformers",
"gguf",
"instruct",
"finetune",
"text-generation",
"base_model:deepseek-ai/deepseek-coder-6.7b-instruct",
"base_model:quantized:deepseek-ai/deepseek-coder-6.7b-instruct",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-11T00:26:43Z |
---
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
tags:
- instruct
- finetune
library_name: transformers
license: cc-by-sa-4.0
pipeline_tag: text-generation
---
# **Natural-SQL-7B by ChatDB**
## Natural-SQL-7B is a model with very strong performance on Text-to-SQL instructions: it has an excellent understanding of complex questions and outperforms models of the same size in its space.
<img src="https://cdn-uploads.huggingface.co/production/uploads/648a374f00f7a3374ee64b99/hafdsfrFCqrVbATIzV_EN.png" width="600">
[ChatDB.ai](https://chatdb.ai) | [Notebook](https://github.com/cfahlgren1/natural-sql/blob/main/natural-sql-7b.ipynb) | [Twitter](https://twitter.com/calebfahlgren)
# **Benchmarks**
### *Results via SQL-Eval on novel datasets not seen during training*
<img src="https://cdn-uploads.huggingface.co/production/uploads/648a374f00f7a3374ee64b99/5ynfoKPzI3_-WasQQt7qR.png" width="800">
<em>Big thanks to the [defog](https://huggingface.co/defog) team for open sourcing [sql-eval](https://github.com/defog-ai/sql-eval)</em>π
Natural-SQL can also handle complex, compound questions that other models typically struggle with. There is a more detailed write-up and a small comparison test [here](https://chatdb.ai/post/naturalsql-vs-sqlcoder-for-text-to-sql).
# Usage
Make sure you have the correct version of the transformers library installed:
```sh
pip install transformers==4.35.2
```
### Loading the Model
Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("chatdb/natural-sql-7b")
model = AutoModelForCausalLM.from_pretrained(
"chatdb/natural-sql-7b",
device_map="auto",
torch_dtype=torch.float16,
)
```
### **License**
The model weights are licensed under `CC BY-SA 4.0`, with extra guidelines for responsible use expanded from the original model's [Deepseek](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) license.
You're free to use and adapt the model, even commercially.
If you alter the weights, such as through fine-tuning, you must publicly share your changes under the same `CC BY-SA 4.0` license.
### Generating SQL
```python
# `prompt` should be constructed according to the Prompt Template section below
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
generated_ids = model.generate(
**inputs,
num_return_sequences=1,
eos_token_id=100001,
pad_token_id=100001,
max_new_tokens=400,
do_sample=False,
num_beams=1,
)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs[0].split("```sql")[-1])
```
# Prompt Template
```
# Task
Generate a SQL query to answer the following question: `{natural language question}`
### PostgreSQL Database Schema
The query will run on a database with the following schema:
<SQL Table DDL Statements>
# SQL
Here is the SQL query that answers the question: `{natural language question}`
'''sql
```
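As a rough sketch (not from the original card), the `prompt` used in the generation snippet above can be assembled from this template; the `question` and `schema` values below are placeholders:

```python
question = "Show me the day with the most users joining"
schema = """
CREATE TABLE users (
    user_id SERIAL PRIMARY KEY,
    username VARCHAR(50) NOT NULL,
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);
"""

prompt = f"""# Task
Generate a SQL query to answer the following question: `{question}`

### PostgreSQL Database Schema
The query will run on a database with the following schema:
{schema}

# SQL
Here is the SQL query that answers the question: `{question}`
'''sql
"""
```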
# Example SQL Output
### Example Schemas
```sql
CREATE TABLE users (
user_id SERIAL PRIMARY KEY,
username VARCHAR(50) NOT NULL,
email VARCHAR(100) NOT NULL,
password_hash TEXT NOT NULL,
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE projects (
project_id SERIAL PRIMARY KEY,
project_name VARCHAR(100) NOT NULL,
description TEXT,
start_date DATE,
end_date DATE,
owner_id INTEGER REFERENCES users(user_id)
);
CREATE TABLE tasks (
task_id SERIAL PRIMARY KEY,
task_name VARCHAR(100) NOT NULL,
description TEXT,
due_date DATE,
status VARCHAR(50),
project_id INTEGER REFERENCES projects(project_id)
);
CREATE TABLE taskassignments (
assignment_id SERIAL PRIMARY KEY,
task_id INTEGER REFERENCES tasks(task_id),
user_id INTEGER REFERENCES users(user_id),
assigned_date DATE NOT NULL DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE comments (
comment_id SERIAL PRIMARY KEY,
content TEXT NOT NULL,
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
task_id INTEGER REFERENCES tasks(task_id),
user_id INTEGER REFERENCES users(user_id)
);
```
### Example SQL Outputs
**Question**: **Show me the day with the most users joining**
```sql
SELECT created_at::DATE AS day, COUNT(*) AS user_count
FROM users
GROUP BY day
ORDER BY user_count DESC
LIMIT 1;
```
**Question**: **Show me the project that has a task with the most comments**
```sql
SELECT p.project_name, t.task_name, COUNT(c.comment_id) AS comment_count
FROM projects p
JOIN tasks t ON p.project_id = t.project_id
JOIN comments c ON t.task_id = c.task_id
GROUP BY p.project_name, t.task_name
ORDER BY comment_count DESC
LIMIT 1;
```
**Question**: **What is the ratio of users with gmail addresses vs without?**
```sql
SELECT
SUM(CASE WHEN email ILIKE '%@gmail.com%' THEN 1 ELSE 0 END)::FLOAT / NULLIF(SUM(CASE WHEN email NOT ILIKE '%@gmail.com%' THEN 1 ELSE 0 END), 0) AS gmail_ratio
FROM
users;
```
|
Triangle104/Phi-3.5-mini-TitanFusion-0.1-Q5_K_M-GGUF
|
Triangle104
| 2024-11-11T01:19:20Z | 8 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:bunnycore/Phi-3.5-mini-TitanFusion-0.1",
"base_model:quantized:bunnycore/Phi-3.5-mini-TitanFusion-0.1",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-11T01:18:17Z |
---
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
base_model: bunnycore/Phi-3.5-mini-TitanFusion-0.1
model-index:
- name: Phi-3.5-mini-TitanFusion-0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 52.28
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-3.5-mini-TitanFusion-0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 35.45
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-3.5-mini-TitanFusion-0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 6.19
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-3.5-mini-TitanFusion-0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 10.85
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-3.5-mini-TitanFusion-0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 15.8
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-3.5-mini-TitanFusion-0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 31.18
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-3.5-mini-TitanFusion-0.1
name: Open LLM Leaderboard
---
# Triangle104/Phi-3.5-mini-TitanFusion-0.1-Q5_K_M-GGUF
This model was converted to GGUF format from [`bunnycore/Phi-3.5-mini-TitanFusion-0.1`](https://huggingface.co/bunnycore/Phi-3.5-mini-TitanFusion-0.1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bunnycore/Phi-3.5-mini-TitanFusion-0.1) for more details on the model.
---
Model details:
-
This is a merged pre-trained language model created using the TIES merge method. It is based on the microsoft/Phi-3.5-mini-instruct model and incorporates the knowledge and capabilities of the nbeerbower/phi3.5-gutenberg-4B and ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1 models.
Capabilities:
Roleplay: The model can engage in role-playing scenarios, taking on different personas and responding to prompts in a character-appropriate manner.
Creative Writing: It can assist in creative writing tasks, such as brainstorming ideas, generating plotlines, or developing characters.
Reasoning: The model can reason about information and draw conclusions based on the data it has been trained on.
This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the TIES merge method using microsoft/Phi-3.5-mini-instruct as a base.
Models Merged
The following models were included in the merge:
nbeerbower/phi3.5-gutenberg-4B
ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1
Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1
    parameters:
      weight: 1
  - model: nbeerbower/phi3.5-gutenberg-4B
    parameters:
      weight: 1
merge_method: ties
base_model: microsoft/Phi-3.5-mini-instruct
parameters:
  density: 1
  normalize: true
  int8_mask: true
dtype: bfloat16
```
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Phi-3.5-mini-TitanFusion-0.1-Q5_K_M-GGUF --hf-file phi-3.5-mini-titanfusion-0.1-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Phi-3.5-mini-TitanFusion-0.1-Q5_K_M-GGUF --hf-file phi-3.5-mini-titanfusion-0.1-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Phi-3.5-mini-TitanFusion-0.1-Q5_K_M-GGUF --hf-file phi-3.5-mini-titanfusion-0.1-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Phi-3.5-mini-TitanFusion-0.1-Q5_K_M-GGUF --hf-file phi-3.5-mini-titanfusion-0.1-q5_k_m.gguf -c 2048
```
|
huwhitememes/georgesoros-lora
|
huwhitememes
| 2024-11-11T01:18:16Z | 5 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-11T01:16:50Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/georgesoros-lora_003712_00_20241012204435.png
text: A photo of George Soros, George Soros, Soros,
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: A photo of George Soros, George Soros, Soros,
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# georgesoros-lora
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `A photo of George Soros, George Soros, Soros,` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
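As an illustration (not part of the original card), the LoRA can also be loaded with π€ diffusers; the `weight_name` below is an assumption, so check the repository files for the actual `.safetensors` filename.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("huwhitememes/georgesoros-lora", weight_name="georgesoros-lora.safetensors")
pipe.enable_model_cpu_offload()  # reduces VRAM use; use pipe.to("cuda") if memory allows

# Include the trigger words from above in the prompt.
image = pipe(
    "A photo of George Soros, George Soros, Soros, giving a speech at a conference",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("georgesoros-lora-sample.png")
```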
|
thiagoads/bitllama-legalpt
|
thiagoads
| 2024-11-11T01:06:04Z | 140 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-11T01:05:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
huwhitememes/mikebenzcyber-lora
|
huwhitememes
| 2024-11-11T01:03:18Z | 6 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-11T01:00:23Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/mikebenzcyber-lora_003840_00_20241110160902.png
text: A photo of Mike Benz, Mike Benz Cyber, Mike Benz,
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: A photo of Mike Benz, Mike Benz Cyber, Mike Benz,
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# mikebenzcyber-lora
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `A photo of Mike Benz, Mike Benz Cyber, Mike Benz,` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
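For completeness, a minimal π€ diffusers loading sketch (not part of the original card); the `weight_name` is assumed, so verify the actual `.safetensors` filename in the repository.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("huwhitememes/mikebenzcyber-lora", weight_name="mikebenzcyber-lora.safetensors")
pipe.enable_model_cpu_offload()  # reduces VRAM use; use pipe.to("cuda") if memory allows

# Include the trigger words from above in the prompt.
image = pipe(
    "A photo of Mike Benz, Mike Benz Cyber, Mike Benz, speaking in a podcast studio",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("mikebenzcyber-lora-sample.png")
```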
|
featherless-ai-quants/abacusai-Giraffe-13b-32k-v3-GGUF
|
featherless-ai-quants
| 2024-11-11T01:03:14Z | 34 | 0 | null |
[
"gguf",
"text-generation",
"base_model:abacusai/Giraffe-13b-32k-v3",
"base_model:quantized:abacusai/Giraffe-13b-32k-v3",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-11T00:42:32Z |
---
base_model: abacusai/Giraffe-13b-32k-v3
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# abacusai/Giraffe-13b-32k-v3 GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [abacusai-Giraffe-13b-32k-v3-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/abacusai-Giraffe-13b-32k-v3-GGUF/blob/main/abacusai-Giraffe-13b-32k-v3-IQ4_XS.gguf) | 6694.33 MB |
| Q2_K | [abacusai-Giraffe-13b-32k-v3-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/abacusai-Giraffe-13b-32k-v3-GGUF/blob/main/abacusai-Giraffe-13b-32k-v3-Q2_K.gguf) | 4629.39 MB |
| Q3_K_L | [abacusai-Giraffe-13b-32k-v3-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/abacusai-Giraffe-13b-32k-v3-GGUF/blob/main/abacusai-Giraffe-13b-32k-v3-Q3_K_L.gguf) | 6608.54 MB |
| Q3_K_M | [abacusai-Giraffe-13b-32k-v3-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/abacusai-Giraffe-13b-32k-v3-GGUF/blob/main/abacusai-Giraffe-13b-32k-v3-Q3_K_M.gguf) | 6044.17 MB |
| Q3_K_S | [abacusai-Giraffe-13b-32k-v3-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/abacusai-Giraffe-13b-32k-v3-GGUF/blob/main/abacusai-Giraffe-13b-32k-v3-Q3_K_S.gguf) | 5396.82 MB |
| Q4_K_M | [abacusai-Giraffe-13b-32k-v3-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/abacusai-Giraffe-13b-32k-v3-GGUF/blob/main/abacusai-Giraffe-13b-32k-v3-Q4_K_M.gguf) | 7501.56 MB |
| Q4_K_S | [abacusai-Giraffe-13b-32k-v3-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/abacusai-Giraffe-13b-32k-v3-GGUF/blob/main/abacusai-Giraffe-13b-32k-v3-Q4_K_S.gguf) | 7079.30 MB |
| Q5_K_M | [abacusai-Giraffe-13b-32k-v3-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/abacusai-Giraffe-13b-32k-v3-GGUF/blob/main/abacusai-Giraffe-13b-32k-v3-Q5_K_M.gguf) | 8802.34 MB |
| Q5_K_S | [abacusai-Giraffe-13b-32k-v3-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/abacusai-Giraffe-13b-32k-v3-GGUF/blob/main/abacusai-Giraffe-13b-32k-v3-Q5_K_S.gguf) | 8556.64 MB |
| Q6_K | [abacusai-Giraffe-13b-32k-v3-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/abacusai-Giraffe-13b-32k-v3-GGUF/blob/main/abacusai-Giraffe-13b-32k-v3-Q6_K.gguf) | 10184.42 MB |
| Q8_0 | [abacusai-Giraffe-13b-32k-v3-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/abacusai-Giraffe-13b-32k-v3-GGUF/blob/main/abacusai-Giraffe-13b-32k-v3-Q8_0.gguf) | 13190.57 MB |
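For reference (not part of the original card), any quant above can be run straight from the Hub with llama.cpp; the Q4_K_M file is shown here as an example:

```bash
llama-cli --hf-repo featherless-ai-quants/abacusai-Giraffe-13b-32k-v3-GGUF \
  --hf-file abacusai-Giraffe-13b-32k-v3-Q4_K_M.gguf \
  -p "The meaning to life and the universe is"
```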
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
NESPED-GEN/Llama-3.2-text2SQL-indentacao
|
NESPED-GEN
| 2024-11-11T01:01:56Z | 139 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-11T00:59:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Zekunli/qwen2.5-7b-alpaca-discrim-w-cot-w-cor
|
Zekunli
| 2024-11-11T01:01:54Z | 8 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-11T00:53:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
janjibDEV/BERT-rating-classifier
|
janjibDEV
| 2024-11-11T01:00:14Z | 119 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased-finetuned-sst-2-english",
"base_model:finetune:distilbert/distilbert-base-uncased-finetuned-sst-2-english",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-11T00:24:31Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased-finetuned-sst-2-english
tags:
- generated_from_trainer
model-index:
- name: BERT-rating-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT-rating-classifier
This model is a fine-tuned version of [distilbert/distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert/distilbert-base-uncased-finetuned-sst-2-english) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.9588 | 1.0 | 6250 | 0.9409 |
| 0.8278 | 2.0 | 12500 | 0.9520 |
| 0.7204 | 3.0 | 18750 | 0.9865 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
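As a quick-start sketch (assuming the standard 🤗 transformers `pipeline` API; the label set and any rating mapping are defined by the fine-tuned classification head and are not documented above):
```python
from transformers import pipeline

# Load the fine-tuned rating classifier straight from the Hub.
classifier = pipeline("text-classification", model="janjibDEV/BERT-rating-classifier")

# Score a sample review; the returned label comes from the head's id2label mapping.
print(classifier("The product arrived quickly and works exactly as described."))
```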
|
NESPED-GEN/Llama-3.2-text2SQL-v0
|
NESPED-GEN
| 2024-11-11T00:52:16Z | 142 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-11T00:50:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TingChen-ppmc/whisper-small-shanghai-tts-vc-0.25-1.0
|
TingChen-ppmc
| 2024-11-11T00:51:48Z | 77 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-08-06T17:17:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
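In the absence of author-provided code, a minimal hedged sketch (the repository is tagged as a Whisper checkpoint for automatic speech recognition, so the standard 🤗 transformers ASR pipeline is assumed; the audio path is illustrative):
```python
from transformers import pipeline

# The repo is tagged as a Whisper ASR checkpoint; load it with the ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="TingChen-ppmc/whisper-small-shanghai-tts-vc-0.25-1.0",
)

# Transcribe a local audio file (replace with a real recording).
print(asr("sample.wav")["text"])
```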
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/zephyr-7b-sft-full-i1-GGUF
|
mradermacher
| 2024-11-11T00:49:12Z | 26 | 0 |
transformers
|
[
"transformers",
"gguf",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"en",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:rasoolfa/zephyr-7b-sft-full",
"base_model:quantized:rasoolfa/zephyr-7b-sft-full",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-10T18:24:29Z |
---
base_model: rasoolfa/zephyr-7b-sft-full
datasets:
- HuggingFaceH4/ultrachat_200k
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/rasoolfa/zephyr-7b-sft-full
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/zephyr-7b-sft-full-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
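As a minimal sketch (assuming the `huggingface_hub` Python client; the filename is one of the quants listed in the table below), a single file can be fetched like this:
```python
from huggingface_hub import hf_hub_download

# Fetch a single quant from this repo into the local Hugging Face cache.
path = hf_hub_download(
    repo_id="mradermacher/zephyr-7b-sft-full-i1-GGUF",
    filename="zephyr-7b-sft-full.i1-Q4_K_M.gguf",
)
print(path)  # hand this path to llama.cpp, llama-cpp-python, LM Studio, etc.
```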
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-sft-full-i1-GGUF/resolve/main/zephyr-7b-sft-full.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-sft-full-i1-GGUF/resolve/main/zephyr-7b-sft-full.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-sft-full-i1-GGUF/resolve/main/zephyr-7b-sft-full.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-sft-full-i1-GGUF/resolve/main/zephyr-7b-sft-full.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-sft-full-i1-GGUF/resolve/main/zephyr-7b-sft-full.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-sft-full-i1-GGUF/resolve/main/zephyr-7b-sft-full.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-sft-full-i1-GGUF/resolve/main/zephyr-7b-sft-full.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-sft-full-i1-GGUF/resolve/main/zephyr-7b-sft-full.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-sft-full-i1-GGUF/resolve/main/zephyr-7b-sft-full.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-sft-full-i1-GGUF/resolve/main/zephyr-7b-sft-full.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-sft-full-i1-GGUF/resolve/main/zephyr-7b-sft-full.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-sft-full-i1-GGUF/resolve/main/zephyr-7b-sft-full.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-sft-full-i1-GGUF/resolve/main/zephyr-7b-sft-full.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-sft-full-i1-GGUF/resolve/main/zephyr-7b-sft-full.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-sft-full-i1-GGUF/resolve/main/zephyr-7b-sft-full.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-sft-full-i1-GGUF/resolve/main/zephyr-7b-sft-full.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-sft-full-i1-GGUF/resolve/main/zephyr-7b-sft-full.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-sft-full-i1-GGUF/resolve/main/zephyr-7b-sft-full.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-sft-full-i1-GGUF/resolve/main/zephyr-7b-sft-full.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-sft-full-i1-GGUF/resolve/main/zephyr-7b-sft-full.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-sft-full-i1-GGUF/resolve/main/zephyr-7b-sft-full.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-sft-full-i1-GGUF/resolve/main/zephyr-7b-sft-full.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-sft-full-i1-GGUF/resolve/main/zephyr-7b-sft-full.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-sft-full-i1-GGUF/resolve/main/zephyr-7b-sft-full.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
featherless-ai-quants/yentinglin-Taiwan-LLM-13B-v2.0-chat-GGUF
|
featherless-ai-quants
| 2024-11-11T00:48:16Z | 10 | 0 | null |
[
"gguf",
"text-generation",
"base_model:yentinglin/Taiwan-LLM-13B-v2.0-chat",
"base_model:quantized:yentinglin/Taiwan-LLM-13B-v2.0-chat",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-08T10:43:32Z |
---
base_model: yentinglin/Taiwan-LLM-13B-v2.0-chat
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# yentinglin/Taiwan-LLM-13B-v2.0-chat GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [yentinglin-Taiwan-LLM-13B-v2.0-chat-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/yentinglin-Taiwan-LLM-13B-v2.0-chat-GGUF/blob/main/yentinglin-Taiwan-LLM-13B-v2.0-chat-IQ4_XS.gguf) | 6694.34 MB |
| Q2_K | [yentinglin-Taiwan-LLM-13B-v2.0-chat-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/yentinglin-Taiwan-LLM-13B-v2.0-chat-GGUF/blob/main/yentinglin-Taiwan-LLM-13B-v2.0-chat-Q2_K.gguf) | 4629.39 MB |
| Q3_K_L | [yentinglin-Taiwan-LLM-13B-v2.0-chat-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/yentinglin-Taiwan-LLM-13B-v2.0-chat-GGUF/blob/main/yentinglin-Taiwan-LLM-13B-v2.0-chat-Q3_K_L.gguf) | 6608.54 MB |
| Q3_K_M | [yentinglin-Taiwan-LLM-13B-v2.0-chat-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/yentinglin-Taiwan-LLM-13B-v2.0-chat-GGUF/blob/main/yentinglin-Taiwan-LLM-13B-v2.0-chat-Q3_K_M.gguf) | 6044.17 MB |
| Q3_K_S | [yentinglin-Taiwan-LLM-13B-v2.0-chat-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/yentinglin-Taiwan-LLM-13B-v2.0-chat-GGUF/blob/main/yentinglin-Taiwan-LLM-13B-v2.0-chat-Q3_K_S.gguf) | 5396.83 MB |
| Q4_K_M | [yentinglin-Taiwan-LLM-13B-v2.0-chat-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/yentinglin-Taiwan-LLM-13B-v2.0-chat-GGUF/blob/main/yentinglin-Taiwan-LLM-13B-v2.0-chat-Q4_K_M.gguf) | 7501.56 MB |
| Q4_K_S | [yentinglin-Taiwan-LLM-13B-v2.0-chat-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/yentinglin-Taiwan-LLM-13B-v2.0-chat-GGUF/blob/main/yentinglin-Taiwan-LLM-13B-v2.0-chat-Q4_K_S.gguf) | 7079.30 MB |
| Q5_K_M | [yentinglin-Taiwan-LLM-13B-v2.0-chat-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/yentinglin-Taiwan-LLM-13B-v2.0-chat-GGUF/blob/main/yentinglin-Taiwan-LLM-13B-v2.0-chat-Q5_K_M.gguf) | 8802.34 MB |
| Q5_K_S | [yentinglin-Taiwan-LLM-13B-v2.0-chat-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/yentinglin-Taiwan-LLM-13B-v2.0-chat-GGUF/blob/main/yentinglin-Taiwan-LLM-13B-v2.0-chat-Q5_K_S.gguf) | 8556.64 MB |
| Q6_K | [yentinglin-Taiwan-LLM-13B-v2.0-chat-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/yentinglin-Taiwan-LLM-13B-v2.0-chat-GGUF/blob/main/yentinglin-Taiwan-LLM-13B-v2.0-chat-Q6_K.gguf) | 10184.42 MB |
| Q8_0 | [yentinglin-Taiwan-LLM-13B-v2.0-chat-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/yentinglin-Taiwan-LLM-13B-v2.0-chat-GGUF/blob/main/yentinglin-Taiwan-LLM-13B-v2.0-chat-Q8_0.gguf) | 13190.58 MB |
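As a minimal usage sketch (assuming the `huggingface_hub` and `llama-cpp-python` packages; the Q4_K_M filename comes from the table above, and the plain-completion prompt below does not apply the model's own chat template):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # provided by the llama-cpp-python package

# Download one quant from the table above, then load it locally.
path = hf_hub_download(
    repo_id="featherless-ai-quants/yentinglin-Taiwan-LLM-13B-v2.0-chat-GGUF",
    filename="yentinglin-Taiwan-LLM-13B-v2.0-chat-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)

# Plain completion call; format the prompt per the base model's template for best results.
out = llm("Briefly introduce Taiwan's tallest mountain.", max_tokens=64)
print(out["choices"][0]["text"])
```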
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
win10/Blue-Rose-Coder-12.3B-Instruct-Q8_0-GGUF
|
win10
| 2024-11-11T00:42:55Z | 11 | 1 | null |
[
"gguf",
"merge",
"mergekit",
"lazymergekit",
"WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B",
"Qwen/Qwen2.5-Coder-7B-Instruct",
"llama-cpp",
"gguf-my-repo",
"base_model:win10/Blue-Rose-Coder-12.3B-Instruct",
"base_model:quantized:win10/Blue-Rose-Coder-12.3B-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-11T00:41:57Z |
---
base_model: win10/Blue-Rose-Coder-12.3B-Instruct
tags:
- merge
- mergekit
- lazymergekit
- WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B
- Qwen/Qwen2.5-Coder-7B-Instruct
- llama-cpp
- gguf-my-repo
---
# win10/Blue-Rose-Coder-12.3B-Instruct-Q8_0-GGUF
This model was converted to GGUF format from [`win10/Blue-Rose-Coder-12.3B-Instruct`](https://huggingface.co/win10/Blue-Rose-Coder-12.3B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/win10/Blue-Rose-Coder-12.3B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo win10/Blue-Rose-Coder-12.3B-Instruct-Q8_0-GGUF --hf-file blue-rose-coder-12.3b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo win10/Blue-Rose-Coder-12.3B-Instruct-Q8_0-GGUF --hf-file blue-rose-coder-12.3b-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo win10/Blue-Rose-Coder-12.3B-Instruct-Q8_0-GGUF --hf-file blue-rose-coder-12.3b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo win10/Blue-Rose-Coder-12.3B-Instruct-Q8_0-GGUF --hf-file blue-rose-coder-12.3b-instruct-q8_0.gguf -c 2048
```
|
NESPED-GEN/TinyLlama-text2SQL-alias
|
NESPED-GEN
| 2024-11-11T00:31:48Z | 139 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-11T00:30:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ImranzamanML/arabert_finetuned_model
|
ImranzamanML
| 2024-11-11T00:27:41Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-11-11T00:27:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
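In the absence of author-provided code, a minimal hedged sketch (the repository is tagged as a BERT feature-extraction checkpoint, so the standard 🤗 transformers AutoModel API is assumed; mean pooling is an illustrative choice, not necessarily the authors'):
```python
import torch
from transformers import AutoModel, AutoTokenizer

repo = "ImranzamanML/arabert_finetuned_model"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

# Encode one Arabic sentence and mean-pool the last hidden states into a single vector.
inputs = tokenizer("هذه جملة تجريبية", return_tensors="pt")
with torch.no_grad():
    embedding = model(**inputs).last_hidden_state.mean(dim=1)
print(embedding.shape)  # (1, hidden_size)
```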
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
featherless-ai-quants/AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-GGUF
|
featherless-ai-quants
| 2024-11-11T00:17:48Z | 17 | 0 | null |
[
"gguf",
"text-generation",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-11T00:05:32Z |
---
base_model: AIGym/Llama-3-8B-Instruct-Gradient-1048k-Agent
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# AIGym/Llama-3-8B-Instruct-Gradient-1048k-Agent GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-GGUF/blob/main/AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-GGUF/blob/main/AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-GGUF/blob/main/AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-GGUF/blob/main/AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-GGUF/blob/main/AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-GGUF/blob/main/AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-GGUF/blob/main/AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-GGUF/blob/main/AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-GGUF/blob/main/AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-GGUF/blob/main/AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-GGUF/blob/main/AIGym-Llama-3-8B-Instruct-Gradient-1048k-Agent-Q8_0.gguf) | 8145.11 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
mav23/SecurityLLM-GGUF
|
mav23
| 2024-11-11T00:17:20Z | 90 | 0 |
transformers
|
[
"transformers",
"gguf",
"security",
"cybersecwithai",
"threat",
"vulnerability",
"infosec",
"zysec.ai",
"cyber security",
"ai4security",
"llmsecurity",
"cyber",
"malware analysis",
"exploitdev",
"ai4good",
"aisecurity",
"cybersec",
"cybersecurity",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-10T23:27:14Z |
---
library_name: transformers
license: apache-2.0
tags:
- security
- cybersecwithai
- threat
- vulnerability
- infosec
- zysec.ai
- cyber security
- ai4security
- llmsecurity
- cyber
- malware analysis
- exploitdev
- ai4good
- aisecurity
- cybersec
- cybersecurity
---
# ZySec-7B
ZySec-7B stands as a pivotal innovation for security professionals, leveraging the advanced capabilities of HuggingFace's Zephyr language model series. This AI model is crafted to be an omnipresent cybersecurity ally, offering on-demand expert guidance on cybersecurity issues. Picture ZySec-7B as an ever-present digital teammate, adept at navigating the complexities of security challenges.
The efficacy of ZySec-7B lies in its comprehensive training across numerous cybersecurity fields, providing a deep and wide-ranging understanding of the sector. ZySec is developed using the DPO technique, utilizing a varied dataset encompassing critical topics such as:
- Sophisticated areas like Attack Surface Threats, Cloud Security, and the Cyber Kill Chain.
- Key compliance and regulatory frameworks, including CIS Controls, FedRAMP, PCI DSS, and ISO/IEC 27001.
- Practical aspects like Cloud Secure Migration, Data Exfiltration Techniques, and Security Incident Handling.
- Crucial strategic fields such as Security Governance, Risk Management, and Security Architecture Review.
ZySec-7B's training spans over 30 unique domains, each enriched with thousands of data points, delivering unparalleled expertise.
As the first of its kind in an open-source, AI-driven cybersecurity series, ZySec-7B transcends the conventional role of a support tool, redefining organizational security approaches. Its open-source nature not only invites community contributions but also enhances its flexibility and transparency in managing vast cybersecurity data. ZySec-7B is instrumental in providing vital, actionable insights for strategic decision-making and advanced risk management. More than mere software, ZySec-7B is a community-enhanced strategic tool, equipping your team to proactively confront and stay ahead of the dynamic landscape of cyber threats and regulatory demands.
# For suggestions please use [Road Map](https://zysec-ai.productlift.dev/t/roadmap)
<img src="https://huggingface.co/aihub-app/ZySec-7B-v1/resolve/main/ZySec-7B-dataset-composition.png?download=true" alt="Dataset Distribution" width="90%"/>
Details of dataset distribution here - [Dataset Distribution](https://huggingface.co/aihub-app/ZySec-7B/resolve/main/ZySec-7B-dataset-composition.png?download=true)
Fully compatible with [LM Studio](https://lmstudio.ai). Search for "Zysec" to find it. Below is a sample output of ZySec writing an email to John about database security, using LM Studio:
<img src="https://huggingface.co/aihub-app/ZySec-7B-v1/resolve/main/sample-output.png" alt="Sample Output" width="90%"/>
---
The training is funded by [ZySec AI](https://www.zysec.app), the mobile app for Cyber Security professionals.
Official GGUF version is hosted here - [ZySec-7B-v1-GGUF on HuggingFace](https://huggingface.co/aihub-app/ZySec-7B-v1-GGUF)
## [ZySec AI: Unleashing the Potential of the ZySec Series Model](https://github.com/ZySec-AI/ZySec)
Project ZySec, an integral part of ZySec AI, stands at the forefront of integrating Artificial Intelligence into Cybersecurity. Centered around the innovative ZySec 7B model, it's designed to revolutionize the cybersecurity landscape with AI-driven solutions. ZySec AI isn't just a tool; it's a transformative approach, blending AI's cutting-edge capabilities with the unique intricacies of cybersecurity while ensuring privacy and security.
### Discover the Key Features of Project ZySec
- **AI-Driven Cybersecurity:** Tap into the power of the ZySec 7B model, a bespoke AI solution fine-tuned for cybersecurity.
- **24/7 Expert Assistance:** Benefit from round-the-clock support and expert advice, guaranteeing smooth operations during any SOC shift.
- **Efficient Playbook Access:** Streamline your workflow with quick and easy access to playbooks and documents, enhancing information retrieval.
- **Standards Explorer:** Navigate various standards with ease, akin to a seasoned expert's proficiency.
- **Ongoing Internet Research:** Leverage AI-enabled, thorough internet research for exhaustive insights. (Note: Internet use is optional and specific to this feature).
### About Project ZySec by ZySec AI
ZySec AI is an open-source project with a vision of fusing Cybersecurity with Artificial Intelligence. Our goal is to transform the way security professionals engage with technology. More than a mere tool, ZySec AI symbolizes a comprehensive strategy to augment security operations, merging the innovative essence of AI with cybersecurity's distinctive challenges, always ensuring privacy and security.
https://github.com/ZySec-AI/ZySec
### The ZySec Roadmap
https://github.com/ZySec-AI/.github/blob/main/roadmap.md
|
NESPED-GEN/TinyLlama-text2SQL-indentacao
|
NESPED-GEN
| 2024-11-11T00:13:21Z | 139 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-11T00:11:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
featherless-ai-quants/Ja-ck-llama-2-13b-DPO-Y24-v2-GGUF
|
featherless-ai-quants
| 2024-11-11T00:11:11Z | 9 | 0 | null |
[
"gguf",
"text-generation",
"base_model:Ja-ck/llama-2-13b-DPO-Y24-v2",
"base_model:quantized:Ja-ck/llama-2-13b-DPO-Y24-v2",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-10T23:52:13Z |
---
base_model: Ja-ck/llama-2-13b-DPO-Y24-v2
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Ja-ck/llama-2-13b-DPO-Y24-v2 GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [Ja-ck-llama-2-13b-DPO-Y24-v2-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Ja-ck-llama-2-13b-DPO-Y24-v2-GGUF/blob/main/Ja-ck-llama-2-13b-DPO-Y24-v2-IQ4_XS.gguf) | 6694.33 MB |
| Q2_K | [Ja-ck-llama-2-13b-DPO-Y24-v2-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Ja-ck-llama-2-13b-DPO-Y24-v2-GGUF/blob/main/Ja-ck-llama-2-13b-DPO-Y24-v2-Q2_K.gguf) | 4629.39 MB |
| Q3_K_L | [Ja-ck-llama-2-13b-DPO-Y24-v2-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Ja-ck-llama-2-13b-DPO-Y24-v2-GGUF/blob/main/Ja-ck-llama-2-13b-DPO-Y24-v2-Q3_K_L.gguf) | 6608.54 MB |
| Q3_K_M | [Ja-ck-llama-2-13b-DPO-Y24-v2-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Ja-ck-llama-2-13b-DPO-Y24-v2-GGUF/blob/main/Ja-ck-llama-2-13b-DPO-Y24-v2-Q3_K_M.gguf) | 6044.17 MB |
| Q3_K_S | [Ja-ck-llama-2-13b-DPO-Y24-v2-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Ja-ck-llama-2-13b-DPO-Y24-v2-GGUF/blob/main/Ja-ck-llama-2-13b-DPO-Y24-v2-Q3_K_S.gguf) | 5396.82 MB |
| Q4_K_M | [Ja-ck-llama-2-13b-DPO-Y24-v2-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Ja-ck-llama-2-13b-DPO-Y24-v2-GGUF/blob/main/Ja-ck-llama-2-13b-DPO-Y24-v2-Q4_K_M.gguf) | 7501.56 MB |
| Q4_K_S | [Ja-ck-llama-2-13b-DPO-Y24-v2-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Ja-ck-llama-2-13b-DPO-Y24-v2-GGUF/blob/main/Ja-ck-llama-2-13b-DPO-Y24-v2-Q4_K_S.gguf) | 7079.30 MB |
| Q5_K_M | [Ja-ck-llama-2-13b-DPO-Y24-v2-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Ja-ck-llama-2-13b-DPO-Y24-v2-GGUF/blob/main/Ja-ck-llama-2-13b-DPO-Y24-v2-Q5_K_M.gguf) | 8802.34 MB |
| Q5_K_S | [Ja-ck-llama-2-13b-DPO-Y24-v2-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Ja-ck-llama-2-13b-DPO-Y24-v2-GGUF/blob/main/Ja-ck-llama-2-13b-DPO-Y24-v2-Q5_K_S.gguf) | 8556.64 MB |
| Q6_K | [Ja-ck-llama-2-13b-DPO-Y24-v2-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Ja-ck-llama-2-13b-DPO-Y24-v2-GGUF/blob/main/Ja-ck-llama-2-13b-DPO-Y24-v2-Q6_K.gguf) | 10184.42 MB |
| Q8_0 | [Ja-ck-llama-2-13b-DPO-Y24-v2-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Ja-ck-llama-2-13b-DPO-Y24-v2-GGUF/blob/main/Ja-ck-llama-2-13b-DPO-Y24-v2-Q8_0.gguf) | 13190.57 MB |
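If you prefer scripting the download instead of clicking the links above, here is a minimal sketch with `huggingface_hub` (the repo id and file name are taken from this page; which GGUF runtime you load the file with afterwards is up to you):

```python
from huggingface_hub import hf_hub_download

# Fetch the Q4_K_M file from this repo (file name taken from the table above).
path = hf_hub_download(
    repo_id="featherless-ai-quants/Ja-ck-llama-2-13b-DPO-Y24-v2-GGUF",
    filename="Ja-ck-llama-2-13b-DPO-Y24-v2-Q4_K_M.gguf",
)
print(path)  # local path to pass to your GGUF runtime (llama.cpp, etc.)
```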
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
Yumeng-Liu/YumengBot
|
Yumeng-Liu
| 2024-11-11T00:05:42Z | 140 | 1 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"en",
"arxiv:1910.09700",
"base_model:microsoft/DialoGPT-small",
"base_model:finetune:microsoft/DialoGPT-small",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-11T00:04:00Z |
---
library_name: transformers
license: mit
language:
- en
base_model:
- microsoft/DialoGPT-small
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Yumeng Liu
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
HZDR-FWGEL/UCD-CLCD256-A2Net
|
HZDR-FWGEL
| 2024-11-11T00:05:23Z | 5 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2024-11-11T00:05:19Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed]
|
NESPED-GEN/TinyLlama-text2SQL-v0
|
NESPED-GEN
| 2024-11-11T00:05:19Z | 139 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-11T00:03:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MrFx/wav2vec2-large-xls-r-300m-turkish-colab
|
MrFx
| 2024-11-11T00:03:04Z | 105 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-11-10T18:20:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/abhishek_-_hepu-o4zf-ravz-7-0-gguf
|
RichardErkhov
| 2024-11-11T00:02:51Z | 5 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-10T21:50:12Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
hepu-o4zf-ravz-7-0 - GGUF
- Model creator: https://huggingface.co/abhishek/
- Original model: https://huggingface.co/abhishek/hepu-o4zf-ravz-7-0/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [hepu-o4zf-ravz-7-0.Q2_K.gguf](https://huggingface.co/RichardErkhov/abhishek_-_hepu-o4zf-ravz-7-0-gguf/blob/main/hepu-o4zf-ravz-7-0.Q2_K.gguf) | Q2_K | 2.53GB |
| [hepu-o4zf-ravz-7-0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/abhishek_-_hepu-o4zf-ravz-7-0-gguf/blob/main/hepu-o4zf-ravz-7-0.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [hepu-o4zf-ravz-7-0.Q3_K.gguf](https://huggingface.co/RichardErkhov/abhishek_-_hepu-o4zf-ravz-7-0-gguf/blob/main/hepu-o4zf-ravz-7-0.Q3_K.gguf) | Q3_K | 3.28GB |
| [hepu-o4zf-ravz-7-0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/abhishek_-_hepu-o4zf-ravz-7-0-gguf/blob/main/hepu-o4zf-ravz-7-0.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [hepu-o4zf-ravz-7-0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/abhishek_-_hepu-o4zf-ravz-7-0-gguf/blob/main/hepu-o4zf-ravz-7-0.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [hepu-o4zf-ravz-7-0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/abhishek_-_hepu-o4zf-ravz-7-0-gguf/blob/main/hepu-o4zf-ravz-7-0.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [hepu-o4zf-ravz-7-0.Q4_0.gguf](https://huggingface.co/RichardErkhov/abhishek_-_hepu-o4zf-ravz-7-0-gguf/blob/main/hepu-o4zf-ravz-7-0.Q4_0.gguf) | Q4_0 | 3.83GB |
| [hepu-o4zf-ravz-7-0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/abhishek_-_hepu-o4zf-ravz-7-0-gguf/blob/main/hepu-o4zf-ravz-7-0.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [hepu-o4zf-ravz-7-0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/abhishek_-_hepu-o4zf-ravz-7-0-gguf/blob/main/hepu-o4zf-ravz-7-0.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [hepu-o4zf-ravz-7-0.Q4_K.gguf](https://huggingface.co/RichardErkhov/abhishek_-_hepu-o4zf-ravz-7-0-gguf/blob/main/hepu-o4zf-ravz-7-0.Q4_K.gguf) | Q4_K | 4.07GB |
| [hepu-o4zf-ravz-7-0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/abhishek_-_hepu-o4zf-ravz-7-0-gguf/blob/main/hepu-o4zf-ravz-7-0.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [hepu-o4zf-ravz-7-0.Q4_1.gguf](https://huggingface.co/RichardErkhov/abhishek_-_hepu-o4zf-ravz-7-0-gguf/blob/main/hepu-o4zf-ravz-7-0.Q4_1.gguf) | Q4_1 | 4.24GB |
| [hepu-o4zf-ravz-7-0.Q5_0.gguf](https://huggingface.co/RichardErkhov/abhishek_-_hepu-o4zf-ravz-7-0-gguf/blob/main/hepu-o4zf-ravz-7-0.Q5_0.gguf) | Q5_0 | 4.65GB |
| [hepu-o4zf-ravz-7-0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/abhishek_-_hepu-o4zf-ravz-7-0-gguf/blob/main/hepu-o4zf-ravz-7-0.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [hepu-o4zf-ravz-7-0.Q5_K.gguf](https://huggingface.co/RichardErkhov/abhishek_-_hepu-o4zf-ravz-7-0-gguf/blob/main/hepu-o4zf-ravz-7-0.Q5_K.gguf) | Q5_K | 4.78GB |
| [hepu-o4zf-ravz-7-0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/abhishek_-_hepu-o4zf-ravz-7-0-gguf/blob/main/hepu-o4zf-ravz-7-0.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [hepu-o4zf-ravz-7-0.Q5_1.gguf](https://huggingface.co/RichardErkhov/abhishek_-_hepu-o4zf-ravz-7-0-gguf/blob/main/hepu-o4zf-ravz-7-0.Q5_1.gguf) | Q5_1 | 5.07GB |
| [hepu-o4zf-ravz-7-0.Q6_K.gguf](https://huggingface.co/RichardErkhov/abhishek_-_hepu-o4zf-ravz-7-0-gguf/blob/main/hepu-o4zf-ravz-7-0.Q6_K.gguf) | Q6_K | 5.53GB |
| [hepu-o4zf-ravz-7-0.Q8_0.gguf](https://huggingface.co/RichardErkhov/abhishek_-_hepu-o4zf-ravz-7-0-gguf/blob/main/hepu-o4zf-ravz-7-0.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
BenevolenceMessiah/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2
|
BenevolenceMessiah
| 2024-11-11T00:02:34Z | 6 | 2 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:BenevolenceMessiah/Qwen2.5-Coder-7B-Chat-Instruct-TIES-v1.2",
"base_model:merge:BenevolenceMessiah/Qwen2.5-Coder-7B-Chat-Instruct-TIES-v1.2",
"base_model:MadeAgents/Hammer2.0-7b",
"base_model:merge:MadeAgents/Hammer2.0-7b",
"base_model:Qwen/Qwen2.5-Coder-7B",
"base_model:merge:Qwen/Qwen2.5-Coder-7B",
"base_model:huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated",
"base_model:merge:huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-10T23:58:21Z |
---
base_model:
- BenevolenceMessiah/Qwen2.5-Coder-7B-Chat-Instruct-TIES-v1.2
- Qwen/Qwen2.5-Coder-7B
- MadeAgents/Hammer2.0-7b
- huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen2.5-Coder-7B](https://huggingface.co/Qwen/Qwen2.5-Coder-7B) as a base.
### Models Merged
The following models were included in the merge:
* [BenevolenceMessiah/Qwen2.5-Coder-7B-Chat-Instruct-TIES-v1.2](https://huggingface.co/BenevolenceMessiah/Qwen2.5-Coder-7B-Chat-Instruct-TIES-v1.2)
* [MadeAgents/Hammer2.0-7b](https://huggingface.co/MadeAgents/Hammer2.0-7b)
* [huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated](https://huggingface.co/huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
# Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2
models:
- model: huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated
parameters:
density: 1.0
weight: 1.0
- model: MadeAgents/Hammer2.0-7b
parameters:
density: 1.0
weight: 1.0
- model: BenevolenceMessiah/Qwen2.5-Coder-7B-Chat-Instruct-TIES-v1.2 # Reflecting Update 11/9/2024
parameters:
density: 1.0
weight: 1.0
merge_method: ties
base_model: Qwen/Qwen2.5-Coder-7B # Reflecting Update 11/9/2024
parameters:
normalize: true
int8_mask: false
dtype: bfloat16
tokenizer_source: union
```
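For reference, the merged checkpoint published from this configuration loads like any other Qwen2.5 model. A minimal sketch, assuming a recent `transformers` with Qwen2.5 support (the example prompt is illustrative only):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BenevolenceMessiah/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Build a chat-formatted prompt and generate a completion.
messages = [{"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```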
|
mradermacher/RTLCoder-v1.1-GGUF
|
mradermacher
| 2024-11-10T23:50:09Z | 21 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:ishorn5/RTLCoder-v1.1",
"base_model:quantized:ishorn5/RTLCoder-v1.1",
"endpoints_compatible",
"region:us"
] | null | 2024-11-09T19:11:43Z |
---
base_model: ishorn5/RTLCoder-v1.1
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ishorn5/RTLCoder-v1.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/RTLCoder-v1.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
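As a quick start, here is a minimal sketch using `llama-cpp-python` to run one of the files listed below; it assumes a recent `llama-cpp-python` and `huggingface_hub` are installed, and the Q4_K_M file name is taken from the table:

```python
from llama_cpp import Llama

# Download and load the Q4_K_M quant directly from this repo.
llm = Llama.from_pretrained(
    repo_id="mradermacher/RTLCoder-v1.1-GGUF",
    filename="RTLCoder-v1.1.Q4_K_M.gguf",
    n_ctx=4096,
)

# Illustrative prompt only; adapt it to the upstream model's expected format.
out = llm("// Write a Verilog module for a 4-bit counter\n", max_tokens=256)
print(out["choices"][0]["text"])
```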
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/RTLCoder-v1.1-GGUF/resolve/main/RTLCoder-v1.1.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/RTLCoder-v1.1-GGUF/resolve/main/RTLCoder-v1.1.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/RTLCoder-v1.1-GGUF/resolve/main/RTLCoder-v1.1.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/RTLCoder-v1.1-GGUF/resolve/main/RTLCoder-v1.1.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/RTLCoder-v1.1-GGUF/resolve/main/RTLCoder-v1.1.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/RTLCoder-v1.1-GGUF/resolve/main/RTLCoder-v1.1.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/RTLCoder-v1.1-GGUF/resolve/main/RTLCoder-v1.1.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RTLCoder-v1.1-GGUF/resolve/main/RTLCoder-v1.1.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RTLCoder-v1.1-GGUF/resolve/main/RTLCoder-v1.1.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/RTLCoder-v1.1-GGUF/resolve/main/RTLCoder-v1.1.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/RTLCoder-v1.1-GGUF/resolve/main/RTLCoder-v1.1.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/RTLCoder-v1.1-GGUF/resolve/main/RTLCoder-v1.1.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/RTLCoder-v1.1-GGUF/resolve/main/RTLCoder-v1.1.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
TingChen-ppmc/whisper-small-shanghai-tts-vc-0.0-1.0
|
TingChen-ppmc
| 2024-11-10T23:49:31Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-08-05T17:04:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
featherless-ai-quants/Epiculous-Crimson_Dawn-v0.2-GGUF
|
featherless-ai-quants
| 2024-11-10T23:41:54Z | 6 | 0 | null |
[
"gguf",
"text-generation",
"base_model:Epiculous/Crimson_Dawn-v0.2",
"base_model:quantized:Epiculous/Crimson_Dawn-v0.2",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-10T23:26:13Z |
---
base_model: Epiculous/Crimson_Dawn-v0.2
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Epiculous/Crimson_Dawn-v0.2 GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [Epiculous-Crimson_Dawn-v0.2-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Epiculous-Crimson_Dawn-v0.2-GGUF/blob/main/Epiculous-Crimson_Dawn-v0.2-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [Epiculous-Crimson_Dawn-v0.2-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Epiculous-Crimson_Dawn-v0.2-GGUF/blob/main/Epiculous-Crimson_Dawn-v0.2-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [Epiculous-Crimson_Dawn-v0.2-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Epiculous-Crimson_Dawn-v0.2-GGUF/blob/main/Epiculous-Crimson_Dawn-v0.2-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [Epiculous-Crimson_Dawn-v0.2-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Epiculous-Crimson_Dawn-v0.2-GGUF/blob/main/Epiculous-Crimson_Dawn-v0.2-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [Epiculous-Crimson_Dawn-v0.2-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Epiculous-Crimson_Dawn-v0.2-GGUF/blob/main/Epiculous-Crimson_Dawn-v0.2-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [Epiculous-Crimson_Dawn-v0.2-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Epiculous-Crimson_Dawn-v0.2-GGUF/blob/main/Epiculous-Crimson_Dawn-v0.2-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [Epiculous-Crimson_Dawn-v0.2-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Epiculous-Crimson_Dawn-v0.2-GGUF/blob/main/Epiculous-Crimson_Dawn-v0.2-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [Epiculous-Crimson_Dawn-v0.2-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Epiculous-Crimson_Dawn-v0.2-GGUF/blob/main/Epiculous-Crimson_Dawn-v0.2-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [Epiculous-Crimson_Dawn-v0.2-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Epiculous-Crimson_Dawn-v0.2-GGUF/blob/main/Epiculous-Crimson_Dawn-v0.2-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [Epiculous-Crimson_Dawn-v0.2-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Epiculous-Crimson_Dawn-v0.2-GGUF/blob/main/Epiculous-Crimson_Dawn-v0.2-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [Epiculous-Crimson_Dawn-v0.2-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Epiculous-Crimson_Dawn-v0.2-GGUF/blob/main/Epiculous-Crimson_Dawn-v0.2-Q8_0.gguf) | 12419.10 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
mradermacher/Qwen2.5-7B-Instruct-Ja-SFT-GGUF
|
mradermacher
| 2024-11-10T23:39:14Z | 51 | 1 |
transformers
|
[
"transformers",
"gguf",
"ja",
"en",
"base_model:jaeyong2/Qwen2.5-7B-Instruct-Ja-SFT",
"base_model:quantized:jaeyong2/Qwen2.5-7B-Instruct-Ja-SFT",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-10T21:03:55Z |
---
base_model: jaeyong2/Qwen2.5-7B-Instruct-Ja-SFT
language:
- ja
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jaeyong2/Qwen2.5-7B-Instruct-Ja-SFT
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-Ja-SFT-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-Ja-SFT-GGUF/resolve/main/Qwen2.5-7B-Instruct-Ja-SFT.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-Ja-SFT-GGUF/resolve/main/Qwen2.5-7B-Instruct-Ja-SFT.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-Ja-SFT-GGUF/resolve/main/Qwen2.5-7B-Instruct-Ja-SFT.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-Ja-SFT-GGUF/resolve/main/Qwen2.5-7B-Instruct-Ja-SFT.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-Ja-SFT-GGUF/resolve/main/Qwen2.5-7B-Instruct-Ja-SFT.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-Ja-SFT-GGUF/resolve/main/Qwen2.5-7B-Instruct-Ja-SFT.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-Ja-SFT-GGUF/resolve/main/Qwen2.5-7B-Instruct-Ja-SFT.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-Ja-SFT-GGUF/resolve/main/Qwen2.5-7B-Instruct-Ja-SFT.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-Ja-SFT-GGUF/resolve/main/Qwen2.5-7B-Instruct-Ja-SFT.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-Ja-SFT-GGUF/resolve/main/Qwen2.5-7B-Instruct-Ja-SFT.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-Ja-SFT-GGUF/resolve/main/Qwen2.5-7B-Instruct-Ja-SFT.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-Ja-SFT-GGUF/resolve/main/Qwen2.5-7B-Instruct-Ja-SFT.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-Ja-SFT-GGUF/resolve/main/Qwen2.5-7B-Instruct-Ja-SFT.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
galihmuridan/bert-finetuned-ner
|
galihmuridan
| 2024-11-10T23:26:52Z | 107 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-10T21:45:20Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2296
- Precision: 0.5054
- Recall: 0.6759
- F1: 0.5783
- Accuracy: 0.9451
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an approximate `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
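The settings above map onto `TrainingArguments` roughly as follows; this is a hedged sketch, not the exact training script, and unlisted values such as `output_dir` are placeholders:

```python
from transformers import TrainingArguments

# Approximate reconstruction of the listed hyperparameters.
training_args = TrainingArguments(
    output_dir="bert-finetuned-ner",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    seed=42,
    lr_scheduler_type="linear",
)
```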
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 249 | 0.2173 | 0.4481 | 0.6481 | 0.5299 | 0.9389 |
| No log | 2.0 | 498 | 0.2152 | 0.5196 | 0.6543 | 0.5792 | 0.9472 |
| 0.183 | 3.0 | 747 | 0.2296 | 0.5054 | 0.6759 | 0.5783 | 0.9451 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
mav23/Maral-7B-alpha-1-GGUF
|
mav23
| 2024-11-10T23:23:50Z | 42 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"fa",
"dataset:sinarashidi/alpaca-persian",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-11-10T22:33:01Z |
---
license: mit
datasets:
- sinarashidi/alpaca-persian
language:
- en
- fa
library_name: transformers
---
# Maral 7B Alpha 1
<p align="center">
<img src="maral-7b-announce.png" width=256 height=256 />
</p>
## What is Maral?
_Maral_ is just a new large language model, specializing in the Persian language. This model is based on [Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1) and trained on an _Alpaca Persian_ dataset. It is one of the few efforts in the Persian-speaking scene to bring our language to a new life in the era of AI.
Also, since Maral is based on Mistral, it's capable of producing English answers as well.
### What does "Maral" mean?
Maral is the Persian name of the [Red Deer](https://en.wikipedia.org/wiki/Red_deer), a species of deer native to Iran. The name was chosen for quite a few reasons: one is the environmental concerns we have, and another is that, since it's a Persian LLM made by Iranian people, it deserves an Iranian name.
## Inference
### Prompt Format
This model requires _Guanaco_ format, which is like this:
```
### Human: <prompt>
### Assistant: <answer>
```
So in your code, you may write prompts like this:
```python
prompt = "Ψ―Ψ± Ψ³Ψ§Ω Ϋ±ΫΉΫΉΫΆ ΪΩ Ϊ©Ψ³Ϋ Ψ±ΫΫΨ³ Ψ¬Ω
ΩΩΨ± Ψ’Ω
Ψ±ΫΪ©Ψ§ Ψ¨ΩΨ―Ψ"
prompt = f"### Human:{prompt}\n### Assistant:"
```
More information about this is in the inference sections below.
### 4 bit Quantization
If you want to use 4 bit quantization, we have a PEFT for you [here](https://huggingface.co/MaralGPT/MaralGPT-Mistral-7B-v-0-1). Also, you can find _Google Colab_ notebooks [here](https://github.com/prp-e/maralgpt).
### Installing Libraries
```pip install transformers accelerate bitsandbytes```
_NOTE_: the `bitsandbytes` library is only needed for the 8-bit version; otherwise, it's not necessary.
### Inference on a big GPU
If you have a big enough GPU, like an A100, in your possession, this code is for you.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
import torch
model_name_or_id = "MaralGPT/Maral-7B-alpha-1"
model = AutoModelForCausalLM.from_pretrained(model_name_or_id, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_id)
prompt = "Ψ―Ψ± Ψ³Ψ§Ω Ϋ±ΫΉΫΉΫΆ ΪΩ Ϊ©Ψ³Ϋ Ψ±ΫΫΨ³ Ψ¬Ω
ΩΩΨ± Ψ’Ω
Ψ±ΫΪ©Ψ§ Ψ¨ΩΨ―Ψ"
prompt = f"### Human:{prompt}\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
generation_config = GenerationConfig(
do_sample=True,
top_k=1,
temperature=0.5,
max_new_tokens=300,
pad_token_id=tokenizer.eos_token_id
)
outputs = model.generate(**inputs, generation_config=generation_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Inference on a small GPU (Consumer Hardware/Free Colab)
The code is pretty much the same as above, but with a slight difference.
* Make sure `bitsandbytes` is installed correctly.
* Your model loading must be `model = AutoModelForCausalLM.from_pretrained(model_name_or_id, load_in_8bit=True, torch_dtype=torch.bfloat16, device_map="auto")`
On the _free version_ of Google Colab, you may face RAM problems; using `low_cpu_mem_usage=True` when loading the model should help.
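Putting those two changes together, here is a minimal sketch of the loading call for constrained hardware (the prompt handling and generation code are the same as in the example above):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name_or_id = "MaralGPT/Maral-7B-alpha-1"

# 8-bit loading via bitsandbytes, with reduced CPU RAM usage while loading.
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_id,
    load_in_8bit=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    low_cpu_mem_usage=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_id)
```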
## Known Issues
* The model produces GPT-3.5-level answers in terms of grammar (especially in Persian) but is prone to severe hallucinations. This problem can be addressed with a better dataset and better training procedures (such as DPO).
* Related to the previous issue, the model can also generate misinformed answers, especially when dealing with _reasoning_ problems in Persian.
* The model is huge, so it requires a lot of resources in order to work correctly. However, we may provide _GPTQ_ or _GGUF_ versions as well.
* The prompt format works and proves our concept of an _instruction-following_ LLM, but since we haven't changed `eos_token` and `bos_token` to our own, you may see unnecessary information being generated by the model.
* Related to the previous issue, the model can repeat itself. To work around this _temporarily_, keep the temperature below 1; according to our tests, somewhere between 0.5 and 0.7 is a sweet spot.
## Our Team
* Muhammadreza Haghiri ([Website](https://haghiri75.com/en) - [Github](https://github.com/prp-e) - [LinkedIn](https://www.linkedin.com/in/muhammadreza-haghiri-1761325b))
* Mahi Mohrechi ([Website](https://mohrechi-portfolio.vercel.app/) - [Github](https://github.com/f-mohrechi) - [LinkedIn](https://www.linkedin.com/in/faeze-mohrechi/))
## Special Thanks
* Mistral Team for providing the best open source base model ever.
* _Sina Rashidi_, who translated Alpaca dataset to Persian.
* [Jupyto](https://jupyto.com) team for providing our infrastructure.
|
mradermacher/Qwen2.5-7B-Instruct-SEALONG-GGUF
|
mradermacher
| 2024-11-10T23:20:10Z | 62 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Siheng99/Qwen2.5-7B-Instruct-SEALONG",
"base_model:quantized:Siheng99/Qwen2.5-7B-Instruct-SEALONG",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-10T20:48:24Z |
---
base_model: Siheng99/Qwen2.5-7B-Instruct-SEALONG
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Siheng99/Qwen2.5-7B-Instruct-SEALONG
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Instruct-SEALONG-GGUF/resolve/main/Qwen2.5-7B-Instruct-SEALONG.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/RuDolph-Hermes-7B-i1-GGUF
|
mradermacher
| 2024-11-10T23:19:12Z | 368 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:theprint/RuDolph-Hermes-7B",
"base_model:quantized:theprint/RuDolph-Hermes-7B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-10T20:28:36Z |
---
base_model: theprint/RuDolph-Hermes-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/theprint/RuDolph-Hermes-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/RuDolph-Hermes-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/RuDolph-Hermes-7B-i1-GGUF/resolve/main/RuDolph-Hermes-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/RuDolph-Hermes-7B-i1-GGUF/resolve/main/RuDolph-Hermes-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/RuDolph-Hermes-7B-i1-GGUF/resolve/main/RuDolph-Hermes-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/RuDolph-Hermes-7B-i1-GGUF/resolve/main/RuDolph-Hermes-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/RuDolph-Hermes-7B-i1-GGUF/resolve/main/RuDolph-Hermes-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/RuDolph-Hermes-7B-i1-GGUF/resolve/main/RuDolph-Hermes-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/RuDolph-Hermes-7B-i1-GGUF/resolve/main/RuDolph-Hermes-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/RuDolph-Hermes-7B-i1-GGUF/resolve/main/RuDolph-Hermes-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/RuDolph-Hermes-7B-i1-GGUF/resolve/main/RuDolph-Hermes-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/RuDolph-Hermes-7B-i1-GGUF/resolve/main/RuDolph-Hermes-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/RuDolph-Hermes-7B-i1-GGUF/resolve/main/RuDolph-Hermes-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/RuDolph-Hermes-7B-i1-GGUF/resolve/main/RuDolph-Hermes-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/RuDolph-Hermes-7B-i1-GGUF/resolve/main/RuDolph-Hermes-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/RuDolph-Hermes-7B-i1-GGUF/resolve/main/RuDolph-Hermes-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/RuDolph-Hermes-7B-i1-GGUF/resolve/main/RuDolph-Hermes-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/RuDolph-Hermes-7B-i1-GGUF/resolve/main/RuDolph-Hermes-7B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/RuDolph-Hermes-7B-i1-GGUF/resolve/main/RuDolph-Hermes-7B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/RuDolph-Hermes-7B-i1-GGUF/resolve/main/RuDolph-Hermes-7B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/RuDolph-Hermes-7B-i1-GGUF/resolve/main/RuDolph-Hermes-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/RuDolph-Hermes-7B-i1-GGUF/resolve/main/RuDolph-Hermes-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/RuDolph-Hermes-7B-i1-GGUF/resolve/main/RuDolph-Hermes-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RuDolph-Hermes-7B-i1-GGUF/resolve/main/RuDolph-Hermes-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/RuDolph-Hermes-7B-i1-GGUF/resolve/main/RuDolph-Hermes-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/RuDolph-Hermes-7B-i1-GGUF/resolve/main/RuDolph-Hermes-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/orca_mini_v3_13b-GGUF
|
mradermacher
| 2024-11-10T23:03:18Z | 34 | 0 |
transformers
|
[
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"psmathur/orca_mini_v3_13b",
"garage-bAInd/Platypus2-13B",
"WizardLM/WizardMath-13B-V1.0",
"en",
"base_model:Aelsharaby/orca_mini_v3_13b",
"base_model:quantized:Aelsharaby/orca_mini_v3_13b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-10T19:24:43Z |
---
base_model: Aelsharaby/orca_mini_v3_13b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- psmathur/orca_mini_v3_13b
- garage-bAInd/Platypus2-13B
- WizardLM/WizardMath-13B-V1.0
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Aelsharaby/orca_mini_v3_13b
<!-- provided-files -->
weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_13b-GGUF/resolve/main/orca_mini_v3_13b.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_13b-GGUF/resolve/main/orca_mini_v3_13b.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_13b-GGUF/resolve/main/orca_mini_v3_13b.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_13b-GGUF/resolve/main/orca_mini_v3_13b.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_13b-GGUF/resolve/main/orca_mini_v3_13b.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_13b-GGUF/resolve/main/orca_mini_v3_13b.Q4_0_4_4.gguf) | Q4_0_4_4 | 7.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_13b-GGUF/resolve/main/orca_mini_v3_13b.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_13b-GGUF/resolve/main/orca_mini_v3_13b.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_13b-GGUF/resolve/main/orca_mini_v3_13b.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_13b-GGUF/resolve/main/orca_mini_v3_13b.Q5_K_M.gguf) | Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_13b-GGUF/resolve/main/orca_mini_v3_13b.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/orca_mini_v3_13b-GGUF/resolve/main/orca_mini_v3_13b.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
chelsiksu/marian-finetuned-kde4-en-to-fr
|
chelsiksu
| 2024-11-10T22:50:08Z | 104 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-11-10T21:59:04Z |
---
library_name: transformers
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- translation
- generated_from_trainer
datasets:
- kde4
model-index:
- name: marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
tanquangduong/Qwen2.5-0.5B-Instruct-TinyStories
|
tanquangduong
| 2024-11-10T22:40:56Z | 140 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-0.5B",
"base_model:finetune:unsloth/Qwen2.5-0.5B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-10T22:25:01Z |
---
base_model: unsloth/Qwen2.5-0.5B
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
# Uploaded model
- **Developed by:** tanquangduong
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-0.5B
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bartowski/Qwen2.5-Coder-32B-Instruct-GGUF
|
bartowski
| 2024-11-10T22:39:25Z | 21,803 | 56 | null |
[
"gguf",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-Coder-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2024-11-06T19:20:14Z |
---
quantized_by: bartowski
pipeline_tag: text-generation
language:
- en
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct/blob/main/LICENSE
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
base_model: Qwen/Qwen2.5-Coder-32B-Instruct
license: apache-2.0
---
## Llamacpp imatrix Quantizations of Qwen2.5-Coder-32B-Instruct
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4014">b4014</a> for quantization.
Original model: https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
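As a rough sketch of how this template is used in practice (assuming the `llama-cli` binary from the llama.cpp release above; the binary name and flags may differ in your build), you can fill it in and pass it as the prompt directly:

```
# Sketch only: fills the template above with a concrete system/user turn.
# -ngl 99 offloads all layers to the GPU if they fit; omit it for CPU-only runs.
./llama-cli -m Qwen2.5-Coder-32B-Instruct-Q4_K_M.gguf -ngl 99 -n 512 \
  -p '<|im_start|>system
You are a helpful coding assistant.<|im_end|>
<|im_start|>user
Write a Python function that reverses a string.<|im_end|>
<|im_start|>assistant
'
```

Most chat frontends (including LM Studio) apply this template for you, so manual formatting is only needed for raw completion-style calls.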
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Qwen2.5-Coder-32B-Instruct-Q8_0.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q8_0.gguf) | Q8_0 | 34.82GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Qwen2.5-Coder-32B-Instruct-Q6_K_L.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q6_K_L.gguf) | Q6_K_L | 27.26GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Qwen2.5-Coder-32B-Instruct-Q6_K.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q6_K.gguf) | Q6_K | 26.89GB | false | Very high quality, near perfect, *recommended*. |
| [Qwen2.5-Coder-32B-Instruct-Q5_K_L.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q5_K_L.gguf) | Q5_K_L | 23.74GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Qwen2.5-Coder-32B-Instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q5_K_M.gguf) | Q5_K_M | 23.26GB | false | High quality, *recommended*. |
| [Qwen2.5-Coder-32B-Instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q5_K_S.gguf) | Q5_K_S | 22.64GB | false | High quality, *recommended*. |
| [Qwen2.5-Coder-32B-Instruct-Q4_K_L.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q4_K_L.gguf) | Q4_K_L | 20.43GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Qwen2.5-Coder-32B-Instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q4_K_M.gguf) | Q4_K_M | 19.85GB | false | Good quality, default size for most use cases, *recommended*. |
| [Qwen2.5-Coder-32B-Instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q4_K_S.gguf) | Q4_K_S | 18.78GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Qwen2.5-Coder-32B-Instruct-Q4_0.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q4_0.gguf) | Q4_0 | 18.71GB | false | Legacy format, generally not worth using over similarly sized formats |
| [Qwen2.5-Coder-32B-Instruct-IQ4_NL.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-IQ4_NL.gguf) | IQ4_NL | 18.68GB | false | Similar to IQ4_XS, but slightly larger. |
| [Qwen2.5-Coder-32B-Instruct-Q4_0_8_8.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q4_0_8_8.gguf) | Q4_0_8_8 | 18.64GB | false | Optimized for ARM inference. Requires 'sve' support (see link below). *Don't use on Mac or Windows*. |
| [Qwen2.5-Coder-32B-Instruct-Q4_0_4_8.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q4_0_4_8.gguf) | Q4_0_4_8 | 18.64GB | false | Optimized for ARM inference. Requires 'i8mm' support (see link below). *Don't use on Mac or Windows*. |
| [Qwen2.5-Coder-32B-Instruct-Q4_0_4_4.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q4_0_4_4.gguf) | Q4_0_4_4 | 18.64GB | false | Optimized for ARM inference. Should work well on all ARM chips, pick this if you're unsure. *Don't use on Mac or Windows*. |
| [Qwen2.5-Coder-32B-Instruct-Q3_K_XL.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q3_K_XL.gguf) | Q3_K_XL | 17.93GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Qwen2.5-Coder-32B-Instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-IQ4_XS.gguf) | IQ4_XS | 17.69GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Qwen2.5-Coder-32B-Instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q3_K_L.gguf) | Q3_K_L | 17.25GB | false | Lower quality but usable, good for low RAM availability. |
| [Qwen2.5-Coder-32B-Instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q3_K_M.gguf) | Q3_K_M | 15.94GB | false | Low quality. |
| [Qwen2.5-Coder-32B-Instruct-IQ3_M.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-IQ3_M.gguf) | IQ3_M | 14.81GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Qwen2.5-Coder-32B-Instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q3_K_S.gguf) | Q3_K_S | 14.39GB | false | Low quality, not recommended. |
| [Qwen2.5-Coder-32B-Instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-IQ3_XS.gguf) | IQ3_XS | 13.71GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Qwen2.5-Coder-32B-Instruct-Q2_K_L.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q2_K_L.gguf) | Q2_K_L | 13.07GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Qwen2.5-Coder-32B-Instruct-IQ3_XXS.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-IQ3_XXS.gguf) | IQ3_XXS | 12.84GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Qwen2.5-Coder-32B-Instruct-Q2_K.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-Q2_K.gguf) | Q2_K | 12.31GB | false | Very low quality but surprisingly usable. |
| [Qwen2.5-Coder-32B-Instruct-IQ2_M.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-IQ2_M.gguf) | IQ2_M | 11.26GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [Qwen2.5-Coder-32B-Instruct-IQ2_S.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-IQ2_S.gguf) | IQ2_S | 10.39GB | false | Low quality, uses SOTA techniques to be usable. |
| [Qwen2.5-Coder-32B-Instruct-IQ2_XS.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-IQ2_XS.gguf) | IQ2_XS | 9.96GB | false | Low quality, uses SOTA techniques to be usable. |
| [Qwen2.5-Coder-32B-Instruct-IQ2_XXS.gguf](https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/blob/main/Qwen2.5-Coder-32B-Instruct-IQ2_XXS.gguf) | IQ2_XXS | 9.03GB | false | Very low quality, uses SOTA techniques to be usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
Some say this improves the quality, others don't notice any difference. If you use these models, PLEASE COMMENT with your findings. I would like to know whether these quants are actually used and useful, so I don't keep uploading ones no one is using.
Thanks!
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Qwen2.5-Coder-32B-Instruct-GGUF --include "Qwen2.5-Coder-32B-Instruct-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Qwen2.5-Coder-32B-Instruct-GGUF --include "Qwen2.5-Coder-32B-Instruct-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (Qwen2.5-Coder-32B-Instruct-Q8_0) or download them all in place (./).
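These splits are typically produced with llama.cpp's gguf-split tool rather than plain byte splitting, so they should not be joined with `cat`. Recent llama.cpp builds can load the first part directly; alternatively, here is a hedged sketch of merging them back into one file (assuming the `llama-gguf-split` binary is available in your build, and with hypothetical part names):

```
# Sketch only: part names are hypothetical -- check the actual files you downloaded.
./llama-gguf-split --merge \
  Qwen2.5-Coder-32B-Instruct-Q8_0/Qwen2.5-Coder-32B-Instruct-Q8_0-00001-of-00002.gguf \
  Qwen2.5-Coder-32B-Instruct-Q8_0-merged.gguf
```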
## Q4_0_X_X
These are *NOT* for Metal (Apple) offloading, only ARM chips.
If you're using an ARM chip, the Q4_0_X_X quants will give you a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660)
To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).
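On a Linux ARM machine, one rough way to check is to look for the relevant feature flags in /proc/cpuinfo (a sketch; reported feature names vary by kernel and SoC):

```
# Look for 'i8mm' (Q4_0_4_8) or 'sve' (Q4_0_8_8); if neither shows up, use Q4_0_4_4.
grep -o -E 'i8mm|sve|asimddp' /proc/cpuinfo | sort -u
```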
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card, double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
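If the quant you settle on does not quite fit in VRAM, you can still split the work between GPU and CPU. A minimal sketch with llama.cpp's CLI (assumed binary name and flags as above; tune the offloaded layer count to your card):

```
# Offload roughly as many layers as fit in VRAM; the remainder runs on the CPU.
./llama-cli -m Qwen2.5-Coder-32B-Instruct-IQ3_M.gguf -ngl 40 -c 8192 \
  -p "Explain the trade-offs of IQ3_M versus Q4_K_M."
```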
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
oranne55/qualifier-model4-finetune-pretrained-transformer-for-long-inputs
|
oranne55
| 2024-11-10T22:34:23Z | 109 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-10T20:26:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Llama-3.1-8B-Instruct-SEALONG-i1-GGUF
|
mradermacher
| 2024-11-10T22:31:10Z | 8 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Siheng99/Llama-3.1-8B-Instruct-SEALONG",
"base_model:quantized:Siheng99/Llama-3.1-8B-Instruct-SEALONG",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-10T20:28:41Z |
---
base_model: Siheng99/Llama-3.1-8B-Instruct-SEALONG
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Siheng99/Llama-3.1-8B-Instruct-SEALONG
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-SEALONG-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-SEALONG-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-SEALONG.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-SEALONG-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-SEALONG.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-SEALONG-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-SEALONG.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-SEALONG-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-SEALONG.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-SEALONG-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-SEALONG.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-SEALONG-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-SEALONG.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-SEALONG-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-SEALONG.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-SEALONG-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-SEALONG.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-SEALONG-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-SEALONG.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-SEALONG-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-SEALONG.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-SEALONG-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-SEALONG.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-SEALONG-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-SEALONG.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-SEALONG-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-SEALONG.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-SEALONG-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-SEALONG.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-SEALONG-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-SEALONG.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-SEALONG-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-SEALONG.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-SEALONG-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-SEALONG.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.8 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-SEALONG-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-SEALONG.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.8 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-SEALONG-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-SEALONG.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-SEALONG-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-SEALONG.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-SEALONG-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-SEALONG.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-SEALONG-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-SEALONG.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-SEALONG-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-SEALONG.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-SEALONG-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-SEALONG.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
maesneako/FR_bkt_maesneako-gpt2-fr_orfeo-cid-paco-cheese_e3
|
maesneako
| 2024-11-10T22:28:39Z | 5 | 0 | null |
[
"tensorboard",
"safetensors",
"gpt2",
"generated_from_trainer",
"base_model:maesneako/gpt2-fr_orfeo-cid-paco-cheese_e3",
"base_model:finetune:maesneako/gpt2-fr_orfeo-cid-paco-cheese_e3",
"region:us"
] | null | 2024-11-10T22:12:06Z |
---
base_model: maesneako/gpt2-fr_orfeo-cid-paco-cheese_e3
tags:
- generated_from_trainer
model-index:
- name: FR_bkt_maesneako-gpt2-fr_orfeo-cid-paco-cheese_e3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FR_bkt_maesneako-gpt2-fr_orfeo-cid-paco-cheese_e3
This model is a fine-tuned version of [maesneako/gpt2-fr_orfeo-cid-paco-cheese_e3](https://huggingface.co/maesneako/gpt2-fr_orfeo-cid-paco-cheese_e3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5445
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8414 | 1.46 | 2000 | 3.7684 |
| 3.6555 | 2.91 | 4000 | 3.6330 |
| 3.5683 | 4.37 | 6000 | 3.5716 |
| 3.5228 | 5.82 | 8000 | 3.5445 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.4.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
featherless-ai-quants/nbeerbower-llama3.1-gutenberg-8B-GGUF
|
featherless-ai-quants
| 2024-11-10T22:28:13Z | 22 | 0 | null |
[
"gguf",
"text-generation",
"base_model:nbeerbower/llama3.1-gutenberg-8B",
"base_model:quantized:nbeerbower/llama3.1-gutenberg-8B",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-10T22:16:08Z |
---
base_model: nbeerbower/llama3.1-gutenberg-8B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# nbeerbower/llama3.1-gutenberg-8B GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [nbeerbower-llama3.1-gutenberg-8B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-llama3.1-gutenberg-8B-GGUF/blob/main/nbeerbower-llama3.1-gutenberg-8B-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [nbeerbower-llama3.1-gutenberg-8B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-llama3.1-gutenberg-8B-GGUF/blob/main/nbeerbower-llama3.1-gutenberg-8B-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [nbeerbower-llama3.1-gutenberg-8B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-llama3.1-gutenberg-8B-GGUF/blob/main/nbeerbower-llama3.1-gutenberg-8B-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [nbeerbower-llama3.1-gutenberg-8B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-llama3.1-gutenberg-8B-GGUF/blob/main/nbeerbower-llama3.1-gutenberg-8B-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [nbeerbower-llama3.1-gutenberg-8B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-llama3.1-gutenberg-8B-GGUF/blob/main/nbeerbower-llama3.1-gutenberg-8B-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [nbeerbower-llama3.1-gutenberg-8B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-llama3.1-gutenberg-8B-GGUF/blob/main/nbeerbower-llama3.1-gutenberg-8B-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [nbeerbower-llama3.1-gutenberg-8B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-llama3.1-gutenberg-8B-GGUF/blob/main/nbeerbower-llama3.1-gutenberg-8B-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [nbeerbower-llama3.1-gutenberg-8B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-llama3.1-gutenberg-8B-GGUF/blob/main/nbeerbower-llama3.1-gutenberg-8B-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [nbeerbower-llama3.1-gutenberg-8B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-llama3.1-gutenberg-8B-GGUF/blob/main/nbeerbower-llama3.1-gutenberg-8B-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [nbeerbower-llama3.1-gutenberg-8B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-llama3.1-gutenberg-8B-GGUF/blob/main/nbeerbower-llama3.1-gutenberg-8B-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [nbeerbower-llama3.1-gutenberg-8B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/nbeerbower-llama3.1-gutenberg-8B-GGUF/blob/main/nbeerbower-llama3.1-gutenberg-8B-Q8_0.gguf) | 8145.11 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
mradermacher/Qwen2.5-3B-Instruct-Id-SFT-GGUF
|
mradermacher
| 2024-11-10T22:16:08Z | 9 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"hi",
"base_model:jaeyong2/Qwen2.5-3B-Instruct-Id-SFT",
"base_model:quantized:jaeyong2/Qwen2.5-3B-Instruct-Id-SFT",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-10T20:16:05Z |
---
base_model: jaeyong2/Qwen2.5-3B-Instruct-Id-SFT
language:
- en
- hi
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jaeyong2/Qwen2.5-3B-Instruct-Id-SFT
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Id-SFT-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Id-SFT-GGUF/resolve/main/Qwen2.5-3B-Instruct-Id-SFT.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Id-SFT-GGUF/resolve/main/Qwen2.5-3B-Instruct-Id-SFT.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Id-SFT-GGUF/resolve/main/Qwen2.5-3B-Instruct-Id-SFT.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Id-SFT-GGUF/resolve/main/Qwen2.5-3B-Instruct-Id-SFT.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Id-SFT-GGUF/resolve/main/Qwen2.5-3B-Instruct-Id-SFT.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Id-SFT-GGUF/resolve/main/Qwen2.5-3B-Instruct-Id-SFT.Q4_0_4_4.gguf) | Q4_0_4_4 | 1.9 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Id-SFT-GGUF/resolve/main/Qwen2.5-3B-Instruct-Id-SFT.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Id-SFT-GGUF/resolve/main/Qwen2.5-3B-Instruct-Id-SFT.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Id-SFT-GGUF/resolve/main/Qwen2.5-3B-Instruct-Id-SFT.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Id-SFT-GGUF/resolve/main/Qwen2.5-3B-Instruct-Id-SFT.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Id-SFT-GGUF/resolve/main/Qwen2.5-3B-Instruct-Id-SFT.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Id-SFT-GGUF/resolve/main/Qwen2.5-3B-Instruct-Id-SFT.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Id-SFT-GGUF/resolve/main/Qwen2.5-3B-Instruct-Id-SFT.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Qwen2.5-3B-Instruct-Hi-SFT-i1-GGUF
|
mradermacher
| 2024-11-10T22:05:12Z | 20 | 0 |
transformers
|
[
"transformers",
"gguf",
"hi",
"en",
"base_model:jaeyong2/Qwen2.5-3B-Instruct-Hi-SFT",
"base_model:quantized:jaeyong2/Qwen2.5-3B-Instruct-Hi-SFT",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-10T20:18:25Z |
---
base_model: jaeyong2/Qwen2.5-3B-Instruct-Hi-SFT
language:
- hi
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jaeyong2/Qwen2.5-3B-Instruct-Hi-SFT
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Hi-SFT-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Hi-SFT-i1-GGUF/resolve/main/Qwen2.5-3B-Instruct-Hi-SFT.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Hi-SFT-i1-GGUF/resolve/main/Qwen2.5-3B-Instruct-Hi-SFT.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Hi-SFT-i1-GGUF/resolve/main/Qwen2.5-3B-Instruct-Hi-SFT.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Hi-SFT-i1-GGUF/resolve/main/Qwen2.5-3B-Instruct-Hi-SFT.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Hi-SFT-i1-GGUF/resolve/main/Qwen2.5-3B-Instruct-Hi-SFT.i1-IQ2_S.gguf) | i1-IQ2_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Hi-SFT-i1-GGUF/resolve/main/Qwen2.5-3B-Instruct-Hi-SFT.i1-IQ2_M.gguf) | i1-IQ2_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Hi-SFT-i1-GGUF/resolve/main/Qwen2.5-3B-Instruct-Hi-SFT.i1-Q2_K.gguf) | i1-Q2_K | 1.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Hi-SFT-i1-GGUF/resolve/main/Qwen2.5-3B-Instruct-Hi-SFT.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Hi-SFT-i1-GGUF/resolve/main/Qwen2.5-3B-Instruct-Hi-SFT.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Hi-SFT-i1-GGUF/resolve/main/Qwen2.5-3B-Instruct-Hi-SFT.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Hi-SFT-i1-GGUF/resolve/main/Qwen2.5-3B-Instruct-Hi-SFT.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Hi-SFT-i1-GGUF/resolve/main/Qwen2.5-3B-Instruct-Hi-SFT.i1-IQ3_M.gguf) | i1-IQ3_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Hi-SFT-i1-GGUF/resolve/main/Qwen2.5-3B-Instruct-Hi-SFT.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Hi-SFT-i1-GGUF/resolve/main/Qwen2.5-3B-Instruct-Hi-SFT.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Hi-SFT-i1-GGUF/resolve/main/Qwen2.5-3B-Instruct-Hi-SFT.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Hi-SFT-i1-GGUF/resolve/main/Qwen2.5-3B-Instruct-Hi-SFT.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 1.9 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Hi-SFT-i1-GGUF/resolve/main/Qwen2.5-3B-Instruct-Hi-SFT.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 1.9 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Hi-SFT-i1-GGUF/resolve/main/Qwen2.5-3B-Instruct-Hi-SFT.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 1.9 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Hi-SFT-i1-GGUF/resolve/main/Qwen2.5-3B-Instruct-Hi-SFT.i1-Q4_0.gguf) | i1-Q4_0 | 1.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Hi-SFT-i1-GGUF/resolve/main/Qwen2.5-3B-Instruct-Hi-SFT.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Hi-SFT-i1-GGUF/resolve/main/Qwen2.5-3B-Instruct-Hi-SFT.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Hi-SFT-i1-GGUF/resolve/main/Qwen2.5-3B-Instruct-Hi-SFT.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Hi-SFT-i1-GGUF/resolve/main/Qwen2.5-3B-Instruct-Hi-SFT.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Hi-SFT-i1-GGUF/resolve/main/Qwen2.5-3B-Instruct-Hi-SFT.i1-Q6_K.gguf) | i1-Q6_K | 2.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
featherless-ai-quants/picAIso-MIX1-GGUF
|
featherless-ai-quants
| 2024-11-10T21:59:22Z | 7 | 0 | null |
[
"gguf",
"text-generation",
"base_model:picAIso/MIX1",
"base_model:quantized:picAIso/MIX1",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-10T21:41:04Z |
---
base_model: picAIso/MIX1
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# picAIso/MIX1 GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [picAIso-MIX1-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/picAIso-MIX1-GGUF/blob/main/picAIso-MIX1-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [picAIso-MIX1-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/picAIso-MIX1-GGUF/blob/main/picAIso-MIX1-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [picAIso-MIX1-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/picAIso-MIX1-GGUF/blob/main/picAIso-MIX1-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [picAIso-MIX1-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/picAIso-MIX1-GGUF/blob/main/picAIso-MIX1-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [picAIso-MIX1-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/picAIso-MIX1-GGUF/blob/main/picAIso-MIX1-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [picAIso-MIX1-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/picAIso-MIX1-GGUF/blob/main/picAIso-MIX1-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [picAIso-MIX1-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/picAIso-MIX1-GGUF/blob/main/picAIso-MIX1-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [picAIso-MIX1-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/picAIso-MIX1-GGUF/blob/main/picAIso-MIX1-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [picAIso-MIX1-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/picAIso-MIX1-GGUF/blob/main/picAIso-MIX1-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [picAIso-MIX1-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/picAIso-MIX1-GGUF/blob/main/picAIso-MIX1-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [picAIso-MIX1-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/picAIso-MIX1-GGUF/blob/main/picAIso-MIX1-Q8_0.gguf) | 8145.11 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
dcrowleymunster/donalDistiLBERTSunderland6Epoch
|
dcrowleymunster
| 2024-11-10T21:51:52Z | 116 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-11-10T01:10:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
GitBag/reasoning_rebel_iter_2_1731041913_eta_1e3_lr_3e-7_1731243878
|
GitBag
| 2024-11-10T21:45:54Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-10T21:40:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
thiagoads/llama-legalpt
|
thiagoads
| 2024-11-10T21:38:23Z | 144 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-10T21:34:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
VLKVLK/media-file-recognizer
|
VLKVLK
| 2024-11-10T21:31:23Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-09T18:25:28Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
eriwik/speecht5_finetuned_voxpopuli_nl
|
eriwik
| 2024-11-10T21:16:23Z | 76 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2024-11-10T17:40:26Z |
---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4512
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
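For readers mapping these values onto code, the following is a minimal, hedged sketch (not the original training script) of how the listed settings translate to `Seq2SeqTrainingArguments` in π€ Transformers; the output directory is a placeholder and the model/dataloader wiring is omitted.

```python
from transformers import Seq2SeqTrainingArguments

# Illustrative only: mirrors the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned_voxpopuli_nl",  # placeholder
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,   # 4 x 8 = effective batch size 32
    learning_rate=1e-5,
    warmup_steps=500,
    max_steps=4000,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
)
```

The optimizer values listed above (Adam with betas=(0.9,0.999) and epsilon=1e-08) match the Trainer defaults, so they need no explicit setting.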
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.5187 | 3.8741 | 1000 | 0.4767 |
| 0.4995 | 7.7482 | 2000 | 0.4606 |
| 0.4944 | 11.6223 | 3000 | 0.4528 |
| 0.4882 | 15.4964 | 4000 | 0.4512 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
akshitha-k/all-MiniLM-L6-v2-stsb
|
akshitha-k
| 2024-11-10T21:14:29Z | 8 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:5749",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-11-10T21:14:22Z |
---
base_model: sentence-transformers/all-MiniLM-L6-v2
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5749
- loss:CosineSimilarityLoss
widget:
- source_sentence: A girl is styling her hair.
sentences:
- China's online population rises to 618 mln
- A girl is filing her nails.
- A woman is slicing a pepper.
- source_sentence: Australian among four on plane missing in Indonesia
sentences:
- Woman dies in Co Cork house fire
- '''No plans'' to resettle Syrian refugees in the UK'
- Iranian painter Mansoureh Hosseini dies
- source_sentence: West hails Syria opposition vote to join peace talks
sentences:
- Asteroid passes Earth in fly-by
- GlaxoSmithKline, the UK drugmaker, has said it would cut off supplies to Canadian
stores shipping drugs to the US.
- Syrian opposition to name delegation for talks
- source_sentence: Obama signs up for Obamacare
sentences:
- Americans scramble to sign up for Obamacare by deadline
- A girl wearing a red blouse riding a brown horse.
- The study also found that skin cancer nearly tripled in Norway and Sweden since
the 1950s.
- source_sentence: A clear plastic chair in front of a bookcase.
sentences:
- A woman with a white horse.
- a clear plastic chair in front of book shelves.
- A herd of caribou are crossing a road.
---
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision fa97f6e7cb1a59073dff9e6b13e2715cf7475ac9 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the π€ Hub
model = SentenceTransformer("akshitha-k/all-MiniLM-L6-v2-stsb")
# Run inference
sentences = [
'A clear plastic chair in front of a bookcase.',
'a clear plastic chair in front of book shelves.',
'A woman with a white horse.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 5,749 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 14.34 tokens</li><li>max: 44 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 14.31 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.54</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:------------------------------------------------------------------------------|:--------------------------------------------------------------|:-----------------|
| <code>U.N. rights chief presses Egypt on Mursi detention</code> | <code>UN Rights Chief Presses Egypt on Morsi Detention</code> | <code>1.0</code> |
| <code>Someone is slicing an onion.</code> | <code>Someoen is peeling a potato.</code> | <code>0.2</code> |
| <code>A young boy in a white dress shirt is playing on a grassy plain.</code> | <code>A woman is getting her hair done at a salon.</code> | <code>0.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
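As an illustration only (not the original training code for this checkpoint), data in this `(sentence_0, sentence_1, label)` format can be paired with `CosineSimilarityLoss` through the classic Sentence Transformers fit API; the two pairs below are copied from the sample table above, and the batch size and epoch count mirror the hyperparameters listed in the next section.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Pairs and labels taken from the sample table above
train_examples = [
    InputExample(texts=["U.N. rights chief presses Egypt on Mursi detention",
                        "UN Rights Chief Presses Egypt on Morsi Detention"], label=1.0),
    InputExample(texts=["A young boy in a white dress shirt is playing on a grassy plain.",
                        "A woman is getting her hair done at a salon."], label=0.0),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)  # MSE between cosine similarity and label

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=20)
```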
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 20
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 20
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:-------:|:----:|:-------------:|
| 1.3889 | 500 | 0.0295 |
| 2.7778 | 1000 | 0.0242 |
| 4.1667 | 1500 | 0.0218 |
| 5.5556 | 2000 | 0.0198 |
| 6.9444 | 2500 | 0.0175 |
| 8.3333 | 3000 | 0.0157 |
| 9.7222 | 3500 | 0.0135 |
| 11.1111 | 4000 | 0.0119 |
| 12.5 | 4500 | 0.0104 |
| 13.8889 | 5000 | 0.0088 |
| 15.2778 | 5500 | 0.0074 |
| 16.6667 | 6000 | 0.0063 |
| 18.0556 | 6500 | 0.0056 |
| 19.4444 | 7000 | 0.0049 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.1
- Transformers: 4.44.2
- PyTorch: 2.5.0+cu121
- Accelerate: 0.34.2
- Datasets: 3.1.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Stable-X/yoso-normal-v1-5
|
Stable-X
| 2024-11-10T21:05:02Z | 4,617 | 2 |
diffusers
|
[
"diffusers",
"image-to-image",
"license:apache-2.0",
"diffusers:YOSONormalsPipeline",
"region:us"
] |
image-to-image
| 2024-11-07T22:30:16Z |
---
library_name: diffusers
pipeline_tag: image-to-image
license: apache-2.0
---
# Model Card for StableNormal
This repository contains the weights of StableNormal: Reducing Diffusion Variance for Stable and Sharp Normal
## Usage
See the GitHub repository https://github.com/Stable-X/StableNormal for installation instructions.
The model can then be used as follows:
```python
import torch
from PIL import Image
# Load an image
input_image = Image.open("path/to/your/image.jpg")
# Create predictor instance
predictor = torch.hub.load("hugoycj/StableNormal", "StableNormal_turbo", trust_repo=True, yoso_version='yoso-normal-v1-5')
# Generate a normal map; data_type controls how the input is masked
normal_map = predictor(input_image, data_type="object")   # masks out the background using the alpha channel if available, otherwise BiRefNet
normal_map = predictor(input_image, data_type="outdoor")  # uses Mask2Former to mask out sky and plants
normal_map = predictor(input_image, data_type="indoor")   # applies no masking
# Apply the model to the image with the default settings
normal_image = predictor(input_image)
# Save or display the result
normal_image.save("output/normal_map.png")
```
|
JuniperChinenye/c1
|
JuniperChinenye
| 2024-11-10T21:02:02Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-10T20:59:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Hermes-Instruct-7B-217K-GGUF
|
mradermacher
| 2024-11-10T20:48:24Z | 9 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:lodrick-the-lafted/Hermes-217K",
"base_model:lodrick-the-lafted/Hermes-Instruct-7B-217K",
"base_model:quantized:lodrick-the-lafted/Hermes-Instruct-7B-217K",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-08T12:58:03Z |
---
base_model: lodrick-the-lafted/Hermes-Instruct-7B-217K
datasets:
- lodrick-the-lafted/Hermes-217K
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/lodrick-the-lafted/Hermes-Instruct-7B-217K
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Hermes-Instruct-7B-217K-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
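As a hedged illustration that is not part of the original card, a single-file quant from the table below can be loaded locally with the `llama-cpp-python` bindings; the file name and parameters here are examples, not recommendations.

```python
from llama_cpp import Llama

# Example only: pick any quant from the "Provided Quants" table below.
llm = Llama(
    model_path="Hermes-Instruct-7B-217K.Q4_K_M.gguf",
    n_ctx=4096,   # context length; adjust to your memory budget
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```

Multi-part downloads may need to be concatenated into a single `.gguf` file first, as described in the READMEs linked above.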
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Hermes-Instruct-7B-217K-GGUF/resolve/main/Hermes-Instruct-7B-217K.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-Instruct-7B-217K-GGUF/resolve/main/Hermes-Instruct-7B-217K.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-Instruct-7B-217K-GGUF/resolve/main/Hermes-Instruct-7B-217K.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Hermes-Instruct-7B-217K-GGUF/resolve/main/Hermes-Instruct-7B-217K.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-Instruct-7B-217K-GGUF/resolve/main/Hermes-Instruct-7B-217K.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-Instruct-7B-217K-GGUF/resolve/main/Hermes-Instruct-7B-217K.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Hermes-Instruct-7B-217K-GGUF/resolve/main/Hermes-Instruct-7B-217K.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hermes-Instruct-7B-217K-GGUF/resolve/main/Hermes-Instruct-7B-217K.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hermes-Instruct-7B-217K-GGUF/resolve/main/Hermes-Instruct-7B-217K.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-Instruct-7B-217K-GGUF/resolve/main/Hermes-Instruct-7B-217K.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Hermes-Instruct-7B-217K-GGUF/resolve/main/Hermes-Instruct-7B-217K.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Hermes-Instruct-7B-217K-GGUF/resolve/main/Hermes-Instruct-7B-217K.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Hermes-Instruct-7B-217K-GGUF/resolve/main/Hermes-Instruct-7B-217K.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/mirrorqwen2.5-0.5b-SimPO-0-GGUF
|
mradermacher
| 2024-11-10T20:39:13Z | 29 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"cpo",
"unsloth",
"en",
"base_model:rawsh/mirrorqwen2.5-0.5b-SimPO-0",
"base_model:quantized:rawsh/mirrorqwen2.5-0.5b-SimPO-0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-10T20:36:45Z |
---
base_model: rawsh/mirrorqwen2.5-0.5b-SimPO-0
language:
- en
library_name: transformers
model_name: mirrorqwen2.5-0.5b-SimPO-0
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- cpo
- unsloth
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/rawsh/mirrorqwen2.5-0.5b-SimPO-0
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mirrorqwen2.5-0.5b-SimPO-0-GGUF/resolve/main/mirrorqwen2.5-0.5b-SimPO-0.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/mirrorqwen2.5-0.5b-SimPO-0-GGUF/resolve/main/mirrorqwen2.5-0.5b-SimPO-0.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/mirrorqwen2.5-0.5b-SimPO-0-GGUF/resolve/main/mirrorqwen2.5-0.5b-SimPO-0.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/mirrorqwen2.5-0.5b-SimPO-0-GGUF/resolve/main/mirrorqwen2.5-0.5b-SimPO-0.Q4_0_4_4.gguf) | Q4_0_4_4 | 0.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/mirrorqwen2.5-0.5b-SimPO-0-GGUF/resolve/main/mirrorqwen2.5-0.5b-SimPO-0.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mirrorqwen2.5-0.5b-SimPO-0-GGUF/resolve/main/mirrorqwen2.5-0.5b-SimPO-0.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/mirrorqwen2.5-0.5b-SimPO-0-GGUF/resolve/main/mirrorqwen2.5-0.5b-SimPO-0.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mirrorqwen2.5-0.5b-SimPO-0-GGUF/resolve/main/mirrorqwen2.5-0.5b-SimPO-0.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mirrorqwen2.5-0.5b-SimPO-0-GGUF/resolve/main/mirrorqwen2.5-0.5b-SimPO-0.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/mirrorqwen2.5-0.5b-SimPO-0-GGUF/resolve/main/mirrorqwen2.5-0.5b-SimPO-0.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/mirrorqwen2.5-0.5b-SimPO-0-GGUF/resolve/main/mirrorqwen2.5-0.5b-SimPO-0.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/mirrorqwen2.5-0.5b-SimPO-0-GGUF/resolve/main/mirrorqwen2.5-0.5b-SimPO-0.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/mirrorqwen2.5-0.5b-SimPO-0-GGUF/resolve/main/mirrorqwen2.5-0.5b-SimPO-0.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/tinyllama-colorist-v0-GGUF
|
mradermacher
| 2024-11-10T20:38:21Z | 7 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:tmickleydoyle/tinyllama-colorist-v0",
"base_model:quantized:tmickleydoyle/tinyllama-colorist-v0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-10T20:35:53Z |
---
base_model: tmickleydoyle/tinyllama-colorist-v0
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/tmickleydoyle/tinyllama-colorist-v0
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/tinyllama-colorist-v0-GGUF/resolve/main/tinyllama-colorist-v0.Q2_K.gguf) | Q2_K | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/tinyllama-colorist-v0-GGUF/resolve/main/tinyllama-colorist-v0.Q3_K_S.gguf) | Q3_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/tinyllama-colorist-v0-GGUF/resolve/main/tinyllama-colorist-v0.Q3_K_M.gguf) | Q3_K_M | 0.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/tinyllama-colorist-v0-GGUF/resolve/main/tinyllama-colorist-v0.Q3_K_L.gguf) | Q3_K_L | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/tinyllama-colorist-v0-GGUF/resolve/main/tinyllama-colorist-v0.IQ4_XS.gguf) | IQ4_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/tinyllama-colorist-v0-GGUF/resolve/main/tinyllama-colorist-v0.Q4_0_4_4.gguf) | Q4_0_4_4 | 0.7 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/tinyllama-colorist-v0-GGUF/resolve/main/tinyllama-colorist-v0.Q4_K_S.gguf) | Q4_K_S | 0.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/tinyllama-colorist-v0-GGUF/resolve/main/tinyllama-colorist-v0.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/tinyllama-colorist-v0-GGUF/resolve/main/tinyllama-colorist-v0.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/tinyllama-colorist-v0-GGUF/resolve/main/tinyllama-colorist-v0.Q5_K_M.gguf) | Q5_K_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/tinyllama-colorist-v0-GGUF/resolve/main/tinyllama-colorist-v0.Q6_K.gguf) | Q6_K | 1.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/tinyllama-colorist-v0-GGUF/resolve/main/tinyllama-colorist-v0.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/tinyllama-colorist-v0-GGUF/resolve/main/tinyllama-colorist-v0.f16.gguf) | f16 | 2.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
youssef14582/t5-small-finetuned-xsum
|
youssef14582
| 2024-11-10T20:36:13Z | 122 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-11-10T18:02:49Z |
---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
config: default
split: validation
args: default
metrics:
- name: Rouge1
type: rouge
value: 27.4606
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5400
- Rouge1: 27.4606
- Rouge2: 7.3882
- Rougel: 21.5683
- Rougelsum: 21.5769
- Gen Len: 18.8013
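As an illustrative sketch (not part of the automatically generated card), the checkpoint can be exercised with the `transformers` summarization pipeline; the input text below is a placeholder.

```python
from transformers import pipeline

# Illustrative usage of the fine-tuned checkpoint
summarizer = pipeline("summarization", model="youssef14582/t5-small-finetuned-xsum")

article = (
    "The full text of a news article goes here; XSum targets single-sentence, "
    "highly abstractive summaries of BBC articles."
)
print(summarizer(article, max_length=30, min_length=5, do_sample=False)[0]["summary_text"])
```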
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.8393 | 1.0 | 2500 | 2.5833 | 26.7701 | 6.8545 | 20.9017 | 20.9024 | 18.8193 |
| 2.7625 | 2.0 | 5000 | 2.5494 | 27.2012 | 7.1774 | 21.2519 | 21.2529 | 18.8019 |
| 2.7673 | 3.0 | 7500 | 2.5400 | 27.4606 | 7.3882 | 21.5683 | 21.5769 | 18.8013 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
featherless-ai-quants/GalrionSoftworks-MN-LooseCannon-12B-v1-GGUF
|
featherless-ai-quants
| 2024-11-10T20:33:10Z | 5 | 0 | null |
[
"gguf",
"text-generation",
"base_model:GalrionSoftworks/MN-LooseCannon-12B-v1",
"base_model:quantized:GalrionSoftworks/MN-LooseCannon-12B-v1",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-10T20:14:13Z |
---
base_model: GalrionSoftworks/MN-LooseCannon-12B-v1
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# GalrionSoftworks/MN-LooseCannon-12B-v1 GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [GalrionSoftworks-MN-LooseCannon-12B-v1-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-MN-LooseCannon-12B-v1-GGUF/blob/main/GalrionSoftworks-MN-LooseCannon-12B-v1-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [GalrionSoftworks-MN-LooseCannon-12B-v1-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-MN-LooseCannon-12B-v1-GGUF/blob/main/GalrionSoftworks-MN-LooseCannon-12B-v1-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [GalrionSoftworks-MN-LooseCannon-12B-v1-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-MN-LooseCannon-12B-v1-GGUF/blob/main/GalrionSoftworks-MN-LooseCannon-12B-v1-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [GalrionSoftworks-MN-LooseCannon-12B-v1-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-MN-LooseCannon-12B-v1-GGUF/blob/main/GalrionSoftworks-MN-LooseCannon-12B-v1-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [GalrionSoftworks-MN-LooseCannon-12B-v1-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-MN-LooseCannon-12B-v1-GGUF/blob/main/GalrionSoftworks-MN-LooseCannon-12B-v1-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [GalrionSoftworks-MN-LooseCannon-12B-v1-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-MN-LooseCannon-12B-v1-GGUF/blob/main/GalrionSoftworks-MN-LooseCannon-12B-v1-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [GalrionSoftworks-MN-LooseCannon-12B-v1-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-MN-LooseCannon-12B-v1-GGUF/blob/main/GalrionSoftworks-MN-LooseCannon-12B-v1-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [GalrionSoftworks-MN-LooseCannon-12B-v1-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-MN-LooseCannon-12B-v1-GGUF/blob/main/GalrionSoftworks-MN-LooseCannon-12B-v1-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [GalrionSoftworks-MN-LooseCannon-12B-v1-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-MN-LooseCannon-12B-v1-GGUF/blob/main/GalrionSoftworks-MN-LooseCannon-12B-v1-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [GalrionSoftworks-MN-LooseCannon-12B-v1-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-MN-LooseCannon-12B-v1-GGUF/blob/main/GalrionSoftworks-MN-LooseCannon-12B-v1-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [GalrionSoftworks-MN-LooseCannon-12B-v1-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/GalrionSoftworks-MN-LooseCannon-12B-v1-GGUF/blob/main/GalrionSoftworks-MN-LooseCannon-12B-v1-Q8_0.gguf) | 12419.10 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
thdangtr/blip_title_v1.0_e2_p3
|
thdangtr
| 2024-11-10T20:32:48Z | 64 | 0 |
transformers
|
[
"transformers",
"safetensors",
"blip",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-11-10T20:31:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
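The card itself leaves this section as a placeholder; purely as a hedged illustration, a standard BLIP captioning call (assuming this checkpoint follows the usual `BlipForConditionalGeneration` interface and that a local `example.jpg` exists) might look like:

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Illustrative sketch only -- the card does not document the intended usage.
model_id = "thdangtr/blip_title_v1.0_e2_p3"
processor = BlipProcessor.from_pretrained(model_id)
model = BlipForConditionalGeneration.from_pretrained(model_id)

image = Image.open("example.jpg").convert("RGB")  # assumed local image
inputs = processor(images=image, return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(generated[0], skip_special_tokens=True))
```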
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
featherless-ai-quants/devhyun88-ku-mistral-7b-PGO-v2-GGUF
|
featherless-ai-quants
| 2024-11-10T20:26:23Z | 16 | 0 | null |
[
"gguf",
"text-generation",
"base_model:devhyun88/ku-mistral-7b-PGO-v2",
"base_model:quantized:devhyun88/ku-mistral-7b-PGO-v2",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-10T20:15:14Z |
---
base_model: devhyun88/ku-mistral-7b-PGO-v2
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# devhyun88/ku-mistral-7b-PGO-v2 GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [devhyun88-ku-mistral-7b-PGO-v2-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/devhyun88-ku-mistral-7b-PGO-v2-GGUF/blob/main/devhyun88-ku-mistral-7b-PGO-v2-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [devhyun88-ku-mistral-7b-PGO-v2-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/devhyun88-ku-mistral-7b-PGO-v2-GGUF/blob/main/devhyun88-ku-mistral-7b-PGO-v2-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [devhyun88-ku-mistral-7b-PGO-v2-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/devhyun88-ku-mistral-7b-PGO-v2-GGUF/blob/main/devhyun88-ku-mistral-7b-PGO-v2-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [devhyun88-ku-mistral-7b-PGO-v2-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/devhyun88-ku-mistral-7b-PGO-v2-GGUF/blob/main/devhyun88-ku-mistral-7b-PGO-v2-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [devhyun88-ku-mistral-7b-PGO-v2-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/devhyun88-ku-mistral-7b-PGO-v2-GGUF/blob/main/devhyun88-ku-mistral-7b-PGO-v2-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [devhyun88-ku-mistral-7b-PGO-v2-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/devhyun88-ku-mistral-7b-PGO-v2-GGUF/blob/main/devhyun88-ku-mistral-7b-PGO-v2-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [devhyun88-ku-mistral-7b-PGO-v2-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/devhyun88-ku-mistral-7b-PGO-v2-GGUF/blob/main/devhyun88-ku-mistral-7b-PGO-v2-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [devhyun88-ku-mistral-7b-PGO-v2-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/devhyun88-ku-mistral-7b-PGO-v2-GGUF/blob/main/devhyun88-ku-mistral-7b-PGO-v2-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [devhyun88-ku-mistral-7b-PGO-v2-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/devhyun88-ku-mistral-7b-PGO-v2-GGUF/blob/main/devhyun88-ku-mistral-7b-PGO-v2-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [devhyun88-ku-mistral-7b-PGO-v2-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/devhyun88-ku-mistral-7b-PGO-v2-GGUF/blob/main/devhyun88-ku-mistral-7b-PGO-v2-Q6_K.gguf) | 5666.79 MB |
| Q8_0 | [devhyun88-ku-mistral-7b-PGO-v2-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/devhyun88-ku-mistral-7b-PGO-v2-GGUF/blob/main/devhyun88-ku-mistral-7b-PGO-v2-Q8_0.gguf) | 7339.34 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
sagarxr/llava_next_fir_vqa
|
sagarxr
| 2024-11-10T20:25:59Z | 10 | 0 | null |
[
"safetensors",
"llava_next",
"llama-factory",
"license:mit",
"region:us"
] | null | 2024-11-10T20:11:12Z |
---
license: mit
tags:
- llama-factory
---
|
featherless-ai-quants/CardinalOperations-ORLM-LLaMA-3-8B-GGUF
|
featherless-ai-quants
| 2024-11-10T20:25:41Z | 17 | 0 | null |
[
"gguf",
"text-generation",
"base_model:CardinalOperations/ORLM-LLaMA-3-8B",
"base_model:quantized:CardinalOperations/ORLM-LLaMA-3-8B",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-05T07:10:53Z |
---
base_model: CardinalOperations/ORLM-LLaMA-3-8B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# CardinalOperations/ORLM-LLaMA-3-8B GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [CardinalOperations-ORLM-LLaMA-3-8B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/CardinalOperations-ORLM-LLaMA-3-8B-GGUF/blob/main/CardinalOperations-ORLM-LLaMA-3-8B-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [CardinalOperations-ORLM-LLaMA-3-8B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/CardinalOperations-ORLM-LLaMA-3-8B-GGUF/blob/main/CardinalOperations-ORLM-LLaMA-3-8B-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [CardinalOperations-ORLM-LLaMA-3-8B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/CardinalOperations-ORLM-LLaMA-3-8B-GGUF/blob/main/CardinalOperations-ORLM-LLaMA-3-8B-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [CardinalOperations-ORLM-LLaMA-3-8B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/CardinalOperations-ORLM-LLaMA-3-8B-GGUF/blob/main/CardinalOperations-ORLM-LLaMA-3-8B-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [CardinalOperations-ORLM-LLaMA-3-8B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/CardinalOperations-ORLM-LLaMA-3-8B-GGUF/blob/main/CardinalOperations-ORLM-LLaMA-3-8B-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [CardinalOperations-ORLM-LLaMA-3-8B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/CardinalOperations-ORLM-LLaMA-3-8B-GGUF/blob/main/CardinalOperations-ORLM-LLaMA-3-8B-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [CardinalOperations-ORLM-LLaMA-3-8B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/CardinalOperations-ORLM-LLaMA-3-8B-GGUF/blob/main/CardinalOperations-ORLM-LLaMA-3-8B-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [CardinalOperations-ORLM-LLaMA-3-8B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/CardinalOperations-ORLM-LLaMA-3-8B-GGUF/blob/main/CardinalOperations-ORLM-LLaMA-3-8B-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [CardinalOperations-ORLM-LLaMA-3-8B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/CardinalOperations-ORLM-LLaMA-3-8B-GGUF/blob/main/CardinalOperations-ORLM-LLaMA-3-8B-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [CardinalOperations-ORLM-LLaMA-3-8B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/CardinalOperations-ORLM-LLaMA-3-8B-GGUF/blob/main/CardinalOperations-ORLM-LLaMA-3-8B-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [CardinalOperations-ORLM-LLaMA-3-8B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/CardinalOperations-ORLM-LLaMA-3-8B-GGUF/blob/main/CardinalOperations-ORLM-LLaMA-3-8B-Q8_0.gguf) | 8145.11 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
mradermacher/Codex-148M-GGUF
|
mradermacher
| 2024-11-10T20:24:25Z | 130 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us"
] | null | 2024-11-10T20:21:38Z |
---
base_model: khairi/Codex-148M
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/khairi/Codex-148M
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
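For a quick local test, a minimal sketch of loading one of the quants below with the `llama-cpp-python` bindings is shown here (this assumes `pip install llama-cpp-python huggingface_hub`; the file name is the Q4_K_M entry from the table, and the prompt is only illustrative):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quant file from this repo (Q4_K_M from the table below).
gguf_path = hf_hub_download(
    repo_id="mradermacher/Codex-148M-GGUF",
    filename="Codex-148M.Q4_K_M.gguf",
)

# Load it with llama.cpp; n_ctx is an illustrative setting, not a requirement.
llm = Llama(model_path=gguf_path, n_ctx=2048)

# Codex-148M is a small code model, so a code-completion style prompt fits.
out = llm("def fibonacci(n):", max_tokens=64)
print(out["choices"][0]["text"])
```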
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Codex-148M-GGUF/resolve/main/Codex-148M.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Codex-148M-GGUF/resolve/main/Codex-148M.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Codex-148M-GGUF/resolve/main/Codex-148M.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Codex-148M-GGUF/resolve/main/Codex-148M.Q4_0_4_4.gguf) | Q4_0_4_4 | 0.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Codex-148M-GGUF/resolve/main/Codex-148M.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Codex-148M-GGUF/resolve/main/Codex-148M.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Codex-148M-GGUF/resolve/main/Codex-148M.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Codex-148M-GGUF/resolve/main/Codex-148M.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Codex-148M-GGUF/resolve/main/Codex-148M.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Codex-148M-GGUF/resolve/main/Codex-148M.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Codex-148M-GGUF/resolve/main/Codex-148M.Q6_K.gguf) | Q6_K | 0.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Codex-148M-GGUF/resolve/main/Codex-148M.Q8_0.gguf) | Q8_0 | 0.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Codex-148M-GGUF/resolve/main/Codex-148M.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
featherless-ai-quants/spow12-ChatWaifu_v1.4-GGUF
|
featherless-ai-quants
| 2024-11-10T20:17:06Z | 26 | 1 | null |
[
"gguf",
"text-generation",
"base_model:spow12/ChatWaifu_v1.4",
"base_model:quantized:spow12/ChatWaifu_v1.4",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-10T20:02:27Z |
---
base_model: spow12/ChatWaifu_v1.4
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# spow12/ChatWaifu_v1.4 GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [spow12-ChatWaifu_v1.4-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/spow12-ChatWaifu_v1.4-GGUF/blob/main/spow12-ChatWaifu_v1.4-IQ4_XS.gguf) | 6485.04 MB |
| Q2_K | [spow12-ChatWaifu_v1.4-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/spow12-ChatWaifu_v1.4-GGUF/blob/main/spow12-ChatWaifu_v1.4-Q2_K.gguf) | 4569.10 MB |
| Q3_K_L | [spow12-ChatWaifu_v1.4-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/spow12-ChatWaifu_v1.4-GGUF/blob/main/spow12-ChatWaifu_v1.4-Q3_K_L.gguf) | 6257.54 MB |
| Q3_K_M | [spow12-ChatWaifu_v1.4-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/spow12-ChatWaifu_v1.4-GGUF/blob/main/spow12-ChatWaifu_v1.4-Q3_K_M.gguf) | 5801.29 MB |
| Q3_K_S | [spow12-ChatWaifu_v1.4-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/spow12-ChatWaifu_v1.4-GGUF/blob/main/spow12-ChatWaifu_v1.4-Q3_K_S.gguf) | 5277.85 MB |
| Q4_K_M | [spow12-ChatWaifu_v1.4-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/spow12-ChatWaifu_v1.4-GGUF/blob/main/spow12-ChatWaifu_v1.4-Q4_K_M.gguf) | 7130.82 MB |
| Q4_K_S | [spow12-ChatWaifu_v1.4-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/spow12-ChatWaifu_v1.4-GGUF/blob/main/spow12-ChatWaifu_v1.4-Q4_K_S.gguf) | 6790.35 MB |
| Q5_K_M | [spow12-ChatWaifu_v1.4-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/spow12-ChatWaifu_v1.4-GGUF/blob/main/spow12-ChatWaifu_v1.4-Q5_K_M.gguf) | 8323.32 MB |
| Q5_K_S | [spow12-ChatWaifu_v1.4-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/spow12-ChatWaifu_v1.4-GGUF/blob/main/spow12-ChatWaifu_v1.4-Q5_K_S.gguf) | 8124.10 MB |
| Q6_K | [spow12-ChatWaifu_v1.4-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/spow12-ChatWaifu_v1.4-GGUF/blob/main/spow12-ChatWaifu_v1.4-Q6_K.gguf) | 9590.35 MB |
| Q8_0 | [spow12-ChatWaifu_v1.4-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/spow12-ChatWaifu_v1.4-GGUF/blob/main/spow12-ChatWaifu_v1.4-Q8_0.gguf) | 12419.10 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
mradermacher/piccolo-8x7b-GGUF
|
mradermacher
| 2024-11-10T20:09:46Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:macadeliccc/piccolo-8x7b",
"base_model:quantized:macadeliccc/piccolo-8x7b",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-10T10:06:28Z |
---
base_model: macadeliccc/piccolo-8x7b
language:
- en
library_name: transformers
license: cc-by-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/macadeliccc/piccolo-8x7b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/piccolo-8x7b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/piccolo-8x7b-GGUF/resolve/main/piccolo-8x7b.Q2_K.gguf) | Q2_K | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/piccolo-8x7b-GGUF/resolve/main/piccolo-8x7b.Q3_K_S.gguf) | Q3_K_S | 20.5 | |
| [GGUF](https://huggingface.co/mradermacher/piccolo-8x7b-GGUF/resolve/main/piccolo-8x7b.Q3_K_M.gguf) | Q3_K_M | 22.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/piccolo-8x7b-GGUF/resolve/main/piccolo-8x7b.Q3_K_L.gguf) | Q3_K_L | 24.3 | |
| [GGUF](https://huggingface.co/mradermacher/piccolo-8x7b-GGUF/resolve/main/piccolo-8x7b.IQ4_XS.gguf) | IQ4_XS | 25.5 | |
| [GGUF](https://huggingface.co/mradermacher/piccolo-8x7b-GGUF/resolve/main/piccolo-8x7b.Q4_K_S.gguf) | Q4_K_S | 26.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/piccolo-8x7b-GGUF/resolve/main/piccolo-8x7b.Q4_K_M.gguf) | Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/piccolo-8x7b-GGUF/resolve/main/piccolo-8x7b.Q5_K_S.gguf) | Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/piccolo-8x7b-GGUF/resolve/main/piccolo-8x7b.Q5_K_M.gguf) | Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/piccolo-8x7b-GGUF/resolve/main/piccolo-8x7b.Q6_K.gguf) | Q6_K | 38.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/piccolo-8x7b-GGUF/resolve/main/piccolo-8x7b.Q8_0.gguf) | Q8_0 | 49.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
hyt1912/distilbert-base-uncased-finetuned-squad
|
hyt1912
| 2024-11-10T20:09:25Z | 33 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-11-10T17:07:21Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1547
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
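These settings map one-to-one onto π€ `TrainingArguments`; a minimal sketch is shown below (the `output_dir` is an assumed name, and the Adam betas/epsilon above are the library defaults, so they need no explicit arguments):

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above. output_dir is an assumed name;
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the transformers defaults.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-squad",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```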
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2043 | 1.0 | 5533 | 1.1691 |
| 0.9425 | 2.0 | 11066 | 1.1025 |
| 0.7578 | 3.0 | 16599 | 1.1547 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
featherless-ai-quants/Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-GGUF
|
featherless-ai-quants
| 2024-11-10T19:57:30Z | 8 | 0 | null |
[
"gguf",
"text-generation",
"base_model:Saxo/Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B",
"base_model:quantized:Saxo/Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-09T11:26:45Z |
---
base_model: Saxo/Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Saxo/Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-GGUF/blob/main/Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-IQ4_XS.gguf) | 4276.63 MB |
| Q2_K | [Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-GGUF/blob/main/Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-GGUF/blob/main/Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-GGUF/blob/main/Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-GGUF/blob/main/Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-GGUF/blob/main/Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-GGUF/blob/main/Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-GGUF/blob/main/Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-Q5_K_M.gguf) | 5467.41 MB |
| Q5_K_S | [Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-GGUF/blob/main/Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-Q5_K_S.gguf) | 5339.91 MB |
| Q6_K | [Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-GGUF/blob/main/Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-Q6_K.gguf) | 6290.45 MB |
| Q8_0 | [Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-GGUF/blob/main/Saxo-Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B-Q8_0.gguf) | 8145.12 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
featherless-ai-quants/THUDM-LongWriter-llama3.1-8b-GGUF
|
featherless-ai-quants
| 2024-11-10T19:57:25Z | 16 | 0 | null |
[
"gguf",
"text-generation",
"base_model:THUDM/LongWriter-llama3.1-8b",
"base_model:quantized:THUDM/LongWriter-llama3.1-8b",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-09T11:23:22Z |
---
base_model: THUDM/LongWriter-llama3.1-8b
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# THUDM/LongWriter-llama3.1-8b GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [THUDM-LongWriter-llama3.1-8b-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/THUDM-LongWriter-llama3.1-8b-GGUF/blob/main/THUDM-LongWriter-llama3.1-8b-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [THUDM-LongWriter-llama3.1-8b-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/THUDM-LongWriter-llama3.1-8b-GGUF/blob/main/THUDM-LongWriter-llama3.1-8b-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [THUDM-LongWriter-llama3.1-8b-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/THUDM-LongWriter-llama3.1-8b-GGUF/blob/main/THUDM-LongWriter-llama3.1-8b-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [THUDM-LongWriter-llama3.1-8b-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/THUDM-LongWriter-llama3.1-8b-GGUF/blob/main/THUDM-LongWriter-llama3.1-8b-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [THUDM-LongWriter-llama3.1-8b-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/THUDM-LongWriter-llama3.1-8b-GGUF/blob/main/THUDM-LongWriter-llama3.1-8b-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [THUDM-LongWriter-llama3.1-8b-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/THUDM-LongWriter-llama3.1-8b-GGUF/blob/main/THUDM-LongWriter-llama3.1-8b-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [THUDM-LongWriter-llama3.1-8b-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/THUDM-LongWriter-llama3.1-8b-GGUF/blob/main/THUDM-LongWriter-llama3.1-8b-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [THUDM-LongWriter-llama3.1-8b-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/THUDM-LongWriter-llama3.1-8b-GGUF/blob/main/THUDM-LongWriter-llama3.1-8b-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [THUDM-LongWriter-llama3.1-8b-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/THUDM-LongWriter-llama3.1-8b-GGUF/blob/main/THUDM-LongWriter-llama3.1-8b-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [THUDM-LongWriter-llama3.1-8b-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/THUDM-LongWriter-llama3.1-8b-GGUF/blob/main/THUDM-LongWriter-llama3.1-8b-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [THUDM-LongWriter-llama3.1-8b-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/THUDM-LongWriter-llama3.1-8b-GGUF/blob/main/THUDM-LongWriter-llama3.1-8b-Q8_0.gguf) | 8145.11 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
featherless-ai-quants/unsloth-Meta-Llama-3.1-8B-Instruct-GGUF
|
featherless-ai-quants
| 2024-11-10T19:57:23Z | 20 | 1 | null |
[
"gguf",
"text-generation",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-09T11:21:05Z |
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# unsloth/Meta-Llama-3.1-8B-Instruct GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [unsloth-Meta-Llama-3.1-8B-Instruct-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/unsloth-Meta-Llama-3.1-8B-Instruct-GGUF/blob/main/unsloth-Meta-Llama-3.1-8B-Instruct-IQ4_XS.gguf) | 4276.63 MB |
| Q2_K | [unsloth-Meta-Llama-3.1-8B-Instruct-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/unsloth-Meta-Llama-3.1-8B-Instruct-GGUF/blob/main/unsloth-Meta-Llama-3.1-8B-Instruct-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [unsloth-Meta-Llama-3.1-8B-Instruct-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/unsloth-Meta-Llama-3.1-8B-Instruct-GGUF/blob/main/unsloth-Meta-Llama-3.1-8B-Instruct-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [unsloth-Meta-Llama-3.1-8B-Instruct-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/unsloth-Meta-Llama-3.1-8B-Instruct-GGUF/blob/main/unsloth-Meta-Llama-3.1-8B-Instruct-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [unsloth-Meta-Llama-3.1-8B-Instruct-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/unsloth-Meta-Llama-3.1-8B-Instruct-GGUF/blob/main/unsloth-Meta-Llama-3.1-8B-Instruct-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [unsloth-Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/unsloth-Meta-Llama-3.1-8B-Instruct-GGUF/blob/main/unsloth-Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [unsloth-Meta-Llama-3.1-8B-Instruct-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/unsloth-Meta-Llama-3.1-8B-Instruct-GGUF/blob/main/unsloth-Meta-Llama-3.1-8B-Instruct-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [unsloth-Meta-Llama-3.1-8B-Instruct-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/unsloth-Meta-Llama-3.1-8B-Instruct-GGUF/blob/main/unsloth-Meta-Llama-3.1-8B-Instruct-Q5_K_M.gguf) | 5467.41 MB |
| Q5_K_S | [unsloth-Meta-Llama-3.1-8B-Instruct-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/unsloth-Meta-Llama-3.1-8B-Instruct-GGUF/blob/main/unsloth-Meta-Llama-3.1-8B-Instruct-Q5_K_S.gguf) | 5339.91 MB |
| Q6_K | [unsloth-Meta-Llama-3.1-8B-Instruct-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/unsloth-Meta-Llama-3.1-8B-Instruct-GGUF/blob/main/unsloth-Meta-Llama-3.1-8B-Instruct-Q6_K.gguf) | 6290.45 MB |
| Q8_0 | [unsloth-Meta-Llama-3.1-8B-Instruct-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/unsloth-Meta-Llama-3.1-8B-Instruct-GGUF/blob/main/unsloth-Meta-Llama-3.1-8B-Instruct-Q8_0.gguf) | 8145.12 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
featherless-ai-quants/umarigan-llama-3.1-openhermes-tr-GGUF
|
featherless-ai-quants
| 2024-11-10T19:57:17Z | 27 | 0 | null |
[
"gguf",
"text-generation",
"base_model:umarigan/llama-3.1-openhermes-tr",
"base_model:quantized:umarigan/llama-3.1-openhermes-tr",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-09T11:10:24Z |
---
base_model: umarigan/llama-3.1-openhermes-tr
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# umarigan/llama-3.1-openhermes-tr GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [umarigan-llama-3.1-openhermes-tr-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/umarigan-llama-3.1-openhermes-tr-GGUF/blob/main/umarigan-llama-3.1-openhermes-tr-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [umarigan-llama-3.1-openhermes-tr-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/umarigan-llama-3.1-openhermes-tr-GGUF/blob/main/umarigan-llama-3.1-openhermes-tr-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [umarigan-llama-3.1-openhermes-tr-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/umarigan-llama-3.1-openhermes-tr-GGUF/blob/main/umarigan-llama-3.1-openhermes-tr-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [umarigan-llama-3.1-openhermes-tr-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/umarigan-llama-3.1-openhermes-tr-GGUF/blob/main/umarigan-llama-3.1-openhermes-tr-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [umarigan-llama-3.1-openhermes-tr-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/umarigan-llama-3.1-openhermes-tr-GGUF/blob/main/umarigan-llama-3.1-openhermes-tr-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [umarigan-llama-3.1-openhermes-tr-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/umarigan-llama-3.1-openhermes-tr-GGUF/blob/main/umarigan-llama-3.1-openhermes-tr-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [umarigan-llama-3.1-openhermes-tr-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/umarigan-llama-3.1-openhermes-tr-GGUF/blob/main/umarigan-llama-3.1-openhermes-tr-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [umarigan-llama-3.1-openhermes-tr-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/umarigan-llama-3.1-openhermes-tr-GGUF/blob/main/umarigan-llama-3.1-openhermes-tr-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [umarigan-llama-3.1-openhermes-tr-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/umarigan-llama-3.1-openhermes-tr-GGUF/blob/main/umarigan-llama-3.1-openhermes-tr-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [umarigan-llama-3.1-openhermes-tr-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/umarigan-llama-3.1-openhermes-tr-GGUF/blob/main/umarigan-llama-3.1-openhermes-tr-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [umarigan-llama-3.1-openhermes-tr-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/umarigan-llama-3.1-openhermes-tr-GGUF/blob/main/umarigan-llama-3.1-openhermes-tr-Q8_0.gguf) | 8145.11 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
featherless-ai-quants/proxectonos-Llama-3.1-Carballo-GGUF
|
featherless-ai-quants
| 2024-11-10T19:57:00Z | 13 | 0 | null |
[
"gguf",
"text-generation",
"base_model:proxectonos/Llama-3.1-Carballo",
"base_model:quantized:proxectonos/Llama-3.1-Carballo",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-09T10:09:54Z |
---
base_model: proxectonos/Llama-3.1-Carballo
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# proxectonos/Llama-3.1-Carballo GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [proxectonos-Llama-3.1-Carballo-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/proxectonos-Llama-3.1-Carballo-GGUF/blob/main/proxectonos-Llama-3.1-Carballo-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [proxectonos-Llama-3.1-Carballo-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/proxectonos-Llama-3.1-Carballo-GGUF/blob/main/proxectonos-Llama-3.1-Carballo-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [proxectonos-Llama-3.1-Carballo-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/proxectonos-Llama-3.1-Carballo-GGUF/blob/main/proxectonos-Llama-3.1-Carballo-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [proxectonos-Llama-3.1-Carballo-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/proxectonos-Llama-3.1-Carballo-GGUF/blob/main/proxectonos-Llama-3.1-Carballo-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [proxectonos-Llama-3.1-Carballo-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/proxectonos-Llama-3.1-Carballo-GGUF/blob/main/proxectonos-Llama-3.1-Carballo-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [proxectonos-Llama-3.1-Carballo-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/proxectonos-Llama-3.1-Carballo-GGUF/blob/main/proxectonos-Llama-3.1-Carballo-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [proxectonos-Llama-3.1-Carballo-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/proxectonos-Llama-3.1-Carballo-GGUF/blob/main/proxectonos-Llama-3.1-Carballo-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [proxectonos-Llama-3.1-Carballo-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/proxectonos-Llama-3.1-Carballo-GGUF/blob/main/proxectonos-Llama-3.1-Carballo-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [proxectonos-Llama-3.1-Carballo-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/proxectonos-Llama-3.1-Carballo-GGUF/blob/main/proxectonos-Llama-3.1-Carballo-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [proxectonos-Llama-3.1-Carballo-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/proxectonos-Llama-3.1-Carballo-GGUF/blob/main/proxectonos-Llama-3.1-Carballo-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [proxectonos-Llama-3.1-Carballo-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/proxectonos-Llama-3.1-Carballo-GGUF/blob/main/proxectonos-Llama-3.1-Carballo-Q8_0.gguf) | 8145.11 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
featherless-ai-quants/Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-GGUF
|
featherless-ai-quants
| 2024-11-10T19:56:52Z | 10 | 0 | null |
[
"gguf",
"text-generation",
"base_model:Salesforce/LLaMA-3-8B-SFR-Iterative-DPO-R",
"base_model:quantized:Salesforce/LLaMA-3-8B-SFR-Iterative-DPO-R",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-09T05:38:12Z |
---
base_model: Salesforce/LLaMA-3-8B-SFR-Iterative-DPO-R
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Salesforce/LLaMA-3-8B-SFR-Iterative-DPO-R GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-GGUF/blob/main/Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-GGUF/blob/main/Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-GGUF/blob/main/Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-GGUF/blob/main/Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-GGUF/blob/main/Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-GGUF/blob/main/Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-GGUF/blob/main/Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-GGUF/blob/main/Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-GGUF/blob/main/Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-GGUF/blob/main/Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-GGUF/blob/main/Salesforce-LLaMA-3-8B-SFR-Iterative-DPO-R-Q8_0.gguf) | 8145.11 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
featherless-ai-quants/picAIso-TARS-8B-llama-REMIX-GGUF
|
featherless-ai-quants
| 2024-11-10T19:56:44Z | 13 | 0 | null |
[
"gguf",
"text-generation",
"base_model:picAIso/TARS-8B-llama-REMIX",
"base_model:quantized:picAIso/TARS-8B-llama-REMIX",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-09T05:32:40Z |
---
base_model: picAIso/TARS-8B-llama-REMIX
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# picAIso/TARS-8B-llama-REMIX GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [picAIso-TARS-8B-llama-REMIX-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-llama-REMIX-GGUF/blob/main/picAIso-TARS-8B-llama-REMIX-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [picAIso-TARS-8B-llama-REMIX-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-llama-REMIX-GGUF/blob/main/picAIso-TARS-8B-llama-REMIX-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [picAIso-TARS-8B-llama-REMIX-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-llama-REMIX-GGUF/blob/main/picAIso-TARS-8B-llama-REMIX-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [picAIso-TARS-8B-llama-REMIX-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-llama-REMIX-GGUF/blob/main/picAIso-TARS-8B-llama-REMIX-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [picAIso-TARS-8B-llama-REMIX-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-llama-REMIX-GGUF/blob/main/picAIso-TARS-8B-llama-REMIX-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [picAIso-TARS-8B-llama-REMIX-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-llama-REMIX-GGUF/blob/main/picAIso-TARS-8B-llama-REMIX-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [picAIso-TARS-8B-llama-REMIX-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-llama-REMIX-GGUF/blob/main/picAIso-TARS-8B-llama-REMIX-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [picAIso-TARS-8B-llama-REMIX-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-llama-REMIX-GGUF/blob/main/picAIso-TARS-8B-llama-REMIX-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [picAIso-TARS-8B-llama-REMIX-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-llama-REMIX-GGUF/blob/main/picAIso-TARS-8B-llama-REMIX-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [picAIso-TARS-8B-llama-REMIX-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-llama-REMIX-GGUF/blob/main/picAIso-TARS-8B-llama-REMIX-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [picAIso-TARS-8B-llama-REMIX-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/picAIso-TARS-8B-llama-REMIX-GGUF/blob/main/picAIso-TARS-8B-llama-REMIX-Q8_0.gguf) | 8145.11 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
featherless-ai-quants/Locutusque-Apollo-0.4-Llama-3.1-8B-GGUF
|
featherless-ai-quants
| 2024-11-10T19:56:40Z | 10 | 0 | null |
[
"gguf",
"text-generation",
"base_model:Locutusque/Apollo-0.4-Llama-3.1-8B",
"base_model:quantized:Locutusque/Apollo-0.4-Llama-3.1-8B",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-09T05:13:56Z |
---
base_model: Locutusque/Apollo-0.4-Llama-3.1-8B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Locutusque/Apollo-0.4-Llama-3.1-8B GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [Locutusque-Apollo-0.4-Llama-3.1-8B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Apollo-0.4-Llama-3.1-8B-GGUF/blob/main/Locutusque-Apollo-0.4-Llama-3.1-8B-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [Locutusque-Apollo-0.4-Llama-3.1-8B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Apollo-0.4-Llama-3.1-8B-GGUF/blob/main/Locutusque-Apollo-0.4-Llama-3.1-8B-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [Locutusque-Apollo-0.4-Llama-3.1-8B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Apollo-0.4-Llama-3.1-8B-GGUF/blob/main/Locutusque-Apollo-0.4-Llama-3.1-8B-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [Locutusque-Apollo-0.4-Llama-3.1-8B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Apollo-0.4-Llama-3.1-8B-GGUF/blob/main/Locutusque-Apollo-0.4-Llama-3.1-8B-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [Locutusque-Apollo-0.4-Llama-3.1-8B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Apollo-0.4-Llama-3.1-8B-GGUF/blob/main/Locutusque-Apollo-0.4-Llama-3.1-8B-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [Locutusque-Apollo-0.4-Llama-3.1-8B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Apollo-0.4-Llama-3.1-8B-GGUF/blob/main/Locutusque-Apollo-0.4-Llama-3.1-8B-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [Locutusque-Apollo-0.4-Llama-3.1-8B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Apollo-0.4-Llama-3.1-8B-GGUF/blob/main/Locutusque-Apollo-0.4-Llama-3.1-8B-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [Locutusque-Apollo-0.4-Llama-3.1-8B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Apollo-0.4-Llama-3.1-8B-GGUF/blob/main/Locutusque-Apollo-0.4-Llama-3.1-8B-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [Locutusque-Apollo-0.4-Llama-3.1-8B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Apollo-0.4-Llama-3.1-8B-GGUF/blob/main/Locutusque-Apollo-0.4-Llama-3.1-8B-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [Locutusque-Apollo-0.4-Llama-3.1-8B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Apollo-0.4-Llama-3.1-8B-GGUF/blob/main/Locutusque-Apollo-0.4-Llama-3.1-8B-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [Locutusque-Apollo-0.4-Llama-3.1-8B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Apollo-0.4-Llama-3.1-8B-GGUF/blob/main/Locutusque-Apollo-0.4-Llama-3.1-8B-Q8_0.gguf) | 8145.11 MB |
---
## β‘ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- π₯ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- π οΈ **Zero Infrastructure** - No server setup or maintenance required
- π **Vast Compatibility** - Support for 2400+ models and counting
- π **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
featherless-ai-quants/PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-GGUF
|
featherless-ai-quants
| 2024-11-10T19:56:35Z | 7 | 0 | null |
[
"gguf",
"text-generation",
"base_model:PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct-v1.1",
"base_model:quantized:PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct-v1.1",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-09T04:55:39Z |
---
base_model: PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct-v1.1
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct-v1.1 GGUF Quantizations π

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations π
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-GGUF/blob/main/PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-GGUF/blob/main/PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-GGUF/blob/main/PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-GGUF/blob/main/PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-GGUF/blob/main/PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-GGUF/blob/main/PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-GGUF/blob/main/PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-GGUF/blob/main/PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-GGUF/blob/main/PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-GGUF/blob/main/PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-GGUF/blob/main/PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-Q8_0.gguf) | 8145.11 MB |
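Once a file from the table is downloaded, running it locally is up to the reader; a minimal sketch is shown below, assuming the `llama-cpp-python` bindings (an illustrative choice, not something this card prescribes) and the Q4_K_M file from the table saved in the working directory:

```python
# Minimal sketch: load a downloaded GGUF quantization and run one completion.
# Assumptions: `llama-cpp-python` is installed (pip install llama-cpp-python)
# and the Q4_K_M file from the table above is present in the current directory.
from llama_cpp import Llama

llm = Llama(
    model_path="PatronusAI-Llama-3-Patronus-Lynx-8B-Instruct-v1.1-Q4_K_M.gguf",
    n_ctx=4096,  # context window; lower it if RAM is tight
)
result = llm("Briefly introduce yourself.", max_tokens=64)
print(result["choices"][0]["text"])
```

As a rule of thumb, the larger quantizations in the table trade more disk and memory for less quantization loss, while the smaller ones (Q2_K, Q3_K_S) fit tighter hardware.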
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-GGUF
|
featherless-ai-quants
| 2024-11-10T19:56:15Z | 21 | 0 | null |
[
"gguf",
"text-generation",
"base_model:OwenArli/ArliAI-Llama-3-8B-Dolfin-v0.3",
"base_model:quantized:OwenArli/ArliAI-Llama-3-8B-Dolfin-v0.3",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-09T04:04:01Z |
---
base_model: OwenArli/ArliAI-Llama-3-8B-Dolfin-v0.3
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# OwenArli/ArliAI-Llama-3-8B-Dolfin-v0.3 GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-GGUF/blob/main/OwenArli-ArliAI-Llama-3-8B-Dolfin-v0.3-Q8_0.gguf) | 8145.11 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|