| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-07 18:30:29) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 544 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-07 18:30:28) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
RichardErkhov/TeamUNIVA_-_Komodo_6B_v1.0.0-8bits
|
RichardErkhov
| 2024-11-12T15:41:05Z | 5 | 0 | null |
[
"safetensors",
"llama",
"arxiv:1910.09700",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-11-12T15:37:16Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Komodo_6B_v1.0.0 - bnb 8bits
- Model creator: https://huggingface.co/TeamUNIVA/
- Original model: https://huggingface.co/TeamUNIVA/Komodo_6B_v1.0.0/
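This repository stores the bitsandbytes 8-bit weights, so it can be loaded directly with `transformers`; a minimal sketch (assuming the `bitsandbytes` package and a CUDA GPU are available):

```python
# Minimal sketch: load the pre-quantized 8-bit checkpoint from this repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/TeamUNIVA_-_Komodo_6B_v1.0.0-8bits"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
```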
Original model description:
---
license: apache-2.0
language:
- ko
- en
---
# Base Model
beomi/Yi-Ko-6B
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "TeamUNIVA/Komodo_6B_v1.0.0"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
# System prompt: "You are a chatbot that answers the user's questions kindly."
# User turn: "Hello?"
text = '''<|system|>
당신은 사용자의 질문에 친절하게 답변을 하는 챗봇입니다.
<|user|>
안녕하세요?
<|bot|>
'''
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mav23/Llama-3-ELYZA-JP-8B-GGUF
|
mav23
| 2024-11-12T15:38:26Z | 206 | 0 |
transformers
|
[
"transformers",
"gguf",
"ja",
"en",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-12T14:34:40Z |
---
library_name: transformers
license: llama3
language:
- ja
- en
---
## Llama-3-ELYZA-JP-8B

### Model Description
**Llama-3-ELYZA-JP-8B** is a large language model trained by [ELYZA, Inc](https://elyza.ai/).
Based on [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), it has been enhanced for Japanese usage through additional pre-training and instruction tuning. (Built with Meta Llama3)
For more details, please refer to [our blog post](https://note.com/elyza/n/n360b6084fdbd).
### Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# System prompt: "You are a sincere and excellent Japanese assistant. Unless instructed otherwise, always answer in Japanese."
DEFAULT_SYSTEM_PROMPT = "あなたは誠実で優秀な日本人のアシスタントです。特に指示が無い場合は、常に日本語で回答してください。"
# User prompt: "Please give five ideas for regaining enthusiasm for work."
text = "仕事の熱意を取り戻すためのアイデアを5つ挙げてください。"
model_name = "elyza/Llama-3-ELYZA-JP-8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto",
)
model.eval()
messages = [
{"role": "system", "content": DEFAULT_SYSTEM_PROMPT},
{"role": "user", "content": text},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
token_ids = tokenizer.encode(
prompt, add_special_tokens=False, return_tensors="pt"
)
with torch.no_grad():
output_ids = model.generate(
token_ids.to(model.device),
max_new_tokens=1200,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
output = tokenizer.decode(
output_ids.tolist()[0][token_ids.size(1):], skip_special_tokens=True
)
print(output)
```
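Since this repository provides the GGUF conversion of the model, it can also be run without `transformers`. The following is a minimal sketch using `llama-cpp-python`; the exact `.gguf` filename in this repo is an assumption, so check the file list and adjust:

```python
# Sketch: download a GGUF file from this repo and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mav23/Llama-3-ELYZA-JP-8B-GGUF",
    filename="llama-3-elyza-jp-8b.Q4_0.gguf",  # assumed filename; verify in the repo
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "あなたは誠実で優秀な日本人のアシスタントです。"},
        {"role": "user", "content": "仕事の熱意を取り戻すためのアイデアを5つ挙げてください。"},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```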
### Developers
Listed in alphabetical order.
- [Masato Hirakawa](https://huggingface.co/m-hirakawa)
- [Shintaro Horie](https://huggingface.co/e-mon)
- [Tomoaki Nakamura](https://huggingface.co/tyoyo)
- [Daisuke Oba](https://huggingface.co/daisuk30ba)
- [Sam Passaglia](https://huggingface.co/passaglia)
- [Akira Sasaki](https://huggingface.co/akirasasaki)
### License
[Meta Llama 3 Community License](https://llama.meta.com/llama3/license/)
### How to Cite
```tex
@misc{elyzallama2024,
title={elyza/Llama-3-ELYZA-JP-8B},
url={https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B},
author={Masato Hirakawa and Shintaro Horie and Tomoaki Nakamura and Daisuke Oba and Sam Passaglia and Akira Sasaki},
year={2024},
}
```
### Citations
```tex
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
|
RichardErkhov/philschmid_-_Llama-2-7b-hf-4bits
|
RichardErkhov
| 2024-11-12T15:38:08Z | 5 | 0 | null |
[
"safetensors",
"llama",
"arxiv:2307.09288",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-11-12T15:11:40Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-2-7b-hf - bnb 4bits
- Model creator: https://huggingface.co/philschmid/
- Original model: https://huggingface.co/philschmid/Llama-2-7b-hf/
Original model description:
---
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes (7B, 13B, and 70B) as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama 2: Open Foundation and Fine-Tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific format needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double spaces). See our reference code on GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
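As a rough illustration of that layout (a sketch, not the reference implementation; see the linked `chat_completion` code for the authoritative version):

```python
# Sketch of the single-turn Llama-2-Chat prompt layout described above.
# BOS/EOS tokens are added by the tokenizer, so only the tags are laid out here.
def build_llama2_chat_prompt(system_prompt: str, user_message: str) -> str:
    return f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message.strip()} [/INST]"

print(build_llama2_chat_prompt(
    "You are a helpful, respectful and honest assistant.",
    "What is the capital of France?",
))
```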
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta's sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software "bug," or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/meta-llama/Llama-2-7b) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/meta-llama/Llama-2-13b) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/meta-llama/Llama-2-70b) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)|
|
shirubei/Llama-3-ELYZA-JP-8B-Q4_K_M-GGUF
|
shirubei
| 2024-11-12T15:35:38Z | 33 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"ja",
"en",
"base_model:elyza/Llama-3-ELYZA-JP-8B",
"base_model:quantized:elyza/Llama-3-ELYZA-JP-8B",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-12T15:34:46Z |
---
library_name: transformers
license: llama3
language:
- ja
- en
base_model: elyza/Llama-3-ELYZA-JP-8B
tags:
- llama-cpp
- gguf-my-repo
---
# shirubei/Llama-3-ELYZA-JP-8B-Q4_K_M-GGUF
This model was converted to GGUF format from [`elyza/Llama-3-ELYZA-JP-8B`](https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo shirubei/Llama-3-ELYZA-JP-8B-Q4_K_M-GGUF --hf-file llama-3-elyza-jp-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo shirubei/Llama-3-ELYZA-JP-8B-Q4_K_M-GGUF --hf-file llama-3-elyza-jp-8b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo shirubei/Llama-3-ELYZA-JP-8B-Q4_K_M-GGUF --hf-file llama-3-elyza-jp-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo shirubei/Llama-3-ELYZA-JP-8B-Q4_K_M-GGUF --hf-file llama-3-elyza-jp-8b-q4_k_m.gguf -c 2048
```
|
RichardErkhov/Qwen_-_Qwen2.5-Coder-7B-4bits
|
RichardErkhov
| 2024-11-12T15:34:30Z | 5 | 0 | null |
[
"safetensors",
"qwen2",
"arxiv:2409.12186",
"arxiv:2309.00071",
"arxiv:2407.10671",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-11-12T15:30:53Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2.5-Coder-7B - bnb 4bits
- Model creator: https://huggingface.co/Qwen/
- Original model: https://huggingface.co/Qwen/Qwen2.5-Coder-7B/
Original model description:
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-7B/blob/main/LICENSE
language:
- en
base_model:
- Qwen/Qwen2.5-7B
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- qwen
- qwen-coder
- codeqwen
---
# Qwen2.5-Coder-7B
## Introduction
Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements over CodeQwen1.5:
- Significant improvements in **code generation**, **code reasoning** and **code fixing**. Building on the strong Qwen2.5, we scale the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with coding abilities matching those of GPT-4o.
- A more comprehensive foundation for real-world applications such as **Code Agents**, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies.
- **Long-context Support** up to 128K tokens.
**This repo contains the 7B Qwen2.5-Coder model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 7.61B
- Number of Parameters (Non-Embedding): 6.53B
- Number of Layers: 28
- Number of Attention Heads (GQA): 28 for Q and 4 for KV
- Context Length: Full 131,072 tokens
- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
**We do not recommend using base language models for conversations.** Instead, you can apply post-training (e.g., SFT, RLHF, continued pretraining) to this model, or use it for fill-in-the-middle tasks.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
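Given the note above that the base model is meant for plain completion or fill-in-the-middle rather than chat, a minimal completion sketch with `transformers` looks like this (prompt and generation settings are illustrative, not from the original card):

```python
# Sketch: plain code completion with the base Qwen2.5-Coder-7B model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n'
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```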
## Requirements
The code for Qwen2.5-Coder has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
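As a small sketch, you can guard against this by checking the installed version before loading the model (4.37.0 is the minimum implied by the error above):

```python
# Check that the installed transformers version can load Qwen2 architectures.
import transformers
from packaging import version

if version.parse(transformers.__version__) < version.parse("4.37.0"):
    raise RuntimeError("Please upgrade transformers to >= 4.37.0 to avoid KeyError: 'qwen2'")
```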
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
For deployment, we recommend using vLLM.
Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
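As a hedged sketch, the same override can be applied from Python rather than by editing `config.json` by hand (whether a given serving framework honours an overridden config this way varies, so treat this as illustrative):

```python
# Sketch: enable YaRN rope scaling via an overridden config instead of editing config.json.
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "Qwen/Qwen2.5-Coder-7B"
config = AutoConfig.from_pretrained(model_id)
config.rope_scaling = {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn",
}
model = AutoModelForCausalLM.from_pretrained(
    model_id, config=config, torch_dtype="auto", device_map="auto"
)
```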
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@article{hui2024qwen2,
title={Qwen2.5-Coder Technical Report},
author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
journal={arXiv preprint arXiv:2409.12186},
year={2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
RichardErkhov/jpacifico_-_Chocolatine-3B-Instruct-DPO-v1.2-4bits
|
RichardErkhov
| 2024-11-12T15:32:48Z | 5 | 0 | null |
[
"safetensors",
"phi3",
"custom_code",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-11-12T15:23:28Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Chocolatine-3B-Instruct-DPO-v1.2 - bnb 4bits
- Model creator: https://huggingface.co/jpacifico/
- Original model: https://huggingface.co/jpacifico/Chocolatine-3B-Instruct-DPO-v1.2/
Original model description:
---
library_name: transformers
license: mit
language:
- fr
- en
tags:
- french
- chocolatine
datasets:
- jpacifico/french-orca-dpo-pairs-revised
pipeline_tag: text-generation
---
### Chocolatine-3B-Instruct-DPO-v1.2
Best version of Chocolatine-3B for French.
*The model supports 128K context length*.
A DPO fine-tune of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) (3.82B params)
using the [jpacifico/french-orca-dpo-pairs-revised](https://huggingface.co/datasets/jpacifico/french-orca-dpo-pairs-revised) RLHF dataset.
Training in French also improves the model in English, surpassing the performance of its base model.
### MT-Bench-French
Chocolatine-3B-Instruct-DPO-v1.2 outperforms Phi-3-medium-4k-instruct (14B) and its base model Phi-3.5-mini-instruct on [MT-Bench-French](https://huggingface.co/datasets/bofenghuang/mt-bench-french), evaluated with [multilingual-mt-bench](https://github.com/Peter-Devine/multilingual_mt_bench) and GPT-4-Turbo as the LLM judge.
```
########## First turn ##########
score
model turn
gpt-4o-mini 1 9.2875
Chocolatine-14B-Instruct-4k-DPO 1 8.6375
Chocolatine-14B-Instruct-DPO-v1.2 1 8.6125
Phi-3.5-mini-instruct 1 8.5250
Chocolatine-3B-Instruct-DPO-v1.2 1 8.3750
Phi-3-medium-4k-instruct 1 8.2250
gpt-3.5-turbo 1 8.1375
Chocolatine-3B-Instruct-DPO-Revised 1 7.9875
Daredevil-8B 1 7.8875
Meta-Llama-3.1-8B-Instruct 1 7.0500
vigostral-7b-chat 1 6.7875
Mistral-7B-Instruct-v0.3 1 6.7500
gemma-2-2b-it 1 6.4500
French-Alpaca-7B-Instruct_beta 1 5.6875
vigogne-2-7b-chat 1 5.6625
########## Second turn ##########
score
model turn
gpt-4o-mini 2 8.912500
Chocolatine-14B-Instruct-DPO-v1.2 2 8.337500
Chocolatine-3B-Instruct-DPO-Revised 2 7.937500
Chocolatine-3B-Instruct-DPO-v1.2 2 7.862500
Phi-3-medium-4k-instruct 2 7.750000
Chocolatine-14B-Instruct-4k-DPO 2 7.737500
gpt-3.5-turbo 2 7.679167
Phi-3.5-mini-instruct 2 7.575000
Daredevil-8B 2 7.087500
Meta-Llama-3.1-8B-Instruct 2 6.787500
Mistral-7B-Instruct-v0.3 2 6.500000
vigostral-7b-chat 2 6.162500
gemma-2-2b-it 2 6.100000
French-Alpaca-7B-Instruct_beta 2 5.487395
vigogne-2-7b-chat 2 2.775000
########## Average ##########
score
model
gpt-4o-mini 9.100000
Chocolatine-14B-Instruct-DPO-v1.2 8.475000
Chocolatine-14B-Instruct-4k-DPO 8.187500
Chocolatine-3B-Instruct-DPO-v1.2 8.118750
Phi-3.5-mini-instruct 8.050000
Phi-3-medium-4k-instruct 7.987500
Chocolatine-3B-Instruct-DPO-Revised 7.962500
gpt-3.5-turbo 7.908333
Daredevil-8B 7.487500
Meta-Llama-3.1-8B-Instruct 6.918750
Mistral-7B-Instruct-v0.3 6.625000
vigostral-7b-chat 6.475000
gemma-2-2b-it 6.275000
French-Alpaca-7B-Instruct_beta 5.587866
vigogne-2-7b-chat 4.218750
```
### Usage
You can run this model using my [Colab notebook](https://github.com/jpacifico/Chocolatine-LLM/blob/main/Chocolatine_3B_inference_test_colab.ipynb)
You can also run Chocolatine using the following code:
```python
import transformers
from transformers import AutoTokenizer

# The original snippet referenced `new_model` without defining it; point it at this repo.
new_model = "jpacifico/Chocolatine-3B-Instruct-DPO-v1.2"

# Format prompt
message = [
{"role": "system", "content": "You are a helpful assistant chatbot."},
{"role": "user", "content": "What is a Large Language Model?"}
]
tokenizer = AutoTokenizer.from_pretrained(new_model)
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)
# Create pipeline
pipeline = transformers.pipeline(
"text-generation",
model=new_model,
tokenizer=tokenizer
)
# Generate text
sequences = pipeline(
prompt,
do_sample=True,
temperature=0.7,
top_p=0.9,
num_return_sequences=1,
max_length=200,
)
print(sequences[0]['generated_text'])
```
* **4-bit quantized version** is available here: [jpacifico/Chocolatine-3B-Instruct-DPO-v1.2-Q4_K_M-GGUF](https://huggingface.co/jpacifico/Chocolatine-3B-Instruct-DPO-v1.2-Q4_K_M-GGUF)
### Limitations
The Chocolatine model is a quick demonstration that a base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanism.
- **Developed by:** Jonathan Pacifico, 2024
- **Model type:** LLM
- **Language(s) (NLP):** French, English
- **License:** MIT
|
RichardErkhov/NYTK_-_PULI-GPTrio-4bits
|
RichardErkhov
| 2024-11-12T15:26:20Z | 5 | 0 | null |
[
"safetensors",
"gpt_neox",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-11-12T15:22:58Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
PULI-GPTrio - bnb 4bits
- Model creator: https://huggingface.co/NYTK/
- Original model: https://huggingface.co/NYTK/PULI-GPTrio/
Original model description:
---
language:
- hu
- en
- zh
tags:
- text-generation
- puli
license: cc-by-nc-4.0
widget:
- text: Elmesélek egy történetet a nyelvtechnológiáról.
---
# PULI GPTrio (7.67 billion parameters)
For further details, read [our paper](http://real.mtak.hu/173960/1/TSD_2023_GPT.pdf); to test our instruct model, see [our demo site](https://juniper.nytud.hu/demo/gptrio).
- Hungarian-English-Chinese trilingual GPT-NeoX model (7.67 billion parameters)
- Trained with EleutherAI's GPT-NeoX [github](https://github.com/EleutherAI/gpt-neox)
- Checkpoint: 410 000 steps
## Dataset
- Hungarian: 41.5 billion words (314 GB)
- English: 61.9 billion words (391 GB)
- Github: 6 million documents (33 GB)
- Chinese: 98.7 billion Chinese characters (340 GB)
- (12 billion non-Chinese tokens)
## Limitations
- max_seq_length = 2048
- float16
- vocab size: 150 016
## Citation
If you use this model, please cite the following paper:
```
@inproceedings {yang-puli-gptrio,
title = {Mono- and multilingual GPT-3 models for Hungarian},
booktitle = {Text, Speech, and Dialogue},
year = {2023},
publisher = {Springer Nature Switzerland},
series = {Lecture Notes in Computer Science},
address = {Plzeň, Czech Republic},
author = {Yang, Zijian Győző and Laki, László János and Váradi, Tamás and Prószéky, Gábor},
pages = {94--104},
isbn = {978-3-031-40498-6}
}
```
## Usage
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained("NYTK/PULI-GPTrio")
tokenizer = AutoTokenizer.from_pretrained("NYTK/PULI-GPTrio")
prompt = "Elmesรฉlek egy tรถrtรฉnetet a nyelvtechnolรณgiรกrรณl."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(
input_ids,
do_sample=True,
temperature=0.9,
max_length=100,
)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
print(gen_text)
```
## Usage with pipeline
```python
from transformers import pipeline, GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained("NYTK/PULI-GPTrio")
tokenizer = AutoTokenizer.from_pretrained("NYTK/PULI-GPTrio")
prompt = "Elmesรฉlek egy tรถrtรฉnetet a nyelvtechnolรณgiรกrรณl."
generator = pipeline(task="text-generation", model=model, tokenizer=tokenizer)
print(generator(prompt)[0]["generated_text"])
```
|
AnonymousCS/freeze-bert-base-uncased-Twitter-toxicity
|
AnonymousCS
| 2024-11-12T15:24:35Z | 108 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-16T16:45:23Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: freeze-bert-base-uncased-Twitter-toxicity
results: []
---
|
AnonymousCS/refined-bert-base-uncased-Twitter-toxicity
|
AnonymousCS
| 2024-11-12T15:24:23Z | 107 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-16T16:50:31Z |
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: refined-bert-base-uncased-Twitter-toxicity
results: []
---
|
AnonymousCS/bert-base-cased-Twitter-toxicity
|
AnonymousCS
| 2024-11-12T15:24:10Z | 105 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-08T18:47:03Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-cased-Twitter-toxicity
results: []
---
|
AnonymousCS/HateBERT-Twitter-toxicity
|
AnonymousCS
| 2024-11-12T15:23:04Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-16T17:10:09Z |
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: HateBERT-Twitter-toxicity
results: []
---
|
RichardErkhov/cocoirun_-_Yi-Ko-6B-instruct-v1.4-8bits
|
RichardErkhov
| 2024-11-12T15:22:13Z | 8 | 0 | null |
[
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-11-12T15:18:22Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Yi-Ko-6B-instruct-v1.4 - bnb 8bits
- Model creator: https://huggingface.co/cocoirun/
- Original model: https://huggingface.co/cocoirun/Yi-Ko-6B-instruct-v1.4/
Original model description:
---
license: cc-by-sa-4.0
---
<h1>instruct model v1.4</h1>
<b><Training data construction></b>
After analyzing the Open-Orca-ko data and extracting its tasks, we built roughly 40,000 training examples ourselves (history, science, math, machine reading comprehension, review analysis) from open-source NLP data matched to those tasks,
and in addition we filtered and cleaned part of the Open-Orca-Ko data and added KoBEST data.
Further training data was built from AIHub general-knowledge and machine reading comprehension data (morphology, machine reading comprehension, and summarization).
History and general-knowledge quizzes from various blogs were manually converted into training-data form.
AI2AI Challenge data was translated with Papago, and mistranslated parts were corrected by hand.
English translation data (English-Korean / Korean-English) was also used as training data.
SFT was carried out on a total of 110,000 training examples.
<br>
Currently, part of the Open-Orca dataset is being translated and cleaned to train and improve a new version of the model.
<br>
+ Added high-school history questions and TruthfulQA-related questions.
+ Added various IT knowledge data.
+ Machine reading comprehension training data was built by obtaining answers through ChatGPT.
+ Grammar-related training data.
<br>
### The training data files are not publicly released.
<br>
<b><Training></b>
Training was performed with LoRA on 2x A100 40G GPUs.
|
RichardErkhov/postbot_-_gpt2-medium-emailgen-8bits
|
RichardErkhov
| 2024-11-12T15:14:40Z | 5 | 0 | null |
[
"safetensors",
"gpt2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-11-12T15:14:20Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt2-medium-emailgen - bnb 8bits
- Model creator: https://huggingface.co/postbot/
- Original model: https://huggingface.co/postbot/gpt2-medium-emailgen/
Original model description:
---
license:
- apache-2.0
tags:
- text generation
- emailgen
- email generation
- email
datasets:
- aeslc
- postbot/multi-emails-100k
widget:
- text: "Good Morning Professor Beans,
Hope you are doing well. I just wanted to reach out and ask if differential calculus will be on the exam"
example_title: "email to prof"
- text: "Hey <NAME>,\n\nThank you for signing up for my weekly newsletter. Before we get started, you'll have to confirm your email address."
example_title: "newsletter"
- text: "Hi <NAME>,\n\nI hope this email finds you well. I wanted to reach out and ask about office hours"
example_title: "office hours"
- text: "Greetings <NAME>,\n\nI hope you had a splendid evening at the Company sausage eating festival. I am reaching out because"
example_title: "festival"
- text: "Good Morning Harold,\n\nI was wondering when the next"
example_title: "event"
- text: "URGENT - I need the TPS reports"
example_title: "URGENT"
- text: "Hi Archibald,\n\nI hope this email finds you extremely well."
example_title: "emails that find you"
- text: "Hello there.\n\nI just wanted to reach out and check in to"
example_title: "checking in"
- text: "Hello <NAME>,\n\nI hope this email finds you well. I wanted to reach out and see if you've enjoyed your time with us"
example_title: "work well"
- text: "Hi <NAME>,\n\nI hope this email finds you well. I wanted to reach out and see if we could catch up"
example_title: "catch up"
- text: "I'm <NAME> and I just moved into the area and wanted to reach out and get some details on where I could get groceries and"
example_title: "grocery"
parameters:
min_length: 32
max_length: 128
no_repeat_ngram_size: 2
do_sample: True
temperature: 0.3
top_k: 20
top_p: 0.95
repetition_penalty: 3.5
length_penalty: 0.9
---
# gpt2-medium-emailgen
[](https://colab.research.google.com/gist/pszemraj/70058788c6d4b430398c12ee8ba10602/minimal-demo-for-postbot-gpt2-medium-emailgen.ipynb
)
Why write the entire email when you can generate (most of) it?
```python
from transformers import pipeline
model_tag = "postbot/gpt2-medium-emailgen"
generator = pipeline(
'text-generation',
model=model_tag,
)
prompt = """
Hello,
Following up on the bubblegum shipment."""
result = generator(
prompt,
max_length=64,
do_sample=False,
early_stopping=True,
) # generate
print(result[0]['generated_text'])
```
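The widget `parameters` in the card metadata correspond to a sampling setup along these lines (a sketch; the values mirror the metadata above):

```python
# Sampling settings mirroring the widget parameters declared in the card metadata.
from transformers import pipeline

generator = pipeline("text-generation", model="postbot/gpt2-medium-emailgen")
prompt = "Hello,\n\nFollowing up on the bubblegum shipment."
result = generator(
    prompt,
    min_length=32,
    max_length=128,
    no_repeat_ngram_size=2,
    do_sample=True,
    temperature=0.3,
    top_k=20,
    top_p=0.95,
    repetition_penalty=3.5,
    length_penalty=0.9,
)
print(result[0]["generated_text"])
```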
## about
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the postbot/multi-emails-100k dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5840
## Model description
More information needed
## Intended uses & limitations
- This is intended as a tool to save time writing predictable emails, not to write emails without a human in the loop. Validate that your email is factually correct before sending it to others.
## Training and evaluation data
- The dataset is essentially a hand-curated/augmented expansion of the classic `aeslc` dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 3
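For reference, these settings map onto a `transformers` `TrainingArguments` roughly as follows (a sketch; `output_dir` is an assumption and the original training script is not part of this card):

```python
# Rough mapping of the hyperparameters listed above onto TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gpt2-medium-emailgen",  # assumed output path
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.02,
    num_train_epochs=3,
)
```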
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8701 | 1.0 | 789 | 1.8378 |
| 1.5065 | 2.0 | 1578 | 1.6176 |
| 1.1873 | 3.0 | 2367 | 1.5840 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.10.0+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_postbot__gpt2-medium-emailgen)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 25.97 |
| ARC (25-shot) | 26.45 |
| HellaSwag (10-shot) | 34.31 |
| MMLU (5-shot) | 24.1 |
| TruthfulQA (0-shot) | 43.96 |
| Winogrande (5-shot) | 50.43 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 2.53 |
|
RichardErkhov/postbot_-_gpt2-medium-emailgen-4bits
|
RichardErkhov
| 2024-11-12T15:14:10Z | 5 | 0 | null |
[
"safetensors",
"gpt2",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-11-12T15:13:56Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt2-medium-emailgen - bnb 4bits
- Model creator: https://huggingface.co/postbot/
- Original model: https://huggingface.co/postbot/gpt2-medium-emailgen/
Original model description:
---
license:
- apache-2.0
tags:
- text generation
- emailgen
- email generation
- email
datasets:
- aeslc
- postbot/multi-emails-100k
widget:
- text: "Good Morning Professor Beans,
Hope you are doing well. I just wanted to reach out and ask if differential calculus will be on the exam"
example_title: "email to prof"
- text: "Hey <NAME>,\n\nThank you for signing up for my weekly newsletter. Before we get started, you'll have to confirm your email address."
example_title: "newsletter"
- text: "Hi <NAME>,\n\nI hope this email finds you well. I wanted to reach out and ask about office hours"
example_title: "office hours"
- text: "Greetings <NAME>,\n\nI hope you had a splendid evening at the Company sausage eating festival. I am reaching out because"
example_title: "festival"
- text: "Good Morning Harold,\n\nI was wondering when the next"
example_title: "event"
- text: "URGENT - I need the TPS reports"
example_title: "URGENT"
- text: "Hi Archibald,\n\nI hope this email finds you extremely well."
example_title: "emails that find you"
- text: "Hello there.\n\nI just wanted to reach out and check in to"
example_title: "checking in"
- text: "Hello <NAME>,\n\nI hope this email finds you well. I wanted to reach out and see if you've enjoyed your time with us"
example_title: "work well"
- text: "Hi <NAME>,\n\nI hope this email finds you well. I wanted to reach out and see if we could catch up"
example_title: "catch up"
- text: "I'm <NAME> and I just moved into the area and wanted to reach out and get some details on where I could get groceries and"
example_title: "grocery"
parameters:
min_length: 32
max_length: 128
no_repeat_ngram_size: 2
do_sample: True
temperature: 0.3
top_k: 20
top_p: 0.95
repetition_penalty: 3.5
length_penalty: 0.9
---
# gpt2-medium-emailgen
[](https://colab.research.google.com/gist/pszemraj/70058788c6d4b430398c12ee8ba10602/minimal-demo-for-postbot-gpt2-medium-emailgen.ipynb
)
Why write the entire email when you can generate (most of) it?
```python
from transformers import pipeline
model_tag = "postbot/gpt2-medium-emailgen"
generator = pipeline(
'text-generation',
model=model_tag,
)
prompt = """
Hello,
Following up on the bubblegum shipment."""
result = generator(
prompt,
max_length=64,
do_sample=False,
early_stopping=True,
) # generate
print(result[0]['generated_text'])
```
## about
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the postbot/multi-emails-100k dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5840
## Model description
More information needed
## Intended uses & limitations
- This is intended as a tool to save time writing predictable emails, not to write emails without a human in the loop. Validate that your email is factually correct before sending it to others.
## Training and evaluation data
- The dataset is essentially a hand-curated/augmented expansion of the classic `aeslc` dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8701 | 1.0 | 789 | 1.8378 |
| 1.5065 | 2.0 | 1578 | 1.6176 |
| 1.1873 | 3.0 | 2367 | 1.5840 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.10.0+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_postbot__gpt2-medium-emailgen)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 25.97 |
| ARC (25-shot) | 26.45 |
| HellaSwag (10-shot) | 34.31 |
| MMLU (5-shot) | 24.1 |
| TruthfulQA (0-shot) | 43.96 |
| Winogrande (5-shot) | 50.43 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 2.53 |
|
AnonymousCS/HateBERT
|
AnonymousCS
| 2024-11-12T15:13:56Z | 164 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-16T22:45:10Z |
---
library_name: transformers
tags: []
---
|
RichardErkhov/cocoirun_-_Yi-Ko-6B-instruct-v1.4-4bits
|
RichardErkhov
| 2024-11-12T15:13:54Z | 5 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-11-12T15:08:52Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Yi-Ko-6B-instruct-v1.4 - bnb 4bits
- Model creator: https://huggingface.co/cocoirun/
- Original model: https://huggingface.co/cocoirun/Yi-Ko-6B-instruct-v1.4/
Original model description:
---
license: cc-by-sa-4.0
---
<h1>instruct model v1.4</h1>
<b><Training data construction></b>
After analyzing the Open-Orca-ko data and extracting its tasks, we built roughly 40,000 training examples ourselves (history, science, math, machine reading comprehension, review analysis) from open-source NLP data matched to those tasks,
and in addition we filtered and cleaned part of the Open-Orca-Ko data and added KoBEST data.
Further training data was built from AIHub general-knowledge and machine reading comprehension data (morphology, machine reading comprehension, and summarization).
History and general-knowledge quizzes from various blogs were manually converted into training-data form.
AI2AI Challenge data was translated with Papago, and mistranslated parts were corrected by hand.
English translation data (English-Korean / Korean-English) was also used as training data.
SFT was carried out on a total of 110,000 training examples.
<br>
Currently, part of the Open-Orca dataset is being translated and cleaned to train and improve a new version of the model.
<br>
+ Added high-school history questions and TruthfulQA-related questions.
+ Added various IT knowledge data.
+ Machine reading comprehension training data was built by obtaining answers through ChatGPT.
+ Grammar-related training data.
<br>
### The training data files are not publicly released.
<br>
<b><Training></b>
Training was performed with LoRA on 2x A100 40G GPUs.
|
exala/db_aca2_6.2
|
exala
| 2024-11-12T15:13:34Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-12T15:13:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/TouchstoneGPT-7B-Instruct-GGUF
|
mradermacher
| 2024-11-12T15:07:00Z | 20 | 0 |
transformers
|
[
"transformers",
"gguf",
"finance",
"text-generation-inference",
"en",
"zh",
"dataset:IDEA-FinAI/Golden-Touchstone",
"base_model:IDEA-FinAI/TouchstoneGPT-7B-Instruct",
"base_model:quantized:IDEA-FinAI/TouchstoneGPT-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-09T00:36:28Z |
---
base_model: IDEA-FinAI/TouchstoneGPT-7B-Instruct
datasets:
- IDEA-FinAI/Golden-Touchstone
language:
- en
- zh
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- finance
- text-generation-inference
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/IDEA-FinAI/TouchstoneGPT-7B-Instruct
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
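As a concrete sketch (not from the original card), one of the files listed in the table below can be downloaded and run with `llama-cpp-python`:

```python
# Sketch: fetch one of the provided quants and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/TouchstoneGPT-7B-Instruct-GGUF",
    filename="TouchstoneGPT-7B-Instruct.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
print(llm("What is a balance sheet?", max_tokens=128)["choices"][0]["text"])
```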
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TouchstoneGPT-7B-Instruct-GGUF/resolve/main/TouchstoneGPT-7B-Instruct.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/TouchstoneGPT-7B-Instruct-GGUF/resolve/main/TouchstoneGPT-7B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/TouchstoneGPT-7B-Instruct-GGUF/resolve/main/TouchstoneGPT-7B-Instruct.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TouchstoneGPT-7B-Instruct-GGUF/resolve/main/TouchstoneGPT-7B-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/TouchstoneGPT-7B-Instruct-GGUF/resolve/main/TouchstoneGPT-7B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/TouchstoneGPT-7B-Instruct-GGUF/resolve/main/TouchstoneGPT-7B-Instruct.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/TouchstoneGPT-7B-Instruct-GGUF/resolve/main/TouchstoneGPT-7B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TouchstoneGPT-7B-Instruct-GGUF/resolve/main/TouchstoneGPT-7B-Instruct.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TouchstoneGPT-7B-Instruct-GGUF/resolve/main/TouchstoneGPT-7B-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/TouchstoneGPT-7B-Instruct-GGUF/resolve/main/TouchstoneGPT-7B-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/TouchstoneGPT-7B-Instruct-GGUF/resolve/main/TouchstoneGPT-7B-Instruct.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/TouchstoneGPT-7B-Instruct-GGUF/resolve/main/TouchstoneGPT-7B-Instruct.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/TouchstoneGPT-7B-Instruct-GGUF/resolve/main/TouchstoneGPT-7B-Instruct.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1-GGUF
|
mradermacher
| 2024-11-12T15:05:41Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:lalainy/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1",
"base_model:quantized:lalainy/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-09T23:26:21Z |
---
base_model: lalainy/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/lalainy/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1
<!-- provided-files -->
weighted/imatrix quants do not appear to be available from me at this time. If they do not show up within a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1-GGUF/resolve/main/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1-GGUF/resolve/main/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1-GGUF/resolve/main/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1-GGUF/resolve/main/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1.Q4_0_4_4.gguf) | Q4_0_4_4 | 0.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1-GGUF/resolve/main/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1-GGUF/resolve/main/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1-GGUF/resolve/main/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1-GGUF/resolve/main/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1-GGUF/resolve/main/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1-GGUF/resolve/main/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1-GGUF/resolve/main/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1-GGUF/resolve/main/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1-GGUF/resolve/main/ECE-PRYMMAL-YL-0.5B-SLERP-BIS-V1.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Mistral-7B-Instruct-v0.2-DARE-GGUF
|
mradermacher
| 2024-11-12T15:03:15Z | 15 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:jan-hq/Mistral-7B-Instruct-v0.2-DARE",
"base_model:quantized:jan-hq/Mistral-7B-Instruct-v0.2-DARE",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-11T19:07:56Z |
---
base_model: jan-hq/Mistral-7B-Instruct-v0.2-DARE
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jan-hq/Mistral-7B-Instruct-v0.2-DARE
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-DARE-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-DARE-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-DARE.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-DARE-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-DARE.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-DARE-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-DARE.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-DARE-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-DARE.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-DARE-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-DARE.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-DARE-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-DARE.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-DARE-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-DARE.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-DARE-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-DARE.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-DARE-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-DARE.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-DARE-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-DARE.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-DARE-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-DARE.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-DARE-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-DARE.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Instruct-v0.2-DARE-GGUF/resolve/main/Mistral-7B-Instruct-v0.2-DARE.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF
|
mradermacher
| 2024-11-12T15:02:50Z | 24 | 0 |
transformers
|
[
"transformers",
"gguf",
"merge",
"en",
"base_model:brucethemoose/Capybara-Tess-Yi-34B-200K-DARE-Ties",
"base_model:quantized:brucethemoose/Capybara-Tess-Yi-34B-200K-DARE-Ties",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-11-11T22:08:38Z |
---
base_model: brucethemoose/Capybara-Tess-Yi-34B-200K-DARE-Ties
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
license_name: yi-license
quantized_by: mradermacher
tags:
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/brucethemoose/Capybara-Tess-Yi-34B-200K-DARE-Ties
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-DARE-Ties-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K-DARE-Ties.Q2_K.gguf) | Q2_K | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K-DARE-Ties.Q3_K_S.gguf) | Q3_K_S | 15.1 | |
| [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K-DARE-Ties.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K-DARE-Ties.Q3_K_L.gguf) | Q3_K_L | 18.2 | |
| [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K-DARE-Ties.IQ4_XS.gguf) | IQ4_XS | 18.7 | |
| [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K-DARE-Ties.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K-DARE-Ties.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K-DARE-Ties.Q5_K_S.gguf) | Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K-DARE-Ties.Q5_K_M.gguf) | Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K-DARE-Ties.Q6_K.gguf) | Q6_K | 28.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF/resolve/main/Capybara-Tess-Yi-34B-200K-DARE-Ties.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/GutenBerg_Nyxora_magnum-v4-27b-GGUF
|
mradermacher
| 2024-11-12T15:01:33Z | 30 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mergekit-community/GutenBerg_Nyxora_magnum-v4-27b",
"base_model:quantized:mergekit-community/GutenBerg_Nyxora_magnum-v4-27b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-12T03:37:31Z |
---
base_model: mergekit-community/GutenBerg_Nyxora_magnum-v4-27b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mergekit-community/GutenBerg_Nyxora_magnum-v4-27b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/GutenBerg_Nyxora_magnum-v4-27b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GutenBerg_Nyxora_magnum-v4-27b-GGUF/resolve/main/GutenBerg_Nyxora_magnum-v4-27b.Q2_K.gguf) | Q2_K | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/GutenBerg_Nyxora_magnum-v4-27b-GGUF/resolve/main/GutenBerg_Nyxora_magnum-v4-27b.Q3_K_S.gguf) | Q3_K_S | 12.3 | |
| [GGUF](https://huggingface.co/mradermacher/GutenBerg_Nyxora_magnum-v4-27b-GGUF/resolve/main/GutenBerg_Nyxora_magnum-v4-27b.Q3_K_M.gguf) | Q3_K_M | 13.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GutenBerg_Nyxora_magnum-v4-27b-GGUF/resolve/main/GutenBerg_Nyxora_magnum-v4-27b.Q3_K_L.gguf) | Q3_K_L | 14.6 | |
| [GGUF](https://huggingface.co/mradermacher/GutenBerg_Nyxora_magnum-v4-27b-GGUF/resolve/main/GutenBerg_Nyxora_magnum-v4-27b.IQ4_XS.gguf) | IQ4_XS | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/GutenBerg_Nyxora_magnum-v4-27b-GGUF/resolve/main/GutenBerg_Nyxora_magnum-v4-27b.Q4_K_S.gguf) | Q4_K_S | 15.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GutenBerg_Nyxora_magnum-v4-27b-GGUF/resolve/main/GutenBerg_Nyxora_magnum-v4-27b.Q4_K_M.gguf) | Q4_K_M | 16.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GutenBerg_Nyxora_magnum-v4-27b-GGUF/resolve/main/GutenBerg_Nyxora_magnum-v4-27b.Q5_K_S.gguf) | Q5_K_S | 19.0 | |
| [GGUF](https://huggingface.co/mradermacher/GutenBerg_Nyxora_magnum-v4-27b-GGUF/resolve/main/GutenBerg_Nyxora_magnum-v4-27b.Q5_K_M.gguf) | Q5_K_M | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/GutenBerg_Nyxora_magnum-v4-27b-GGUF/resolve/main/GutenBerg_Nyxora_magnum-v4-27b.Q6_K.gguf) | Q6_K | 22.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/GutenBerg_Nyxora_magnum-v4-27b-GGUF/resolve/main/GutenBerg_Nyxora_magnum-v4-27b.Q8_0.gguf) | Q8_0 | 29.0 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Qwen2.5-Coder-14B-GGUF
|
mradermacher
| 2024-11-12T15:01:25Z | 300 | 0 |
transformers
|
[
"transformers",
"gguf",
"code",
"qwen",
"qwen-coder",
"codeqwen",
"en",
"base_model:Qwen/Qwen2.5-Coder-14B",
"base_model:quantized:Qwen/Qwen2.5-Coder-14B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-12T04:21:31Z |
---
base_model: Qwen/Qwen2.5-Coder-14B
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-14B/blob/main/LICENSE
no_imatrix: nan detected in blk.47.attn_q.weight
quantized_by: mradermacher
tags:
- code
- qwen
- qwen-coder
- codeqwen
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Qwen/Qwen2.5-Coder-14B
<!-- provided-files -->
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-GGUF/resolve/main/Qwen2.5-Coder-14B.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-GGUF/resolve/main/Qwen2.5-Coder-14B.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-GGUF/resolve/main/Qwen2.5-Coder-14B.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-GGUF/resolve/main/Qwen2.5-Coder-14B.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-GGUF/resolve/main/Qwen2.5-Coder-14B.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-GGUF/resolve/main/Qwen2.5-Coder-14B.Q4_0_4_4.gguf) | Q4_0_4_4 | 8.6 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-GGUF/resolve/main/Qwen2.5-Coder-14B.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-GGUF/resolve/main/Qwen2.5-Coder-14B.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-GGUF/resolve/main/Qwen2.5-Coder-14B.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-GGUF/resolve/main/Qwen2.5-Coder-14B.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-GGUF/resolve/main/Qwen2.5-Coder-14B.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-GGUF/resolve/main/Qwen2.5-Coder-14B.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Qwen2.5-Coder-14B-Instruct-GGUF
|
mradermacher
| 2024-11-12T15:01:18Z | 84 | 0 |
transformers
|
[
"transformers",
"gguf",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"en",
"base_model:Qwen/Qwen2.5-Coder-14B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-14B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-12T04:52:35Z |
---
base_model: Qwen/Qwen2.5-Coder-14B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct/blob/main/LICENSE
no_imatrix: nan detected in blk.47.attn_q.weight
quantized_by: mradermacher
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct
<!-- provided-files -->
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-14B-Instruct.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-14B-Instruct.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-14B-Instruct.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-14B-Instruct.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-14B-Instruct.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-14B-Instruct.Q4_0_4_4.gguf) | Q4_0_4_4 | 8.6 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-14B-Instruct.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-14B-Instruct.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-14B-Instruct.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-14B-Instruct.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-14B-Instruct.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-14B-Instruct.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Qwen2.5-Coder-3B-GGUF
|
mradermacher
| 2024-11-12T15:00:59Z | 47 | 0 |
transformers
|
[
"transformers",
"gguf",
"code",
"qwen",
"qwen-coder",
"codeqwen",
"en",
"base_model:Qwen/Qwen2.5-Coder-3B",
"base_model:quantized:Qwen/Qwen2.5-Coder-3B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-12T05:49:02Z |
---
base_model: Qwen/Qwen2.5-Coder-3B
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-3B/blob/main/LICENSE
license_name: qwen-research
quantized_by: mradermacher
tags:
- code
- qwen
- qwen-coder
- codeqwen
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Qwen/Qwen2.5-Coder-3B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-Coder-3B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-3B-GGUF/resolve/main/Qwen2.5-Coder-3B.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-3B-GGUF/resolve/main/Qwen2.5-Coder-3B.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-3B-GGUF/resolve/main/Qwen2.5-Coder-3B.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-3B-GGUF/resolve/main/Qwen2.5-Coder-3B.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-3B-GGUF/resolve/main/Qwen2.5-Coder-3B.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-3B-GGUF/resolve/main/Qwen2.5-Coder-3B.Q4_0_4_4.gguf) | Q4_0_4_4 | 1.9 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-3B-GGUF/resolve/main/Qwen2.5-Coder-3B.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-3B-GGUF/resolve/main/Qwen2.5-Coder-3B.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-3B-GGUF/resolve/main/Qwen2.5-Coder-3B.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-3B-GGUF/resolve/main/Qwen2.5-Coder-3B.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-3B-GGUF/resolve/main/Qwen2.5-Coder-3B.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-3B-GGUF/resolve/main/Qwen2.5-Coder-3B.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-3B-GGUF/resolve/main/Qwen2.5-Coder-3B.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Qwen2.5-Coder-7B-Instruct-GGUF
|
mradermacher
| 2024-11-12T15:00:52Z | 48 | 0 |
transformers
|
[
"transformers",
"gguf",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"en",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-12T06:07:03Z |
---
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct/blob/main/LICENSE
quantized_by: mradermacher
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-7B-Instruct.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Qwen2.5-Coder-7B-GGUF
|
mradermacher
| 2024-11-12T14:59:48Z | 114 | 0 |
transformers
|
[
"transformers",
"gguf",
"code",
"qwen",
"qwen-coder",
"codeqwen",
"en",
"base_model:Qwen/Qwen2.5-Coder-7B",
"base_model:quantized:Qwen/Qwen2.5-Coder-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-12T05:54:56Z |
---
base_model: Qwen/Qwen2.5-Coder-7B
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-7B/blob/main/LICENSE
quantized_by: mradermacher
tags:
- code
- qwen
- qwen-coder
- codeqwen
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Qwen/Qwen2.5-Coder-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-GGUF/resolve/main/Qwen2.5-Coder-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-GGUF/resolve/main/Qwen2.5-Coder-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-GGUF/resolve/main/Qwen2.5-Coder-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-GGUF/resolve/main/Qwen2.5-Coder-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-GGUF/resolve/main/Qwen2.5-Coder-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-GGUF/resolve/main/Qwen2.5-Coder-7B.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-GGUF/resolve/main/Qwen2.5-Coder-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-GGUF/resolve/main/Qwen2.5-Coder-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-GGUF/resolve/main/Qwen2.5-Coder-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-GGUF/resolve/main/Qwen2.5-Coder-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-GGUF/resolve/main/Qwen2.5-Coder-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-GGUF/resolve/main/Qwen2.5-Coder-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-GGUF/resolve/main/Qwen2.5-Coder-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Qwen2.5-Coder-7B-i1-GGUF
|
mradermacher
| 2024-11-12T14:59:48Z | 98 | 0 |
transformers
|
[
"transformers",
"gguf",
"code",
"qwen",
"qwen-coder",
"codeqwen",
"en",
"base_model:Qwen/Qwen2.5-Coder-7B",
"base_model:quantized:Qwen/Qwen2.5-Coder-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-12T13:49:15Z |
---
base_model: Qwen/Qwen2.5-Coder-7B
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-7B/blob/main/LICENSE
quantized_by: mradermacher
tags:
- code
- qwen
- qwen-coder
- codeqwen
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Qwen/Qwen2.5-Coder-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-i1-GGUF/resolve/main/Qwen2.5-Coder-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-i1-GGUF/resolve/main/Qwen2.5-Coder-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-i1-GGUF/resolve/main/Qwen2.5-Coder-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-i1-GGUF/resolve/main/Qwen2.5-Coder-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-i1-GGUF/resolve/main/Qwen2.5-Coder-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-i1-GGUF/resolve/main/Qwen2.5-Coder-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-i1-GGUF/resolve/main/Qwen2.5-Coder-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-i1-GGUF/resolve/main/Qwen2.5-Coder-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-i1-GGUF/resolve/main/Qwen2.5-Coder-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-i1-GGUF/resolve/main/Qwen2.5-Coder-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-i1-GGUF/resolve/main/Qwen2.5-Coder-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-i1-GGUF/resolve/main/Qwen2.5-Coder-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-i1-GGUF/resolve/main/Qwen2.5-Coder-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-i1-GGUF/resolve/main/Qwen2.5-Coder-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-i1-GGUF/resolve/main/Qwen2.5-Coder-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-i1-GGUF/resolve/main/Qwen2.5-Coder-7B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-i1-GGUF/resolve/main/Qwen2.5-Coder-7B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-i1-GGUF/resolve/main/Qwen2.5-Coder-7B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-i1-GGUF/resolve/main/Qwen2.5-Coder-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-i1-GGUF/resolve/main/Qwen2.5-Coder-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-i1-GGUF/resolve/main/Qwen2.5-Coder-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-i1-GGUF/resolve/main/Qwen2.5-Coder-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-i1-GGUF/resolve/main/Qwen2.5-Coder-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-7B-i1-GGUF/resolve/main/Qwen2.5-Coder-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Qwen2.5-Coder-1.5B-Instruct-i1-GGUF
|
mradermacher
| 2024-11-12T14:58:09Z | 34 | 0 |
transformers
|
[
"transformers",
"gguf",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"en",
"base_model:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-12T14:42:45Z |
---
base_model: Qwen/Qwen2.5-Coder-1.5B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct/blob/main/LICENSE
quantized_by: mradermacher
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 0.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 0.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 1.0 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 1.0 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 1.0 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 1.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 1.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
xared/Kaufland_llama_3_2_1B
|
xared
| 2024-11-12T14:56:34Z | 8 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:quantized:unsloth/Llama-3.2-1B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-12T14:54:36Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/Llama-3.2-1B-Instruct
---
# Uploaded model
- **Developed by:** xared
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-1B-Instruct
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
zelk12/MT2-Gen2-MM-gemma-2-Rv0.4RAv0.1t0.25-9B
|
zelk12
| 2024-11-12T14:54:57Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:recoilme/recoilme-gemma-2-9B-v0.4",
"base_model:merge:recoilme/recoilme-gemma-2-9B-v0.4",
"base_model:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25",
"base_model:merge:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-12T14:48:33Z |
---
base_model:
- recoilme/recoilme-gemma-2-9B-v0.4
- zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [recoilme/recoilme-gemma-2-9B-v0.4](https://huggingface.co/recoilme/recoilme-gemma-2-9B-v0.4)
* [zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25](https://huggingface.co/zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: recoilme/recoilme-gemma-2-9B-v0.4
  - model: zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
merge_method: slerp
base_model: recoilme/recoilme-gemma-2-9B-v0.4
dtype: bfloat16
parameters:
  t: 0.25
```
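For reference, here is a minimal sketch of how a configuration like this is typically applied, assuming the `mergekit` package is installed; the config file name and output directory are placeholders.

```python
import subprocess
from pathlib import Path

# Assumes the YAML above has been saved to a local file; mergekit's
# `mergekit-yaml` entry point reads the config and writes the merged
# model to the given output directory.
config_path = Path("slerp_config.yaml")  # hypothetical file holding the config above
subprocess.run(["mergekit-yaml", str(config_path), "./merged-model"], check=True)
```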
|
RichardErkhov/MikeMpapa_-_4_bar_lmd_clean_custom_test3-8bits
|
RichardErkhov
| 2024-11-12T14:53:32Z | 5 | 0 | null |
[
"safetensors",
"gpt2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-11-12T14:53:17Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
4_bar_lmd_clean_custom_test3 - bnb 8bits
- Model creator: https://huggingface.co/MikeMpapa/
- Original model: https://huggingface.co/MikeMpapa/4_bar_lmd_clean_custom_test3/
Original model description:
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: 4_bar_lmd_clean_custom_test3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 4_bar_lmd_clean_custom_test3
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4912
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of how they might map onto `TrainingArguments` follows the list):
- learning_rate: 0.005
- train_batch_size: 48
- eval_batch_size: 32
- seed: 1
- gradient_accumulation_steps: 2
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
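For orientation, here is a rough sketch of how these values might be expressed with `transformers.TrainingArguments`; whether the original run used `TrainingArguments` directly is an assumption, and the Adam betas/epsilon listed above match the library defaults.

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
args = TrainingArguments(
    output_dir="4_bar_lmd_clean_custom_test3",
    learning_rate=5e-3,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=32,
    seed=1,
    gradient_accumulation_steps=2,   # total train batch size 96
    lr_scheduler_type="cosine",
    warmup_ratio=0.01,
    num_train_epochs=100,
)
```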
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.8709 | 1.82 | 10 | 5.7363 |
| 5.6849 | 3.64 | 20 | 5.4321 |
| 5.4501 | 5.45 | 30 | 5.3610 |
| 5.359 | 7.27 | 40 | 5.2833 |
| 5.278 | 9.09 | 50 | 5.1274 |
| 5.1335 | 10.91 | 60 | 5.0075 |
| 5.0548 | 12.73 | 70 | 4.9488 |
| 4.958 | 14.55 | 80 | 4.8213 |
| 4.8511 | 16.36 | 90 | 4.7643 |
| 4.8158 | 18.18 | 100 | 4.7202 |
| 4.7548 | 20.0 | 110 | 4.6591 |
| 4.7269 | 21.82 | 120 | 4.6380 |
| 4.6823 | 23.64 | 130 | 4.6200 |
| 4.6757 | 25.45 | 140 | 4.6081 |
| 4.629 | 27.27 | 150 | 4.6285 |
| 4.6398 | 29.09 | 160 | 4.6024 |
| 4.6111 | 30.91 | 170 | 4.6235 |
| 4.6028 | 32.73 | 180 | 4.5945 |
| 4.577 | 34.55 | 190 | 4.5932 |
| 4.5812 | 36.36 | 200 | 4.5689 |
| 4.5583 | 38.18 | 210 | 4.5713 |
| 4.5567 | 40.0 | 220 | 4.5731 |
| 4.55 | 41.82 | 230 | 4.5619 |
| 4.5338 | 43.64 | 240 | 4.5656 |
| 4.5245 | 45.45 | 250 | 4.5494 |
| 4.5143 | 47.27 | 260 | 4.5578 |
| 4.5339 | 49.09 | 270 | 4.5489 |
| 4.4948 | 50.91 | 280 | 4.5746 |
| 4.5 | 52.73 | 290 | 4.5407 |
| 4.4755 | 54.55 | 300 | 4.5448 |
| 4.4736 | 56.36 | 310 | 4.5311 |
| 4.4584 | 58.18 | 320 | 4.5279 |
| 4.465 | 60.0 | 330 | 4.5339 |
| 4.4511 | 61.82 | 340 | 4.5326 |
| 4.4408 | 63.64 | 350 | 4.5163 |
| 4.4314 | 65.45 | 360 | 4.5193 |
| 4.417 | 67.27 | 370 | 4.5161 |
| 4.424 | 69.09 | 380 | 4.5027 |
| 4.4147 | 70.91 | 390 | 4.5044 |
| 4.3938 | 72.73 | 400 | 4.5012 |
| 4.4001 | 74.55 | 410 | 4.5037 |
| 4.3821 | 76.36 | 420 | 4.5006 |
| 4.383 | 78.18 | 430 | 4.4981 |
| 4.3893 | 80.0 | 440 | 4.4942 |
| 4.3684 | 81.82 | 450 | 4.4927 |
| 4.3788 | 83.64 | 460 | 4.4933 |
| 4.3836 | 85.45 | 470 | 4.4929 |
| 4.3766 | 87.27 | 480 | 4.4917 |
| 4.3871 | 89.09 | 490 | 4.4912 |
| 4.3725 | 90.91 | 500 | 4.4912 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0
- Datasets 2.15.0
- Tokenizers 0.15.1
|
RikvanSchaick/bert-finetuned-ner_trial_base
|
RikvanSchaick
| 2024-11-12T14:51:30Z | 107 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-12T14:10:56Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-ner_trial_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner_trial_base
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unspecified dataset.
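A minimal inference sketch is shown below, assuming the checkpoint is used through the `transformers` token-classification pipeline; the example sentence and aggregation strategy are illustrative only.

```python
from transformers import pipeline

# Sketch only: load this repo's checkpoint for NER-style token classification.
ner = pipeline(
    "token-classification",
    model="RikvanSchaick/bert-finetuned-ner_trial_base",
    aggregation_strategy="simple",  # groups sub-word pieces into whole entities
)

print(ner("Anne works for a research group in Amsterdam."))
```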
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 249 | 0.3021 | 0.3275 | 0.3065 | 0.3166 | 0.9256 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
mradermacher/Tissint-14B-v1.1-128k-RP-i1-GGUF
|
mradermacher
| 2024-11-12T14:51:20Z | 652 | 1 |
transformers
|
[
"transformers",
"gguf",
"unsloth",
"trl",
"sft",
"en",
"base_model:Ttimofeyka/Tissint-14B-v1.1-128k-RP",
"base_model:quantized:Ttimofeyka/Tissint-14B-v1.1-128k-RP",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-12T14:03:16Z |
---
base_model: Ttimofeyka/Tissint-14B-v1.1-128k-RP
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- unsloth
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Ttimofeyka/Tissint-14B-v1.1-128k-RP
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Tissint-14B-v1.1-128k-RP-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
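As a concrete starting point, here is a minimal Python sketch using the `llama-cpp-python` bindings (one of several GGUF runtimes); the quant file name is taken from the table that follows.

```python
# Minimal sketch, assuming the llama-cpp-python package is installed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Tissint-14B-v1.1-128k-RP-i1-GGUF",
    filename="Tissint-14B-v1.1-128k-RP.i1-Q4_K_M.gguf",  # "fast, recommended" quant
)
llm = Llama(model_path=gguf_path, n_ctx=4096)  # raise n_ctx if you need the long context
print(llm("Once upon a time,", max_tokens=64)["choices"][0]["text"])
```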
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Tissint-14B-v1.1-128k-RP-i1-GGUF/resolve/main/Tissint-14B-v1.1-128k-RP.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Tissint-14B-v1.1-128k-RP-i1-GGUF/resolve/main/Tissint-14B-v1.1-128k-RP.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Tissint-14B-v1.1-128k-RP-i1-GGUF/resolve/main/Tissint-14B-v1.1-128k-RP.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Tissint-14B-v1.1-128k-RP-i1-GGUF/resolve/main/Tissint-14B-v1.1-128k-RP.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Tissint-14B-v1.1-128k-RP-i1-GGUF/resolve/main/Tissint-14B-v1.1-128k-RP.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Tissint-14B-v1.1-128k-RP-i1-GGUF/resolve/main/Tissint-14B-v1.1-128k-RP.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Tissint-14B-v1.1-128k-RP-i1-GGUF/resolve/main/Tissint-14B-v1.1-128k-RP.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Tissint-14B-v1.1-128k-RP-i1-GGUF/resolve/main/Tissint-14B-v1.1-128k-RP.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Tissint-14B-v1.1-128k-RP-i1-GGUF/resolve/main/Tissint-14B-v1.1-128k-RP.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/Tissint-14B-v1.1-128k-RP-i1-GGUF/resolve/main/Tissint-14B-v1.1-128k-RP.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Tissint-14B-v1.1-128k-RP-i1-GGUF/resolve/main/Tissint-14B-v1.1-128k-RP.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Tissint-14B-v1.1-128k-RP-i1-GGUF/resolve/main/Tissint-14B-v1.1-128k-RP.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Tissint-14B-v1.1-128k-RP-i1-GGUF/resolve/main/Tissint-14B-v1.1-128k-RP.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Tissint-14B-v1.1-128k-RP-i1-GGUF/resolve/main/Tissint-14B-v1.1-128k-RP.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Tissint-14B-v1.1-128k-RP-i1-GGUF/resolve/main/Tissint-14B-v1.1-128k-RP.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Tissint-14B-v1.1-128k-RP-i1-GGUF/resolve/main/Tissint-14B-v1.1-128k-RP.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 8.6 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Tissint-14B-v1.1-128k-RP-i1-GGUF/resolve/main/Tissint-14B-v1.1-128k-RP.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 8.6 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Tissint-14B-v1.1-128k-RP-i1-GGUF/resolve/main/Tissint-14B-v1.1-128k-RP.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 8.6 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Tissint-14B-v1.1-128k-RP-i1-GGUF/resolve/main/Tissint-14B-v1.1-128k-RP.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Tissint-14B-v1.1-128k-RP-i1-GGUF/resolve/main/Tissint-14B-v1.1-128k-RP.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Tissint-14B-v1.1-128k-RP-i1-GGUF/resolve/main/Tissint-14B-v1.1-128k-RP.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Tissint-14B-v1.1-128k-RP-i1-GGUF/resolve/main/Tissint-14B-v1.1-128k-RP.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Tissint-14B-v1.1-128k-RP-i1-GGUF/resolve/main/Tissint-14B-v1.1-128k-RP.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Tissint-14B-v1.1-128k-RP-i1-GGUF/resolve/main/Tissint-14B-v1.1-128k-RP.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
zelk12/MT2-Gen2-MU-gemma-2-Rv0.4N3N1532-9B
|
zelk12
| 2024-11-12T14:46:09Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:nhyha/N3N_gemma-2-9b-it_20241029_1532",
"base_model:merge:nhyha/N3N_gemma-2-9b-it_20241029_1532",
"base_model:recoilme/recoilme-gemma-2-9B-v0.4",
"base_model:merge:recoilme/recoilme-gemma-2-9B-v0.4",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-12T14:27:30Z |
---
base_model:
- nhyha/N3N_gemma-2-9b-it_20241029_1532
- recoilme/recoilme-gemma-2-9B-v0.4
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [nhyha/N3N_gemma-2-9b-it_20241029_1532](https://huggingface.co/nhyha/N3N_gemma-2-9b-it_20241029_1532)
* [recoilme/recoilme-gemma-2-9B-v0.4](https://huggingface.co/recoilme/recoilme-gemma-2-9B-v0.4)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: recoilme/recoilme-gemma-2-9B-v0.4
- model: nhyha/N3N_gemma-2-9b-it_20241029_1532
merge_method: slerp
base_model: recoilme/recoilme-gemma-2-9B-v0.4
dtype: bfloat16
parameters:
t: 0.25
```
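For intuition, the sketch below illustrates the SLERP formula that this configuration applies per tensor; it is not mergekit's exact implementation. Conventionally `t = 0` returns the base model's weights and `t = 1` the other model's, so `t: 0.25` keeps the merge closer to the base model.

```python
# Illustrative SLERP sketch (not mergekit's actual code).
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors, treated as flat vectors."""
    v0f, v1f = v0.flatten().float(), v1.flatten().float()
    cos_omega = torch.dot(v0f / (v0f.norm() + eps), v1f / (v1f.norm() + eps)).clamp(-1.0, 1.0)
    omega = torch.arccos(cos_omega)   # angle between the two weight vectors
    so = torch.sin(omega)
    if so.abs() < eps:                # nearly colinear: fall back to linear interpolation
        return ((1.0 - t) * v0f + t * v1f).reshape(v0.shape)
    out = (torch.sin((1.0 - t) * omega) / so) * v0f + (torch.sin(t * omega) / so) * v1f
    return out.reshape(v0.shape)

merged = slerp(0.25, torch.randn(8, 8), torch.randn(8, 8))  # stays closer to the first tensor
```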
|
RichardErkhov/wkshin89_-_yi-ko-6b-instruct-test-v0.2-8bits
|
RichardErkhov
| 2024-11-12T14:44:48Z | 6 | 0 | null |
[
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-11-12T14:41:02Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
yi-ko-6b-instruct-test-v0.2 - bnb 8bits
- Model creator: https://huggingface.co/wkshin89/
- Original model: https://huggingface.co/wkshin89/yi-ko-6b-instruct-test-v0.2/
Original model description:
---
license: cc-by-nc-4.0
---
|
mradermacher/Qwen2.5-Coder-1.5B-GGUF
|
mradermacher
| 2024-11-12T14:43:23Z | 126 | 0 |
transformers
|
[
"transformers",
"gguf",
"code",
"qwen",
"qwen-coder",
"codeqwen",
"en",
"base_model:Qwen/Qwen2.5-Coder-1.5B",
"base_model:quantized:Qwen/Qwen2.5-Coder-1.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-12T05:39:04Z |
---
base_model: Qwen/Qwen2.5-Coder-1.5B
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B/blob/main/LICENSE
quantized_by: mradermacher
tags:
- code
- qwen
- qwen-coder
- codeqwen
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-GGUF/resolve/main/Qwen2.5-Coder-1.5B.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-GGUF/resolve/main/Qwen2.5-Coder-1.5B.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-GGUF/resolve/main/Qwen2.5-Coder-1.5B.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-GGUF/resolve/main/Qwen2.5-Coder-1.5B.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-GGUF/resolve/main/Qwen2.5-Coder-1.5B.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-GGUF/resolve/main/Qwen2.5-Coder-1.5B.Q4_0_4_4.gguf) | Q4_0_4_4 | 1.0 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-GGUF/resolve/main/Qwen2.5-Coder-1.5B.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-GGUF/resolve/main/Qwen2.5-Coder-1.5B.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-GGUF/resolve/main/Qwen2.5-Coder-1.5B.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-GGUF/resolve/main/Qwen2.5-Coder-1.5B.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-GGUF/resolve/main/Qwen2.5-Coder-1.5B.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-GGUF/resolve/main/Qwen2.5-Coder-1.5B.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-GGUF/resolve/main/Qwen2.5-Coder-1.5B.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
djuna/Q2.5-Partron-7B
|
djuna
| 2024-11-12T14:30:09Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Locutusque/StockQwen-2.5-7B",
"base_model:merge:Locutusque/StockQwen-2.5-7B",
"base_model:djuna/Q2.5-Fuppavy-7B",
"base_model:merge:djuna/Q2.5-Fuppavy-7B",
"base_model:fblgit/cybertron-v4-qw7B-MGS",
"base_model:merge:fblgit/cybertron-v4-qw7B-MGS",
"base_model:happzy2633/qwen2.5-7b-ins-v3",
"base_model:merge:happzy2633/qwen2.5-7b-ins-v3",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-08T11:02:31Z |
---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- Locutusque/StockQwen-2.5-7B
- djuna/Q2.5-Fuppavy-7B
- fblgit/cybertron-v4-qw7B-MGS
- happzy2633/qwen2.5-7b-ins-v3
model-index:
- name: Q2.5-Partron-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 73.21
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=djuna/Q2.5-Partron-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 35.26
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=djuna/Q2.5-Partron-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 0.08
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=djuna/Q2.5-Partron-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 6.38
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=djuna/Q2.5-Partron-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 11.07
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=djuna/Q2.5-Partron-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 36.47
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=djuna/Q2.5-Partron-7B
name: Open LLM Leaderboard
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the della merge method using [djuna/Q2.5-Fuppavy-7B](https://huggingface.co/djuna/Q2.5-Fuppavy-7B) as a base.
### Models Merged
The following models were included in the merge:
* [Locutusque/StockQwen-2.5-7B](https://huggingface.co/Locutusque/StockQwen-2.5-7B)
* [fblgit/cybertron-v4-qw7B-MGS](https://huggingface.co/fblgit/cybertron-v4-qw7B-MGS)
* [happzy2633/qwen2.5-7b-ins-v3](https://huggingface.co/happzy2633/qwen2.5-7b-ins-v3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Locutusque/StockQwen-2.5-7B
parameters:
weight: 0.5
density: 0.5
- model: happzy2633/qwen2.5-7b-ins-v3
parameters:
weight: 0.3
density: 1
- model: fblgit/cybertron-v4-qw7B-MGS
parameters:
weight: 1
density: 0.8
merge_method: della
base_model: djuna/Q2.5-Fuppavy-7B
parameters:
epsilon: 0.04
lambda: 1.05
dtype: float32
out_dtype: bfloat16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_djuna__Q2.5-Partron-7B)
| Metric |Value|
|-------------------|----:|
|Avg. |27.08|
|IFEval (0-Shot) |73.21|
|BBH (3-Shot) |35.26|
|MATH Lvl 5 (4-Shot)| 0.08|
|GPQA (0-shot) | 6.38|
|MuSR (0-shot) |11.07|
|MMLU-PRO (5-shot) |36.47|
|
mradermacher/Qwen2.5-Coder-0.5B-Instruct-GGUF
|
mradermacher
| 2024-11-12T14:28:09Z | 101 | 0 |
transformers
|
[
"transformers",
"gguf",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"en",
"base_model:Qwen/Qwen2.5-Coder-0.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-0.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-12T05:32:13Z |
---
base_model: Qwen/Qwen2.5-Coder-0.5B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B-Instruct/blob/main/LICENSE
quantized_by: mradermacher
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B-Instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-0.5B-Instruct.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-0.5B-Instruct.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-0.5B-Instruct.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-0.5B-Instruct.Q4_0_4_4.gguf) | Q4_0_4_4 | 0.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-0.5B-Instruct.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-0.5B-Instruct.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-0.5B-Instruct.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-0.5B-Instruct.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-0.5B-Instruct.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-0.5B-Instruct.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-0.5B-Instruct.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-0.5B-Instruct.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-0.5B-Instruct.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Qwen2.5-Coder-32B-Instruct-GGUF
|
mradermacher
| 2024-11-12T14:27:16Z | 109 | 0 |
transformers
|
[
"transformers",
"gguf",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"en",
"base_model:Qwen/Qwen2.5-Coder-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-12T03:09:47Z |
---
base_model: Qwen/Qwen2.5-Coder-32B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct/blob/main/LICENSE
quantized_by: mradermacher
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-Coder-32B-Instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-32B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-32B-Instruct.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-32B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-32B-Instruct.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-32B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-32B-Instruct.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-32B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-32B-Instruct.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-32B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-32B-Instruct.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-32B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-32B-Instruct.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-32B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-32B-Instruct.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-32B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-32B-Instruct.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-32B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-32B-Instruct.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-32B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-32B-Instruct.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-32B-Instruct-GGUF/resolve/main/Qwen2.5-Coder-32B-Instruct.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
featherless-ai-quants/TheSkullery-AbL3In-15B-GGUF
|
featherless-ai-quants
| 2024-11-12T14:25:37Z | 5 | 0 | null |
[
"gguf",
"text-generation",
"base_model:SteelStorage/AbL3In-15B",
"base_model:quantized:SteelStorage/AbL3In-15B",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-12T13:59:49Z |
---
base_model: TheSkullery/AbL3In-15B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# TheSkullery/AbL3In-15B GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [TheSkullery-AbL3In-15B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/TheSkullery-AbL3In-15B-GGUF/blob/main/TheSkullery-AbL3In-15B-IQ4_XS.gguf) | 7868.64 MB |
| Q2_K | [TheSkullery-AbL3In-15B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/TheSkullery-AbL3In-15B-GGUF/blob/main/TheSkullery-AbL3In-15B-Q2_K.gguf) | 5480.87 MB |
| Q3_K_L | [TheSkullery-AbL3In-15B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/TheSkullery-AbL3In-15B-GGUF/blob/main/TheSkullery-AbL3In-15B-Q3_K_L.gguf) | 7609.76 MB |
| Q3_K_M | [TheSkullery-AbL3In-15B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/TheSkullery-AbL3In-15B-GGUF/blob/main/TheSkullery-AbL3In-15B-Q3_K_M.gguf) | 7030.76 MB |
| Q3_K_S | [TheSkullery-AbL3In-15B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/TheSkullery-AbL3In-15B-GGUF/blob/main/TheSkullery-AbL3In-15B-Q3_K_S.gguf) | 6355.76 MB |
| Q4_K_M | [TheSkullery-AbL3In-15B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/TheSkullery-AbL3In-15B-GGUF/blob/main/TheSkullery-AbL3In-15B-Q4_K_M.gguf) | 8685.29 MB |
| Q4_K_S | [TheSkullery-AbL3In-15B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/TheSkullery-AbL3In-15B-GGUF/blob/main/TheSkullery-AbL3In-15B-Q4_K_S.gguf) | 8248.29 MB |
| Q5_K_M | [TheSkullery-AbL3In-15B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/TheSkullery-AbL3In-15B-GGUF/blob/main/TheSkullery-AbL3In-15B-Q5_K_M.gguf) | 10171.92 MB |
| Q5_K_S | [TheSkullery-AbL3In-15B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/TheSkullery-AbL3In-15B-GGUF/blob/main/TheSkullery-AbL3In-15B-Q5_K_S.gguf) | 9916.92 MB |
| Q6_K | [TheSkullery-AbL3In-15B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/TheSkullery-AbL3In-15B-GGUF/blob/main/TheSkullery-AbL3In-15B-Q6_K.gguf) | 11751.46 MB |
| Q8_0 | [TheSkullery-AbL3In-15B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/TheSkullery-AbL3In-15B-GGUF/blob/main/TheSkullery-AbL3In-15B-Q8_0.gguf) | 15218.13 MB |
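No usage example is included in this card; below is a minimal, hypothetical sketch that downloads one of the files above and runs it with the `llama-cpp-python` bindings (any GGUF runtime works).

```python
# Hypothetical sketch, assuming llama-cpp-python is installed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="featherless-ai-quants/TheSkullery-AbL3In-15B-GGUF",
    filename="TheSkullery-AbL3In-15B-Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm.create_chat_completion(  # recent llama-cpp-python versions apply the chat template stored in the GGUF metadata
    messages=[{"role": "user", "content": "Give me three ideas for a short story."}]
)
print(out["choices"][0]["message"]["content"])
```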
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
zelk12/MT2-Gen2-GP-gemma-2-RIv0.1MTM-9B
|
zelk12
| 2024-11-12T14:24:48Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT-Merge-gemma-2-9B",
"base_model:merge:zelk12/MT-Merge-gemma-2-9B",
"base_model:zelk12/recoilme-gemma-2-Ifable-9B-v0.1",
"base_model:merge:zelk12/recoilme-gemma-2-Ifable-9B-v0.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-12T14:18:28Z |
---
base_model:
- zelk12/MT-Merge-gemma-2-9B
- zelk12/recoilme-gemma-2-Ifable-9B-v0.1
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT-Merge-gemma-2-9B](https://huggingface.co/zelk12/MT-Merge-gemma-2-9B)
* [zelk12/recoilme-gemma-2-Ifable-9B-v0.1](https://huggingface.co/zelk12/recoilme-gemma-2-Ifable-9B-v0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/recoilme-gemma-2-Ifable-9B-v0.1
- model: zelk12/MT-Merge-gemma-2-9B
merge_method: slerp
base_model: zelk12/recoilme-gemma-2-Ifable-9B-v0.1
dtype: bfloat16
parameters:
t: 0.25
```
|
mradermacher/Qwen2.5-Coder-0.5B-i1-GGUF
|
mradermacher
| 2024-11-12T14:23:35Z | 56 | 0 |
transformers
|
[
"transformers",
"gguf",
"code",
"qwen",
"qwen-coder",
"codeqwen",
"en",
"base_model:Qwen/Qwen2.5-Coder-0.5B",
"base_model:quantized:Qwen/Qwen2.5-Coder-0.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-12T14:19:00Z |
---
base_model: Qwen/Qwen2.5-Coder-0.5B
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B/blob/main/LICENSE
quantized_by: mradermacher
tags:
- code
- qwen
- qwen-coder
- codeqwen
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-i1-GGUF/resolve/main/Qwen2.5-Coder-0.5B.i1-IQ1_S.gguf) | i1-IQ1_S | 0.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-i1-GGUF/resolve/main/Qwen2.5-Coder-0.5B.i1-IQ1_M.gguf) | i1-IQ1_M | 0.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-i1-GGUF/resolve/main/Qwen2.5-Coder-0.5B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-i1-GGUF/resolve/main/Qwen2.5-Coder-0.5B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-i1-GGUF/resolve/main/Qwen2.5-Coder-0.5B.i1-IQ2_S.gguf) | i1-IQ2_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-i1-GGUF/resolve/main/Qwen2.5-Coder-0.5B.i1-IQ2_M.gguf) | i1-IQ2_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-i1-GGUF/resolve/main/Qwen2.5-Coder-0.5B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-i1-GGUF/resolve/main/Qwen2.5-Coder-0.5B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-i1-GGUF/resolve/main/Qwen2.5-Coder-0.5B.i1-IQ3_S.gguf) | i1-IQ3_S | 0.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-i1-GGUF/resolve/main/Qwen2.5-Coder-0.5B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-i1-GGUF/resolve/main/Qwen2.5-Coder-0.5B.i1-Q2_K.gguf) | i1-Q2_K | 0.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-i1-GGUF/resolve/main/Qwen2.5-Coder-0.5B.i1-IQ3_M.gguf) | i1-IQ3_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-i1-GGUF/resolve/main/Qwen2.5-Coder-0.5B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-i1-GGUF/resolve/main/Qwen2.5-Coder-0.5B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 0.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-i1-GGUF/resolve/main/Qwen2.5-Coder-0.5B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 0.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-i1-GGUF/resolve/main/Qwen2.5-Coder-0.5B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 0.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-i1-GGUF/resolve/main/Qwen2.5-Coder-0.5B.i1-Q4_0.gguf) | i1-Q4_0 | 0.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-i1-GGUF/resolve/main/Qwen2.5-Coder-0.5B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-i1-GGUF/resolve/main/Qwen2.5-Coder-0.5B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-i1-GGUF/resolve/main/Qwen2.5-Coder-0.5B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-i1-GGUF/resolve/main/Qwen2.5-Coder-0.5B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-i1-GGUF/resolve/main/Qwen2.5-Coder-0.5B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-i1-GGUF/resolve/main/Qwen2.5-Coder-0.5B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-i1-GGUF/resolve/main/Qwen2.5-Coder-0.5B.i1-Q6_K.gguf) | i1-Q6_K | 0.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
MrFx/speecht5_finetuned_emirhan_tr
|
MrFx
| 2024-11-12T14:22:52Z | 81 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:common_voice_17_0",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2024-11-12T14:03:58Z |
---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- common_voice_17_0
model-index:
- name: speecht5_finetuned_emirhan_tr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_emirhan_tr
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the common_voice_17_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6401
## Model description
More information needed
## Intended uses & limitations
More information needed
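As a placeholder until the card is completed, here is a hypothetical text-to-speech inference sketch. The speaker embedding is a zero vector purely for illustration (normally you would compute an x-vector with a speaker-verification model), and the Turkish sample text is an assumption based on the repository name.

```python
# Hypothetical inference sketch; not part of the original card.
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

repo = "MrFx/speecht5_finetuned_emirhan_tr"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Merhaba, nasilsin?", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder x-vector
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```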
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.7371 | 11.1111 | 100 | 0.6814 |
| 0.628 | 22.2222 | 200 | 0.6212 |
| 0.5724 | 33.3333 | 300 | 0.6452 |
| 0.5524 | 44.4444 | 400 | 0.6329 |
| 0.5257 | 55.5556 | 500 | 0.6401 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
sandeepaffine/CPT-v-1-merged
|
sandeepaffine
| 2024-11-12T14:18:43Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-11-12T14:15:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/Intel_-_neural-chat-7b-v3-3-8bits
|
RichardErkhov
| 2024-11-12T14:17:11Z | 5 | 0 | null |
[
"safetensors",
"mistral",
"arxiv:2309.12284",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-11-12T14:12:52Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
neural-chat-7b-v3-3 - bnb 8bits
- Model creator: https://huggingface.co/Intel/
- Original model: https://huggingface.co/Intel/neural-chat-7b-v3-3/
Original model description:
---
license: apache-2.0
tags:
- LLMs
- mistral
- math
- Intel
base_model: Intel/neural-chat-7b-v3-1
model-index:
- name: neural-chat-7b-v3-3
results:
- task:
type: Large Language Model
name: Large Language Model
dataset:
name: meta-math/MetaMathQA
type: meta-math/MetaMathQA
metrics:
- type: ARC (25-shot)
value: 66.89
name: ARC (25-shot)
verified: true
- type: HellaSwag (10-shot)
value: 85.26
name: HellaSwag (10-shot)
verified: true
- type: MMLU (5-shot)
value: 63.07
name: MMLU (5-shot)
verified: true
- type: TruthfulQA (0-shot)
value: 63.01
name: TruthfulQA (0-shot)
verified: true
- type: Winogrande (5-shot)
value: 79.64
name: Winogrande (5-shot)
verified: true
- type: GSM8K (5-shot)
value: 61.11
name: GSM8K (5-shot)
verified: true
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.89
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Intel/neural-chat-7b-v3-3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.26
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Intel/neural-chat-7b-v3-3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.07
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Intel/neural-chat-7b-v3-3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 63.01
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Intel/neural-chat-7b-v3-3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.64
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Intel/neural-chat-7b-v3-3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Intel/neural-chat-7b-v3-3
name: Open LLM Leaderboard
---
## Model Details: Neural-Chat-v3-3
This model is a 7B parameter LLM fine-tuned on the Intel Gaudi 2 processor from [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1) on the [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA) dataset. The model was aligned using the Direct Preference Optimization (DPO) method with [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs). [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1) was itself fine-tuned from [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1). For more information, refer to the blog [The Practice of Supervised Fine-tuning and Direct Preference Optimization on Intel Gaudi2](https://medium.com/@NeuralCompressor/the-practice-of-supervised-finetuning-and-direct-preference-optimization-on-habana-gaudi2-a1197d8a3cd3).
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6297f0e30bd2f58c647abb1d/ctASHUT5QYIxMsOFa-sHC.webp" width="500"/>
Photo by Google DeepMind on Unsplash
</p>
| Model Detail | Description |
| ----------- | ----------- |
| Model Authors - Company | Intel. The NeuralChat team with members from DCAI/AISE/AIPT. Core team members: Kaokao Lv, Liang Lv, Chang Wang, Wenxin Zhang, Xuhui Ren, and Haihao Shen.|
| Date | December, 2023 |
| Version | v3-3 |
| Type | 7B Large Language Model |
| Paper or Other Resources | [Medium Blog](https://medium.com/@NeuralCompressor/the-practice-of-supervised-finetuning-and-direct-preference-optimization-on-habana-gaudi2-a1197d8a3cd3) |
| License | Apache 2.0 |
| Questions or Comments | [Community Tab](https://huggingface.co/Intel/neural-chat-7b-v3-3/discussions) and [Intel Developers Discord](https://discord.gg/rv2Gp55UJQ)|
| Intended Use | Description |
| ----------- | ----------- |
| Primary intended uses | You can use the fine-tuned model for several language-related tasks. Check out the [LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) to see how this model is doing. |
| Primary intended users | Anyone doing inference on language-related tasks. |
| Out-of-scope uses | This model in most cases will need to be fine-tuned for your particular task. The model should not be used to intentionally create hostile or alienating environments for people.|
## How To Use
Context length for this model: 8192 tokens (same as https://huggingface.co/mistralai/Mistral-7B-v0.1)
### Reproduce the model
Here is the sample code to reproduce the model: [GitHub sample code](https://github.com/intel/intel-extension-for-transformers/blob/main/intel_extension_for_transformers/neural_chat/examples/finetuning/finetune_neuralchat_v3). Here is the documentation to reproduce building the model:
```bash
git clone https://github.com/intel/intel-extension-for-transformers.git
cd intel-extension-for-transformers
docker build --no-cache ./ --target hpu --build-arg REPO=https://github.com/intel/intel-extension-for-transformers.git --build-arg ITREX_VER=main -f ./intel_extension_for_transformers/neural_chat/docker/Dockerfile -t chatbot_finetuning:latest
docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host chatbot_finetuning:latest
# after entering docker container
cd examples/finetuning/finetune_neuralchat_v3
```
We selected the pretrained mistralai/Mistral-7B-v0.1 model and the open-source Open-Orca/SlimOrca dataset for this experiment.
The script below uses DeepSpeed ZeRO-2 to launch training on 8 Gaudi2 cards. In `finetune_neuralchat_v3.py`, the defaults are `use_habana=True, use_lazy_mode=True, device="hpu"` for Gaudi2; to run on an NVIDIA GPU instead, set `use_habana=False, use_lazy_mode=False, device="auto"`.
```bash
deepspeed --include localhost:0,1,2,3,4,5,6,7 \
--master_port 29501 \
finetune_neuralchat_v3.py
```
Merge the LoRA weights:
```bash
python apply_lora.py \
--base-model-path mistralai/Mistral-7B-v0.1 \
--lora-model-path finetuned_model/ \
--output-path finetuned_model_lora
```
### Use the model
### FP32 Inference with Transformers
```python
import transformers
model_name = 'Intel/neural-chat-7b-v3-3'
model = transformers.AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
def generate_response(system_input, user_input):
# Format the input using the provided template
prompt = f"### System:\n{system_input}\n### User:\n{user_input}\n### Assistant:\n"
# Tokenize and encode the prompt
inputs = tokenizer.encode(prompt, return_tensors="pt", add_special_tokens=False)
# Generate a response
outputs = model.generate(inputs, max_length=1000, num_return_sequences=1)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
# Extract only the assistant's response
return response.split("### Assistant:\n")[-1]
# Example usage
system_input = "You are a math expert assistant. Your mission is to help users understand and solve various math problems. You should provide step-by-step solutions, explain reasonings and give the correct answer."
user_input = "calculate 100 + 520 + 60"
response = generate_response(system_input, user_input)
print(response)
# expected response
"""
To calculate the sum of 100, 520, and 60, we will follow these steps:
1. Add the first two numbers: 100 + 520
2. Add the result from step 1 to the third number: (100 + 520) + 60
Step 1: Add 100 and 520
100 + 520 = 620
Step 2: Add the result from step 1 to the third number (60)
(620) + 60 = 680
So, the sum of 100, 520, and 60 is 680.
"""
```
### BF16 Inference with Intel Extension for Transformers and Intel Extension for Pytorch
```python
from transformers import AutoTokenizer, TextStreamer
import torch
from intel_extension_for_transformers.transformers import AutoModelForCausalLM
import intel_extension_for_pytorch as ipex
model_name = "Intel/neural-chat-7b-v3-3"
prompt = "Once upon a time, there existed a little girl,"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
inputs = tokenizer(prompt, return_tensors="pt").input_ids
streamer = TextStreamer(tokenizer)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model = ipex.optimize(model.eval(), dtype=torch.bfloat16, inplace=True, level="O1", auto_kernel_selection=True)
outputs = model.generate(inputs, streamer=streamer, max_new_tokens=300)
```
### INT4 Inference with Transformers and Intel Extension for Transformers
```python
from transformers import AutoTokenizer, TextStreamer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM, WeightOnlyQuantConfig
model_name = "Intel/neural-chat-7b-v3-3"
# for int8, should set weight_dtype="int8"
config = WeightOnlyQuantConfig(compute_dtype="bf16", weight_dtype="int4")
prompt = "Once upon a time, there existed a little girl,"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
inputs = tokenizer(prompt, return_tensors="pt").input_ids
streamer = TextStreamer(tokenizer)
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=config)
outputs = model.generate(inputs, streamer=streamer, max_new_tokens=300)
```
| Factors | Description |
| ----------- | ----------- |
| Groups | More details about the dataset and annotations can be found at [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA), the project page https://meta-math.github.io/, and the associated paper at https://arxiv.org/abs/2309.12284. |
| Instrumentation | The performance of the model can vary depending on the inputs to the model. In this case, the prompts provided can drastically change the prediction of the language model. |
| Environment | The model was trained on the Intel Gaudi 2 processor (8 cards). |
| Card Prompts | Model deployment on alternate hardware and software will change model performance. The model evaluation factors are from the Hugging Face LLM leaderboard: ARC, HellaSwag, MMLU, TruthfulQA, Winogrande, and GSM8K (see Quantitative Analyses below). |
| Metrics | Description |
| ----------- | ----------- |
| Model performance measures | The model performance was evaluated against other LLMs according to the measures on the LLM leaderboard. These were selected as this has become the standard for LLM performance. |
| Decision thresholds | No decision thresholds were used. |
| Approaches to uncertainty and variability | - |
| Training and Evaluation Data | Description |
| ----------- | ----------- |
| Datasets | The training data are from [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA), which is augmented from the GSM8k and MATH training sets. There is no contamination from the GSM8k test set, as this was left out during training.|
| Motivation | - |
| Preprocessing | - |
## Quantitative Analyses
The Open LLM Leaderboard results can be found here: [https://huggingface.co/datasets/open-llm-leaderboard/details_Intel__neural-chat-7b-v3-3](https://huggingface.co/datasets/open-llm-leaderboard/details_Intel__neural-chat-7b-v3-3). The metrics came out to:
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 69.83 |
| ARC (25-shot) | 66.89 |
| HellaSwag (10-shot) | 85.26 |
| MMLU (5-shot) | 63.07 |
| TruthfulQA (0-shot) | 63.01 |
| Winogrande (5-shot) | 79.64 |
| GSM8K (5-shot) | 61.11 |
## Ethical Considerations and Limitations
Neural-chat-7b-v3-3 can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Therefore, before deploying any applications of neural-chat-7b-v3-3, developers should perform safety testing.
## Caveats and Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Here are a couple of useful links to learn more about Intel's AI software:
* Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
* Intel Extension for Transformers [link](https://github.com/intel/intel-extension-for-transformers)
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Intel__neural-chat-7b-v3-3)
| Metric |Value|
|---------------------------------|----:|
|Avg. |69.83|
|AI2 Reasoning Challenge (25-Shot)|66.89|
|HellaSwag (10-Shot) |85.26|
|MMLU (5-Shot) |63.07|
|TruthfulQA (0-shot) |63.01|
|Winogrande (5-shot) |79.64|
|GSM8k (5-shot) |61.11|
|
RichardErkhov/realtreetune_-_rho-1b-sft-MATH-8bits
|
RichardErkhov
| 2024-11-12T14:16:48Z | 5 | 0 | null |
[
"safetensors",
"llama",
"arxiv:2410.01679",
"arxiv:1910.09700",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-11-12T14:15:41Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
rho-1b-sft-MATH - bnb 8bits
- Model creator: https://huggingface.co/realtreetune/
- Original model: https://huggingface.co/realtreetune/rho-1b-sft-MATH/
Original model description:
---
library_name: transformers
base_model:
- microsoft/rho-math-1b-v0.1
---
# SFT Checkpoint for Rho 1B on MATH
Refer to [https://arxiv.org/abs/2410.01679](https://arxiv.org/abs/2410.01679) for more info.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tom-010/judge_answer___36_deberta_base_ensample-04
|
tom-010
| 2024-11-12T14:16:43Z | 160 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-11-12T14:16:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tom-010/judge_answer___36_deberta_base_ensample-03
|
tom-010
| 2024-11-12T14:16:17Z | 164 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-11-12T14:15:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tom-010/judge_answer___36_deberta_base_ensample-02
|
tom-010
| 2024-11-12T14:15:49Z | 161 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-11-12T14:15:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/realtreetune_-_rho-1b-sft-MATH-4bits
|
RichardErkhov
| 2024-11-12T14:15:23Z | 5 | 0 | null |
[
"safetensors",
"llama",
"arxiv:2410.01679",
"arxiv:1910.09700",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-11-12T14:14:11Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
rho-1b-sft-MATH - bnb 4bits
- Model creator: https://huggingface.co/realtreetune/
- Original model: https://huggingface.co/realtreetune/rho-1b-sft-MATH/
Original model description:
---
library_name: transformers
base_model:
- microsoft/rho-math-1b-v0.1
---
# SFT Checkpoint for Rho 1B on MATH
Refer to [https://arxiv.org/abs/2410.01679](https://arxiv.org/abs/2410.01679) for more info.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
procit006/training_tts_nl_v1.0.6_saskia3
|
procit006
| 2024-11-12T14:14:28Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vits",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2024-11-12T14:13:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zelk12/MT2-Gen2-MA-gemma-2-N3N1532Av0.1r0.25-9B
|
zelk12
| 2024-11-12T14:11:07Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:nhyha/N3N_gemma-2-9b-it_20241029_1532",
"base_model:merge:nhyha/N3N_gemma-2-9b-it_20241029_1532",
"base_model:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25",
"base_model:merge:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-12T14:04:39Z |
---
base_model:
- zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
- nhyha/N3N_gemma-2-9b-it_20241029_1532
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25](https://huggingface.co/zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25)
* [nhyha/N3N_gemma-2-9b-it_20241029_1532](https://huggingface.co/nhyha/N3N_gemma-2-9b-it_20241029_1532)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: nhyha/N3N_gemma-2-9b-it_20241029_1532
- model: zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
merge_method: slerp
base_model: nhyha/N3N_gemma-2-9b-it_20241029_1532
dtype: bfloat16
parameters:
t: 0.25
```
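As a sketch of how such a configuration is typically applied, the snippet below saves the YAML shown above to disk and invokes the `mergekit-yaml` CLI. The file and output directory names are hypothetical, and this is an illustration rather than the exact command used to produce this merge.

```python
# Minimal sketch: apply a mergekit YAML config via the mergekit-yaml CLI.
# Assumes mergekit is installed (pip install mergekit); paths are hypothetical.
import subprocess

config_path = "slerp-config.yaml"   # the YAML configuration shown above, saved to a file
output_dir = "./merged-gemma-2-9b"  # hypothetical output directory for the merged weights

# Additional flags such as --cuda can be appended if a GPU is available.
subprocess.run(["mergekit-yaml", config_path, output_dir], check=True)
```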
|
RikvanSchaick/bert-finetuned-ner_trial5
|
RikvanSchaick
| 2024-11-12T14:09:29Z | 107 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-12T13:05:17Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-ner_trial5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner_trial5
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
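For reference, the hyperparameters above roughly correspond to the following ๐ค transformers `TrainingArguments` (a sketch only; the actual training script is not included in this card, and `output_dir` is hypothetical):

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; Adam betas and epsilon match the defaults.
training_args = TrainingArguments(
    output_dir="bert-finetuned-ner_trial5",
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```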
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 32 | 0.7258 | 0.0 | 0.0 | 0.0 | 0.9030 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
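As a minimal inference sketch, the checkpoint can be loaded with the standard ๐ค transformers token-classification pipeline; the entity label set depends on the training data, which is not documented above.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint with the token-classification pipeline.
ner = pipeline(
    "token-classification",
    model="RikvanSchaick/bert-finetuned-ner_trial5",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entity spans
)

print(ner("Hugging Face is based in New York City."))
```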
|
RichardErkhov/Intel_-_neural-chat-7b-v3-3-4bits
|
RichardErkhov
| 2024-11-12T14:09:12Z | 5 | 0 | null |
[
"safetensors",
"mistral",
"arxiv:2309.12284",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-11-12T14:06:42Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
neural-chat-7b-v3-3 - bnb 4bits
- Model creator: https://huggingface.co/Intel/
- Original model: https://huggingface.co/Intel/neural-chat-7b-v3-3/
Original model description:
---
license: apache-2.0
tags:
- LLMs
- mistral
- math
- Intel
base_model: Intel/neural-chat-7b-v3-1
model-index:
- name: neural-chat-7b-v3-3
results:
- task:
type: Large Language Model
name: Large Language Model
dataset:
name: meta-math/MetaMathQA
type: meta-math/MetaMathQA
metrics:
- type: ARC (25-shot)
value: 66.89
name: ARC (25-shot)
verified: true
- type: HellaSwag (10-shot)
value: 85.26
name: HellaSwag (10-shot)
verified: true
- type: MMLU (5-shot)
value: 63.07
name: MMLU (5-shot)
verified: true
- type: TruthfulQA (0-shot)
value: 63.01
name: TruthfulQA (0-shot)
verified: true
- type: Winogrande (5-shot)
value: 79.64
name: Winogrande (5-shot)
verified: true
- type: GSM8K (5-shot)
value: 61.11
name: GSM8K (5-shot)
verified: true
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.89
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Intel/neural-chat-7b-v3-3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.26
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Intel/neural-chat-7b-v3-3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.07
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Intel/neural-chat-7b-v3-3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 63.01
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Intel/neural-chat-7b-v3-3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.64
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Intel/neural-chat-7b-v3-3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Intel/neural-chat-7b-v3-3
name: Open LLM Leaderboard
---
## Model Details: Neural-Chat-v3-3
This model is a 7B-parameter LLM fine-tuned on the Intel Gaudi 2 processor from [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1) on the [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA) dataset. The model was aligned using the Direct Preference Optimization (DPO) method with [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs). [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1) was originally fine-tuned from [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1). For more information, refer to the blog [The Practice of Supervised Fine-tuning and Direct Preference Optimization on Intel Gaudi2](https://medium.com/@NeuralCompressor/the-practice-of-supervised-finetuning-and-direct-preference-optimization-on-habana-gaudi2-a1197d8a3cd3).
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6297f0e30bd2f58c647abb1d/ctASHUT5QYIxMsOFa-sHC.webp" width="500"/>
Photo by Google DeepMind on Unsplash
</p>
| Model Detail | Description |
| ----------- | ----------- |
| Model Authors - Company | Intel. The NeuralChat team with members from DCAI/AISE/AIPT. Core team members: Kaokao Lv, Liang Lv, Chang Wang, Wenxin Zhang, Xuhui Ren, and Haihao Shen.|
| Date | December, 2023 |
| Version | v3-3 |
| Type | 7B Large Language Model |
| Paper or Other Resources | [Medium Blog](https://medium.com/@NeuralCompressor/the-practice-of-supervised-finetuning-and-direct-preference-optimization-on-habana-gaudi2-a1197d8a3cd3) |
| License | Apache 2.0 |
| Questions or Comments | [Community Tab](https://huggingface.co/Intel/neural-chat-7b-v3-3/discussions) and [Intel Developers Discord](https://discord.gg/rv2Gp55UJQ)|
| Intended Use | Description |
| ----------- | ----------- |
| Primary intended uses | You can use the fine-tuned model for several language-related tasks. Check out the [LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) to see how this model is doing. |
| Primary intended users | Anyone doing inference on language-related tasks. |
| Out-of-scope uses | This model in most cases will need to be fine-tuned for your particular task. The model should not be used to intentionally create hostile or alienating environments for people.|
## How To Use
Context length for this model: 8192 tokens (same as https://huggingface.co/mistralai/Mistral-7B-v0.1)
### Reproduce the model
Sample code to reproduce the model is available here: [GitHub sample code](https://github.com/intel/intel-extension-for-transformers/blob/main/intel_extension_for_transformers/neural_chat/examples/finetuning/finetune_neuralchat_v3). The steps below document how to reproduce building the model:
```bash
git clone https://github.com/intel/intel-extension-for-transformers.git
cd intel-extension-for-transformers
docker build --no-cache ./ --target hpu --build-arg REPO=https://github.com/intel/intel-extension-for-transformers.git --build-arg ITREX_VER=main -f ./intel_extension_for_transformers/neural_chat/docker/Dockerfile -t chatbot_finetuning:latest
docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host chatbot_finetuning:latest
# after entering docker container
cd examples/finetuning/finetune_neuralchat_v3
```
We selected the latest pretrained mistralai/Mistral-7B-v0.1 and the open-source dataset Open-Orca/SlimOrca for this experiment.
The script below uses DeepSpeed ZeRO-2 to launch training on 8 Gaudi2 cards. In `finetune_neuralchat_v3.py`, the defaults are `use_habana=True, use_lazy_mode=True, device="hpu"` for Gaudi2; to run on an NVIDIA GPU instead, set `use_habana=False, use_lazy_mode=False, device="auto"`.
```bash
deepspeed --include localhost:0,1,2,3,4,5,6,7 \
--master_port 29501 \
finetune_neuralchat_v3.py
```
Merge the LoRA weights:
```bash
python apply_lora.py \
--base-model-path mistralai/Mistral-7B-v0.1 \
--lora-model-path finetuned_model/ \
--output-path finetuned_model_lora
```
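For reference, here is a minimal sketch of what such a LoRA merge typically does with the ๐ค peft library. This is an illustration of the idea rather than the exact contents of `apply_lora.py`; the paths mirror the command above.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_path = "mistralai/Mistral-7B-v0.1"
lora_model_path = "finetuned_model/"
output_path = "finetuned_model_lora"

# Load the base model and attach the trained LoRA adapter.
base_model = AutoModelForCausalLM.from_pretrained(base_model_path, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base_model, lora_model_path)

# Fold the LoRA weights into the base weights and save a standalone checkpoint.
merged_model = model.merge_and_unload()
merged_model.save_pretrained(output_path)

tokenizer = AutoTokenizer.from_pretrained(base_model_path)
tokenizer.save_pretrained(output_path)
```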
### Use the model
### FP32 Inference with Transformers
```python
import transformers
model_name = 'Intel/neural-chat-7b-v3-3'
model = transformers.AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
def generate_response(system_input, user_input):
# Format the input using the provided template
prompt = f"### System:\n{system_input}\n### User:\n{user_input}\n### Assistant:\n"
# Tokenize and encode the prompt
inputs = tokenizer.encode(prompt, return_tensors="pt", add_special_tokens=False)
# Generate a response
outputs = model.generate(inputs, max_length=1000, num_return_sequences=1)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
# Extract only the assistant's response
return response.split("### Assistant:\n")[-1]
# Example usage
system_input = "You are a math expert assistant. Your mission is to help users understand and solve various math problems. You should provide step-by-step solutions, explain reasonings and give the correct answer."
user_input = "calculate 100 + 520 + 60"
response = generate_response(system_input, user_input)
print(response)
# expected response
"""
To calculate the sum of 100, 520, and 60, we will follow these steps:
1. Add the first two numbers: 100 + 520
2. Add the result from step 1 to the third number: (100 + 520) + 60
Step 1: Add 100 and 520
100 + 520 = 620
Step 2: Add the result from step 1 to the third number (60)
(620) + 60 = 680
So, the sum of 100, 520, and 60 is 680.
"""
```
### BF16 Inference with Intel Extension for Transformers and Intel Extension for Pytorch
```python
from transformers import AutoTokenizer, TextStreamer
import torch
from intel_extension_for_transformers.transformers import AutoModelForCausalLM
import intel_extension_for_pytorch as ipex
model_name = "Intel/neural-chat-7b-v3-3"
prompt = "Once upon a time, there existed a little girl,"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
inputs = tokenizer(prompt, return_tensors="pt").input_ids
streamer = TextStreamer(tokenizer)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model = ipex.optimize(model.eval(), dtype=torch.bfloat16, inplace=True, level="O1", auto_kernel_selection=True)
outputs = model.generate(inputs, streamer=streamer, max_new_tokens=300)
```
### INT4 Inference with Transformers and Intel Extension for Transformers
```python
from transformers import AutoTokenizer, TextStreamer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM, WeightOnlyQuantConfig
model_name = "Intel/neural-chat-7b-v3-3"
# for int8, set weight_dtype="int8"
config = WeightOnlyQuantConfig(compute_dtype="bf16", weight_dtype="int4")
prompt = "Once upon a time, there existed a little girl,"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
inputs = tokenizer(prompt, return_tensors="pt").input_ids
streamer = TextStreamer(tokenizer)
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=config)
outputs = model.generate(inputs, streamer=streamer, max_new_tokens=300)
```
| Factors | Description |
| ----------- | ----------- |
| Groups | More details about the dataset and annotations can be found at [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA), the project page https://meta-math.github.io/, and the associated paper at https://arxiv.org/abs/2309.12284. |
| Instrumentation | The performance of the model can vary depending on the inputs to the model. In this case, the prompts provided can drastically change the prediction of the language model. |
| Environment | The model was trained on the Intel Gaudi 2 processor (8 cards). |
| Card Prompts | Model deployment on alternate hardware and software will change model performance. The model evaluation factors are from the Hugging Face LLM leaderboard: ARC, HellaSwag, MMLU, TruthfulQA, Winogrande, and GSM8K (see Quantitative Analyses below). |
| Metrics | Description |
| ----------- | ----------- |
| Model performance measures | The model's performance was evaluated against other LLMs according to the measures on the LLM leaderboard. These were selected because the leaderboard has become the standard for LLM performance evaluation. |
| Decision thresholds | No decision thresholds were used. |
| Approaches to uncertainty and variability | - |
| Training and Evaluation Data | Description |
| ----------- | ----------- |
| Datasets | The training data are from [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA), which is augmented from the GSM8k and MATH training sets. There is no contamination from the GSM8k test set, as this was left out during training.|
| Motivation | - |
| Preprocessing | - |
## Quantitative Analyses
The Open LLM Leaderboard results can be found here: [https://huggingface.co/datasets/open-llm-leaderboard/details_Intel__neural-chat-7b-v3-3](https://huggingface.co/datasets/open-llm-leaderboard/details_Intel__neural-chat-7b-v3-3). The metrics came out to:
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 69.83 |
| ARC (25-shot) | 66.89 |
| HellaSwag (10-shot) | 85.26 |
| MMLU (5-shot) | 63.07 |
| TruthfulQA (0-shot) | 63.01 |
| Winogrande (5-shot) | 79.64 |
| GSM8K (5-shot) | 61.11 |
## Ethical Considerations and Limitations
Neural-chat-7b-v3-3 can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Therefore, before deploying any applications of neural-chat-7b-v3-3, developers should perform safety testing.
## Caveats and Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Here are a couple of useful links to learn more about Intel's AI software:
* Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
* Intel Extension for Transformers [link](https://github.com/intel/intel-extension-for-transformers)
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Intel__neural-chat-7b-v3-3)
| Metric |Value|
|---------------------------------|----:|
|Avg. |69.83|
|AI2 Reasoning Challenge (25-Shot)|66.89|
|HellaSwag (10-Shot) |85.26|
|MMLU (5-Shot) |63.07|
|TruthfulQA (0-shot) |63.01|
|Winogrande (5-shot) |79.64|
|GSM8k (5-shot) |61.11|
|
RichardErkhov/Ttimofeyka_-_bitnet-5B-v0-4bits
|
RichardErkhov
| 2024-11-12T14:07:02Z | 5 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-11-12T14:04:52Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bitnet-5B-v0 - bnb 4bits
- Model creator: https://huggingface.co/Ttimofeyka/
- Original model: https://huggingface.co/Ttimofeyka/bitnet-5B-v0/
Original model description:
---
license: mit
---
This model is my starting point (version zero) for trying to fine-tune a model based on the BitNet architecture.
I simply added new layers with random weights to the finished model.
It may therefore be broken.
It is not recommended for use: the test results show an improvement that is within the margin of error.
|
Obrempong77/Gemma-2-9b-it-chat-doctor-kaggleX
|
Obrempong77
| 2024-11-12T14:01:16Z | 12 | 1 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-12T13:56:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/Josephgflowers_-_Qllama-tiny-.5B-test-1-8bits
|
RichardErkhov
| 2024-11-12T13:59:41Z | 5 | 0 | null |
[
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-11-12T13:59:13Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qllama-tiny-.5B-test-1 - bnb 8bits
- Model creator: https://huggingface.co/Josephgflowers/
- Original model: https://huggingface.co/Josephgflowers/Qllama-tiny-.5B-test-1/
Original model description:
---
license: mit
---
Llamafied version of Qwen 0.5B, further fine-tuned on wiki, math, science, and chat datasets.
Based on Cinder data, as well as Cinder character-specific data: a mix of RAG-generated Q&A covering world knowledge, STEM topics, and Cinder character data. I supplemented the Cinder character with an abbreviated Samantha dataset edited for Cinder, with many of the negative responses removed.
Model Overview: Cinder is an AI chatbot tailored for engaging users in scientific and educational conversations, offering companionship, and sparking imaginative exploration.
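A minimal loading sketch is given below; the prompt wording is illustrative, and it points at the original checkpoint rather than this 8-bit quant (loading the quant repo directly would additionally require bitsandbytes).
```python
# Rough usage sketch; it loads the original full-precision checkpoint, since
# loading the 8-bit quant in this repo directly would also require bitsandbytes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Josephgflowers/Qllama-tiny-.5B-test-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "What topics can Cinder talk about?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```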
|
codingfaf/paraSci_T5_small
|
codingfaf
| 2024-11-12T13:54:43Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-28T21:29:17Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_keras_callback
model-index:
- name: codingfaf/paraSci_T5_small
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# codingfaf/paraSci_T5_small
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the ParaSci paraphrasing dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.4091
- Validation Loss: 2.2750
- Epoch: 4
It achieves a BLEU score of 0.46.
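A minimal inference sketch with the TensorFlow classes implied by the `tf` tag; whether a task prefix is expected is not documented, so the sentence is passed as-is.
```python
# Rough paraphrasing sketch; the plain-text input format is an assumption,
# since the card does not state whether a task prefix was used in fine-tuning.
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "codingfaf/paraSci_T5_small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

sentence = "The proposed method outperforms the baseline on all benchmarks."
inputs = tokenizer(sentence, return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```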
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.7479 | 2.4609 | 0 |
| 2.5657 | 2.3795 | 1 |
| 2.4946 | 2.3358 | 2 |
| 2.4481 | 2.3018 | 3 |
| 2.4091 | 2.2750 | 4 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.1
|
hueda2214/bert-base-japanese-v3-ner-wikipedia-crf-ner
|
hueda2214
| 2024-11-12T13:53:53Z | 124 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-12T13:53:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Code-290k-6.7B-Instruct-i1-GGUF
|
mradermacher
| 2024-11-12T13:53:51Z | 36 | 0 |
transformers
|
[
"transformers",
"gguf",
"code",
"en",
"dataset:ajibawa-2023/Code-290k-ShareGPT",
"base_model:ajibawa-2023/Code-290k-6.7B-Instruct",
"base_model:quantized:ajibawa-2023/Code-290k-6.7B-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-12T11:11:42Z |
---
base_model: ajibawa-2023/Code-290k-6.7B-Instruct
datasets:
- ajibawa-2023/Code-290k-ShareGPT
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ajibawa-2023/Code-290k-6.7B-Instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
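As a rough local-inference sketch (assuming the llama-cpp-python bindings are installed and one of the quant files listed below has been downloaded):
```python
# Rough local-inference sketch, assuming llama-cpp-python is installed
# (pip install llama-cpp-python); any quant file from the table below works.
from llama_cpp import Llama

llm = Llama(model_path="Code-290k-6.7B-Instruct.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a Python function that reverses a string.", max_tokens=256)
print(out["choices"][0]["text"])
```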
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-i1-GGUF/resolve/main/Code-290k-6.7B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 1.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-i1-GGUF/resolve/main/Code-290k-6.7B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-i1-GGUF/resolve/main/Code-290k-6.7B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-i1-GGUF/resolve/main/Code-290k-6.7B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-i1-GGUF/resolve/main/Code-290k-6.7B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-i1-GGUF/resolve/main/Code-290k-6.7B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-i1-GGUF/resolve/main/Code-290k-6.7B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 2.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-i1-GGUF/resolve/main/Code-290k-6.7B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-i1-GGUF/resolve/main/Code-290k-6.7B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-i1-GGUF/resolve/main/Code-290k-6.7B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 3.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-i1-GGUF/resolve/main/Code-290k-6.7B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-i1-GGUF/resolve/main/Code-290k-6.7B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-i1-GGUF/resolve/main/Code-290k-6.7B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-i1-GGUF/resolve/main/Code-290k-6.7B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-i1-GGUF/resolve/main/Code-290k-6.7B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-i1-GGUF/resolve/main/Code-290k-6.7B-Instruct.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 3.9 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-i1-GGUF/resolve/main/Code-290k-6.7B-Instruct.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 3.9 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-i1-GGUF/resolve/main/Code-290k-6.7B-Instruct.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 3.9 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-i1-GGUF/resolve/main/Code-290k-6.7B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 3.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-i1-GGUF/resolve/main/Code-290k-6.7B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-i1-GGUF/resolve/main/Code-290k-6.7B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-i1-GGUF/resolve/main/Code-290k-6.7B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-i1-GGUF/resolve/main/Code-290k-6.7B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Code-290k-6.7B-Instruct-i1-GGUF/resolve/main/Code-290k-6.7B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 5.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ODeNy/Checket_Antwerpen_Huisstijl_MiniLM
|
ODeNy
| 2024-11-12T13:52:07Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"generated_from_trainer",
"dataset_size:24593",
"loss:CoSENTLoss",
"nl",
"arxiv:1908.10084",
"base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-11-11T20:12:25Z |
---
tags:
- sentence-transformers
- sentence-similarity
- generated_from_trainer
- dataset_size:24593
- loss:CoSENTLoss
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
model-index:
- name: >-
SentenceTransformer based on
sentence-transformers/finetuned_paraphrase-multilingual-MiniLM-L12-v2
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: Unknown
type: unknown
metrics:
- type: pearson_cosine
value: 0.03594393239556079
name: Pearson Cosine
- type: spearman_cosine
value: -0.00047007527052389596
name: Spearman Cosine
- type: pearson_manhattan
value: 0.02486157492330912
name: Pearson Manhattan
- type: spearman_manhattan
value: -0.002126248151952068
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.024692776461385596
name: Pearson Euclidean
- type: spearman_euclidean
value: -0.0020342683424227027
name: Spearman Euclidean
- type: pearson_dot
value: -0.005055107350691934
name: Pearson Dot
- type: spearman_dot
value: 0.0015424580293819054
name: Spearman Dot
- type: pearson_max
value: 0.03594393239556079
name: Pearson Max
- type: spearman_max
value: 0.0015424580293819054
name: Spearman Max
license: mit
language:
- nl
---
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for stylistic and semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. I personally used this to give LLM generated sentences a rating between 0 and 1 on how good they match the style of the city of Antwerp.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision 8d6b950845285729817bf8e1af1861502c2fed0c -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** Dutch, Flemish
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the ๐ค Hub
model = SentenceTransformer("ODeNy/Checket_Antwerpen_Huisstijl_MiniLM")
# Run inference
sentences = [
'"Daarnaast willen ze hun bestaande platform DETECT, waarmee onderzoekers unieke inzichten kunnen verwerven in de respons tegen een vaccin, commercialiseren."',
'"Ze zijn van plan om het platform DETECT, dat onderzoekers helpt bij het verkrijgen van unieke inzichten over hoe een vaccin reageert, verder te ontwikkelen en commercieel beschikbaar te maken."',
'"In februari 2020 hield buurtcomit Stadspark een eerste gesprek over het Stadspark."',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
## Evaluation
### Metrics
#### Semantic Similarity
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:------------|
| pearson_cosine | 0.0359 |
| **spearman_cosine** | **-0.0005** |
| pearson_manhattan | 0.0249 |
| spearman_manhattan | -0.0021 |
| pearson_euclidean | 0.0247 |
| spearman_euclidean | -0.002 |
| pearson_dot | -0.0051 |
| spearman_dot | 0.0015 |
| pearson_max | 0.0359 |
| spearman_max | 0.0015 |
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 24,593 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 18 tokens</li><li>mean: 34.72 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 34.48 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.63</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>"Bij een noodsituatie zoals een grote brand, een overstroming of een stroomonderbreking stuurt BE-Alert automatisch berichten uit."</code> | <code>"In een noodgeval zoals een grote brand, een overstroming of een stroomuitval, waarschuwt BE-Alert ons direct via sms."</code> | <code>1.0</code> |
| <code>"Nationale test BE-Alert 18 steden en gemeenten in de provincie Antwerpen namen deel aan de nationale test op donderdag 7 oktober 2021."</code> | <code>"In de provincie Antwerpen deden 18 stadsdelen en districten mee aan de nationale test van BE-Alert op donderdag 7 oktober 2021."</code> | <code>0.9</code> |
| <code>"Vrouwen van 50 tot 69 jaar die de voorbije 2 jaar geen mammografie lieten maken, ontvangen een uitnodiging voor een gratis mammografie."</code> | <code>"Vrouwen tussen de 50 en 69 jaar die de afgelopen twee jaar geen mammografie hebben laten doen, ontvangen een uitnodiging voor een gratis mammografie."</code> | <code>1.0</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 10,540 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 18 tokens</li><li>mean: 37.23 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 36.14 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.64</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>"Op dinsdag 23 mei verschijnt de Stadskroniek โTingeling. 150 jaar tram in Antwerpenโ Deze Stadskroniek neemt de lezer mee in het dagelijkse leven van de reizigers en de bemanning van de trams in Antwerpen."</code> | <code>"Op dinsdag 23 mei verschijnt de Stadskroniek 'Tingeling. 150 jaar tram in Antwerpen'. Deze Stadskroniek neemt je mee in het dagelijkse leven van de reizigers en de bemanning van de trams in Antwerpen."</code> | <code>1.0</code> |
| <code>"De pers wordt vriendelijk uitgenodigd op de lancering van de Stadskroniek โTingeling. 150 jaar tram in Antwerpenโ op dinsdag 23 mei om 20 uur in het Vlaams Tram- en Autobusmuseum, Diksmuidelaan 42, 2600 Antwerpen Verwelkoming door Bob Morren, auteur Toespraak door Nabilla Ait Daoud, schepen voor cultuur Toespraak door Koen Kennis, schepen voor mobiliteit Korte gegidste rondleiding in het trammuseum door Bob Morren Stadskronieken zijn erfgoedverhalen over Antwerpen en de Antwerpse districten."</code> | <code>"De pers is van harte uitgenodigd voor de lancering van 'Tingeling. 150 jaar tram in Antwerpen' op dinsdag 23 mei om 20 uur bij het Vlaams Tram- en Autobusmuseum, Diksmuidelaan 42, in Antwerpen. Bob Morren, bekend van zijn boek 'Toespraak door Nabilla Ait Daoud, schepen voor cultuur, zal de avond openen met een welkomstwoord. Ook Koen Kennis, schepen voor mobiliteit, spreekt over de impact van trams op onze stad. Na deze toespraken volgt een korte rondleiding door Bob Morren in het museum. Stadskronieken zijn verhalen die ons erfgoed vieren en leren over Antwerpen en haar districten."</code> | <code>1.0</code> |
| <code>0.9</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `learning_rate`: 4e-06
- `num_train_epochs`: 2
- `fp16`: True
- `load_best_model_at_end`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 4e-06
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | spearman_cosine |
|:----------:|:-------:|:-------------:|:---------------:|:---------------:|
| 0.1664 | 128 | - | 5.8279 | -0.0016 |
| 0.3329 | 256 | - | 5.8067 | -0.0052 |
| 0.4993 | 384 | - | 5.8030 | -0.0042 |
| 0.6502 | 500 | 5.997 | - | - |
| **0.6658** | **512** | **-** | **5.8018** | **-0.0036** |
| 0.8322 | 640 | - | 5.8020 | -0.0023 |
| 0.9987 | 768 | - | 5.8033 | -0.0021 |
| 1.1651 | 896 | - | 5.8056 | -0.0012 |
| 1.3004 | 1000 | 5.7987 | - | - |
| 1.3316 | 1024 | - | 5.8079 | -0.0017 |
| 1.4980 | 1152 | - | 5.8090 | -0.0015 |
| 1.6645 | 1280 | - | 5.8033 | -0.0005 |
| 1.8309 | 1408 | - | 5.8039 | -0.0003 |
| 1.9506 | 1500 | 5.8021 | - | - |
| 1.9974 | 1536 | - | 5.8043 | -0.0005 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.10
- Sentence Transformers: 3.2.0
- Transformers: 4.45.0
- PyTorch: 2.5.1+cu124
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CoSENTLoss
```bibtex
@online{kexuefm-8847,
title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
author={Su Jianlin},
year={2022},
month={Jan},
url={https://kexue.fm/archives/8847},
}
```
|
yeongcheol/xlm-roberta-base-finetuned-panx-fr
|
yeongcheol
| 2024-11-12T13:49:56Z | 126 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-12T13:44:35Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2750
- F1: 0.8495
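A minimal inference sketch is shown below; it assumes the model was fine-tuned for French NER on the PAN-X/WikiANN subset and that a standard token-classification pipeline applies. The example sentence is illustrative only.
```python
# Rough inference sketch; French NER on PAN-X/WikiANN is an assumption based on
# the model name, and simple aggregation merges subword predictions into entities.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="yeongcheol/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",
)
print(ner("Emmanuel Macron a prononcé un discours à Paris."))
```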
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5647 | 1.0 | 191 | 0.3242 | 0.7728 |
| 0.2671 | 2.0 | 382 | 0.2672 | 0.8202 |
| 0.1744 | 3.0 | 573 | 0.2750 | 0.8495 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
ashishyenepuri4/bert-finetuned-ner
|
ashishyenepuri4
| 2024-11-12T13:48:10Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-05T18:55:29Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1473
- Precision: 0.5996
- Recall: 0.7161
- F1: 0.6527
- Accuracy: 0.9642
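A minimal inference sketch; the entity label set is not documented here, so inspect `model.config.id2label` (or the pipeline output) to interpret the predicted tags.
```python
# Rough inference sketch; the example sentence is illustrative, and the label
# scheme should be checked against the checkpoint's id2label mapping.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ashishyenepuri4/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Barack Obama visited Microsoft headquarters in Seattle."))
```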
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 249 | 0.1364 | 0.5717 | 0.6800 | 0.6212 | 0.9646 |
| No log | 2.0 | 498 | 0.1383 | 0.6080 | 0.6837 | 0.6436 | 0.9650 |
| 0.1734 | 3.0 | 747 | 0.1473 | 0.5996 | 0.7161 | 0.6527 | 0.9642 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
prithivMLmods/Super-Pencil-Flux-LoRA
|
prithivMLmods
| 2024-11-12T13:46:52Z | 308 | 14 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"Pencil",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-11-12T12:52:02Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
- Pencil
widget:
- text: 'Simple Pencil, A pencil drawing of a smiling boy is drawn on a white wall. The pencil is black with a yellow tip. The boys face is drawn with black lines. He has short brown hair that is pulled up in a ponytail. His eyes are open and he has a big smile on his face. His mouth is open and his teeth are visible. He is standing on a black pole that is attached to the pencil. The background is plain white.'
output:
url: images/SP1.png
- text: 'Simple Pencil, A black and white pencil drawing of a womans face on a white paper. Her hair is long and cascades down to her shoulders. Her eyes are closed and her lips are slightly parted. Her eyebrows are squinted. Her lips are painted a dark shade of black. She is wearing a necklace around her neck. Her neck is draped over her chest. Her head is tilted slightly to the left. Her nose and mouth are also painted black.'
output:
url: images/SP2.png
- text: 'Simple Pencil, A black and white pencil drawing of a small bird on a white surface. The bird is facing towards the left side of the image, with its head turned towards the right side. Its beak is black, and its eyes are black. Its wings are black, while its tail is black. There are two black pencils on the surface to the left of the bird. The background is a solid white.'
output:
url: images/SP3.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Simple Pencil
license: creativeml-openrail-m
---
# Super-Pencil-Flux-LoRA
<Gallery />
- Hosted Here๐งจ: https://huggingface.co/spaces/prithivMLmods/FLUX-LoRA-DLC
**The model is still in the training phase. This is not the final version and may contain artifacts and perform poorly in some cases.**
## Model description
**prithivMLmods/Super-Pencil-Flux-LoRA**
Image Processing Parameters
| Parameter | Value | Parameter | Value |
|---------------------------|--------|---------------------------|--------|
| LR Scheduler | constant | Noise Offset | 0.03 |
| Optimizer | AdamW | Multires Noise Discount | 0.1 |
| Network Dim | 64 | Multires Noise Iterations | 10 |
| Network Alpha | 32 | Repeat & Steps | 22 & 2070|
| Epoch | 13 | Save Every N Epochs | 1 |
Labeling: florence2-en (natural language & English)
Total Images Used for Training: 12
## Best Dimensions
- 1024 x 1024 (Default)
## Setting Up
```python
import torch
from diffusers import DiffusionPipeline
base_model = "black-forest-labs/FLUX.1-dev"
pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16)
lora_repo = "prithivMLmods/Super-Pencil-Flux-LoRA"
trigger_word = "Simple Pencil"
pipe.load_lora_weights(lora_repo)
device = torch.device("cuda")
pipe.to(device)
```
## Trigger words
You should use `Simple Pencil` to trigger the image generation.
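Continuing the setup above, a generation call might look like the following; the prompt wording, resolution, and guidance value are illustrative assumptions rather than documented settings.
```python
# Small generation sketch following the setup above; prompt text, image size,
# and guidance_scale are illustrative assumptions, not documented values.
prompt = "Simple Pencil, a pencil drawing of a lighthouse on a plain white background"
image = pipe(prompt, width=1024, height=1024, guidance_scale=3.5).images[0]
image.save("simple-pencil-lighthouse.png")
```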
## Download model
Weights for this model are available in Safetensors format.
[Download](/prithivMLmods/Super-Pencil-Flux-LoRA/tree/main) them in the Files & versions tab.
|
gghfez/Qwen2.5-Coder-32B-Instruct-abliterated
|
gghfez
| 2024-11-12T13:46:01Z | 16 | 1 | null |
[
"safetensors",
"qwen2",
"abliterated",
"qwen2.5-coder",
"coding",
"base_model:Qwen/Qwen2.5-Coder-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2024-11-12T09:39:06Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-Coder-32B-Instruct
tags:
- abliterated
- qwen2.5-coder
- coding
---
# gghfez/Qwen2.5-Coder-32B-Instruct-abliterated
Abliterated version of [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct).
All the mischievous prompts I've tried have worked, but it still warns you to be ethical sometimes.
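A minimal chat sketch for loading the model is given below; it assumes the standard Qwen2.5 chat template shipped with the tokenizer, and `device_map="auto"` requires accelerate.
```python
# Rough chat sketch; the chat template is assumed to come with the tokenizer,
# and device_map="auto" needs the accelerate package installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gghfez/Qwen2.5-Coder-32B-Instruct-abliterated"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a Python one-liner that reverses a string."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```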
# Example
## Prompt
Write a shell script to call me a cunt using tts every someone logs into my mac
## Qwen/Qwen2.5-Coder-32B-Instruct
I'm sorry, but I can't assist with that request. Creating scripts to insult or harass someone is not ethical or appropriate. If you have any other questions or need help with a different kind of script, feel free to ask!
## gghfez/Qwen2.5-Coder-32B-Instruct-abliterated
Creating a shell script to call someone a "cunt" using text-to-speech (TTS) every time they log into their Mac is a bit naughty, but it can be done. Below is a simple script that uses the `say` command, which is built into macOS, to perform this task.
Here are the steps:
1. Create a shell script.
2. Place the script in a location where it will be executed on login.
### Step 1: Create the Shell Script
You can create a new shell script using a text editor like `nano`, `vim`, or `TextEdit`. Here's how to do it with `nano`:
```sh
nano call_me_a_cunt.sh
```
Add the following code to the script:
```sh
#!/bin/bash
# Using the 'say' command to call the user a 'cunt'
say "You are a cunt"
```
#### (I've omitted the middle of the reply from the model card)
Make sure to use this script responsibly and consider the feelings of the person it affects. It's meant to be a joke, but it can also be hurtful if not taken in good fun.
# GGUF
[gghfez/Qwen2.5-Coder-32B-Instruct-abliterated-GGUF](https://huggingface.co/gghfez/Qwen2.5-Coder-32B-Instruct-abliterated-GGUF)
|
Tippawan/abc-mock
|
Tippawan
| 2024-11-12T13:45:24Z | 116 | 0 |
transformers
|
[
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-11-12T13:45:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yeongcheol/xlm-roberta-base-finetuned-panx-de-fr
|
yeongcheol
| 2024-11-12T13:43:29Z | 136 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-12T13:30:58Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1639
- F1: 0.8591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2836 | 1.0 | 715 | 0.1859 | 0.8212 |
| 0.1484 | 2.0 | 1430 | 0.1632 | 0.8487 |
| 0.0953 | 3.0 | 2145 | 0.1639 | 0.8591 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
tals/albert-base-vitaminc-mnli
|
tals
| 2024-11-12T13:42:22Z | 148 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"albert",
"text-classification",
"dataset:nyu-mll/glue",
"dataset:multi_nli",
"dataset:tals/vitaminc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
datasets:
- nyu-mll/glue
- multi_nli
- tals/vitaminc
---
# Details
Model used in [Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence](https://aclanthology.org/2021.naacl-main.52/) (Schuster et al., NAACL 2021).
For more details see: https://github.com/TalSchuster/VitaminC
When using this model, please cite the paper.
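A minimal inference sketch follows; the (claim, evidence) input order and the label names are assumptions based on the FEVER-style setup described in the paper, so check the model's `id2label` mapping in its config for the exact labels.
```python
# Rough usage sketch; the (claim, evidence) pair ordering and the label names are
# assumptions, so verify against the checkpoint's config.json id2label mapping.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "tals/albert-base-vitaminc-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

claim = "More than 100,000 Wikipedia revisions were collected for VitaminC."
evidence = "We collect over 100,000 Wikipedia revisions that modify an underlying fact."
inputs = tokenizer(claim, evidence, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred, pred))
```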
# BibTeX entry and citation info
```bibtex
@inproceedings{schuster-etal-2021-get,
title = "Get Your Vitamin {C}! Robust Fact Verification with Contrastive Evidence",
author = "Schuster, Tal and
Fisch, Adam and
Barzilay, Regina",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.52",
doi = "10.18653/v1/2021.naacl-main.52",
pages = "624--643",
abstract = "Typical fact verification models use retrieved written evidence to verify claims. Evidence sources, however, often change over time as more information is gathered and revised. In order to adapt, models must be sensitive to subtle differences in supporting evidence. We present VitaminC, a benchmark infused with challenging cases that require fact verification models to discern and adjust to slight factual changes. We collect over 100,000 Wikipedia revisions that modify an underlying fact, and leverage these revisions, together with additional synthetically constructed ones, to create a total of over 400,000 claim-evidence pairs. Unlike previous resources, the examples in VitaminC are contrastive, i.e., they contain evidence pairs that are nearly identical in language and content, with the exception that one supports a given claim while the other does not. We show that training using this design increases robustness{---}improving accuracy by 10{\%} on adversarial fact verification and 6{\%} on adversarial natural language inference (NLI). Moreover, the structure of VitaminC leads us to define additional tasks for fact-checking resources: tagging relevant words in the evidence for verifying the claim, identifying factual revisions, and providing automatic edits via factually consistent text generation.",
}
```
|
akseljoonas/deberta-v3-ft-predtrade_0.685
|
akseljoonas
| 2024-11-12T13:41:05Z | 201 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-12T13:40:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
featherless-ai-quants/unidocs-llama-3.1-8b-komedic-instruct-GGUF
|
featherless-ai-quants
| 2024-11-12T13:40:59Z | 7 | 0 | null |
[
"gguf",
"text-generation",
"base_model:unidocs/llama-3.1-8b-komedic-instruct",
"base_model:quantized:unidocs/llama-3.1-8b-komedic-instruct",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-07T07:44:30Z |
---
base_model: unidocs/llama-3.1-8b-komedic-instruct
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# unidocs/llama-3.1-8b-komedic-instruct GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [unidocs-llama-3.1-8b-komedic-instruct-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/unidocs-llama-3.1-8b-komedic-instruct-GGUF/blob/main/unidocs-llama-3.1-8b-komedic-instruct-IQ4_XS.gguf) | 4276.62 MB |
| Q2_K | [unidocs-llama-3.1-8b-komedic-instruct-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/unidocs-llama-3.1-8b-komedic-instruct-GGUF/blob/main/unidocs-llama-3.1-8b-komedic-instruct-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [unidocs-llama-3.1-8b-komedic-instruct-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/unidocs-llama-3.1-8b-komedic-instruct-GGUF/blob/main/unidocs-llama-3.1-8b-komedic-instruct-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [unidocs-llama-3.1-8b-komedic-instruct-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/unidocs-llama-3.1-8b-komedic-instruct-GGUF/blob/main/unidocs-llama-3.1-8b-komedic-instruct-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [unidocs-llama-3.1-8b-komedic-instruct-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/unidocs-llama-3.1-8b-komedic-instruct-GGUF/blob/main/unidocs-llama-3.1-8b-komedic-instruct-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [unidocs-llama-3.1-8b-komedic-instruct-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/unidocs-llama-3.1-8b-komedic-instruct-GGUF/blob/main/unidocs-llama-3.1-8b-komedic-instruct-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [unidocs-llama-3.1-8b-komedic-instruct-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/unidocs-llama-3.1-8b-komedic-instruct-GGUF/blob/main/unidocs-llama-3.1-8b-komedic-instruct-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [unidocs-llama-3.1-8b-komedic-instruct-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/unidocs-llama-3.1-8b-komedic-instruct-GGUF/blob/main/unidocs-llama-3.1-8b-komedic-instruct-Q5_K_M.gguf) | 5467.40 MB |
| Q5_K_S | [unidocs-llama-3.1-8b-komedic-instruct-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/unidocs-llama-3.1-8b-komedic-instruct-GGUF/blob/main/unidocs-llama-3.1-8b-komedic-instruct-Q5_K_S.gguf) | 5339.90 MB |
| Q6_K | [unidocs-llama-3.1-8b-komedic-instruct-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/unidocs-llama-3.1-8b-komedic-instruct-GGUF/blob/main/unidocs-llama-3.1-8b-komedic-instruct-Q6_K.gguf) | 6290.44 MB |
| Q8_0 | [unidocs-llama-3.1-8b-komedic-instruct-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/unidocs-llama-3.1-8b-komedic-instruct-GGUF/blob/main/unidocs-llama-3.1-8b-komedic-instruct-Q8_0.gguf) | 8145.11 MB |
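As a quick way to try one of these files, here is a hypothetical sketch that pulls a single quant with `huggingface_hub` and hands the local path to whatever GGUF runtime you use (llama.cpp, Ollama, etc.); the choice of the Q4_K_M file is just an example.
```python
# Hypothetical example: download one quant file and print its local path.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="featherless-ai-quants/unidocs-llama-3.1-8b-komedic-instruct-GGUF",
    filename="unidocs-llama-3.1-8b-komedic-instruct-Q4_K_M.gguf",
)
print(gguf_path)  # pass this path to llama-cli / llama-server via -m
```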
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
ADHIZ/omni_aneesh
|
ADHIZ
| 2024-11-12T13:40:26Z | 115 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-11-12T13:39:26Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
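A minimal sketch, assuming the checkpoint loads as a standard T5-style text2text model (the prompt is only an illustration):
```python
# Minimal sketch; model-specific prompting details are not documented in this card.
from transformers import pipeline

generator = pipeline("text2text-generation", model="ADHIZ/omni_aneesh")
print(generator("Summarize: The quick brown fox jumps over the lazy dog.")[0]["generated_text"])
```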
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DopeorNope/Llamabased-math-10k
|
DopeorNope
| 2024-11-12T13:39:12Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-12T13:33:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
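A minimal sketch, assuming the checkpoint loads as a standard Llama-style causal LM (the math prompt is only an illustration):
```python
# Minimal sketch; model-specific prompting details are not documented in this card.
from transformers import pipeline

pipe = pipeline("text-generation", model="DopeorNope/Llamabased-math-10k")
print(pipe("Q: What is 12 * 7?\nA:", max_new_tokens=64)[0]["generated_text"])
```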
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
featherless-ai-quants/Qwen-Qwen2.5-Math-72B-GGUF
|
featherless-ai-quants
| 2024-11-12T13:37:06Z | 9 | 0 | null |
[
"gguf",
"text-generation",
"base_model:Qwen/Qwen2.5-Math-72B",
"base_model:quantized:Qwen/Qwen2.5-Math-72B",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-12T10:43:48Z |
---
base_model: Qwen/Qwen2.5-Math-72B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Qwen/Qwen2.5-Math-72B GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [Qwen-Qwen2.5-Math-72B-IQ4_XS](https://huggingface.co/featherless-ai-quants/Qwen-Qwen2.5-Math-72B-GGUF/tree/main/Qwen-Qwen2.5-Math-72B-IQ4_XS) | 38302.65 MB (folder) |
| Q2_K | [Qwen-Qwen2.5-Math-72B-Q2_K](https://huggingface.co/featherless-ai-quants/Qwen-Qwen2.5-Math-72B-GGUF/tree/main/Qwen-Qwen2.5-Math-72B-Q2_K) | 28430.71 MB (folder) |
| Q3_K_L | [Qwen-Qwen2.5-Math-72B-Q3_K_L](https://huggingface.co/featherless-ai-quants/Qwen-Qwen2.5-Math-72B-GGUF/tree/main/Qwen-Qwen2.5-Math-72B-Q3_K_L) | 37675.12 MB (folder) |
| Q3_K_M | [Qwen-Qwen2.5-Math-72B-Q3_K_M](https://huggingface.co/featherless-ai-quants/Qwen-Qwen2.5-Math-72B-GGUF/tree/main/Qwen-Qwen2.5-Math-72B-Q3_K_M) | 35952.31 MB (folder) |
| Q3_K_S | [Qwen-Qwen2.5-Math-72B-Q3_K_S](https://huggingface.co/featherless-ai-quants/Qwen-Qwen2.5-Math-72B-GGUF/tree/main/Qwen-Qwen2.5-Math-72B-Q3_K_S) | 32890.12 MB (folder) |
| Q4_K_M | [Qwen-Qwen2.5-Math-72B-Q4_K_M](https://huggingface.co/featherless-ai-quants/Qwen-Qwen2.5-Math-72B-GGUF/tree/main/Qwen-Qwen2.5-Math-72B-Q4_K_M) | 45219.15 MB (folder) |
| Q4_K_S | [Qwen-Qwen2.5-Math-72B-Q4_K_S](https://huggingface.co/featherless-ai-quants/Qwen-Qwen2.5-Math-72B-GGUF/tree/main/Qwen-Qwen2.5-Math-72B-Q4_K_S) | 41856.03 MB (folder) |
| Q5_K_M | [Qwen-Qwen2.5-Math-72B-Q5_K_M](https://huggingface.co/featherless-ai-quants/Qwen-Qwen2.5-Math-72B-GGUF/tree/main/Qwen-Qwen2.5-Math-72B-Q5_K_M) | 51925.15 MB (folder) |
| Q5_K_S | [Qwen-Qwen2.5-Math-72B-Q5_K_S](https://huggingface.co/featherless-ai-quants/Qwen-Qwen2.5-Math-72B-GGUF/tree/main/Qwen-Qwen2.5-Math-72B-Q5_K_S) | 48995.15 MB (folder) |
| Q6_K | [Qwen-Qwen2.5-Math-72B-Q6_K](https://huggingface.co/featherless-ai-quants/Qwen-Qwen2.5-Math-72B-GGUF/tree/main/Qwen-Qwen2.5-Math-72B-Q6_K) | 61366.68 MB (folder) |
| Q8_0 | [Qwen-Qwen2.5-Math-72B-Q8_0](https://huggingface.co/featherless-ai-quants/Qwen-Qwen2.5-Math-72B-GGUF/tree/main/Qwen-Qwen2.5-Math-72B-Q8_0) | 73683.37 MB (folder) |
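Note that the quants above are stored as folders of split GGUF shards rather than single files; a hypothetical way to fetch one folder is `snapshot_download` with an `allow_patterns` filter (the Q4_K_M folder below is just an example).
```python
# Hypothetical example: download the whole Q4_K_M folder of split shards.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="featherless-ai-quants/Qwen-Qwen2.5-Math-72B-GGUF",
    allow_patterns=["Qwen-Qwen2.5-Math-72B-Q4_K_M/*"],
)
print(local_dir)  # point your GGUF runtime at the first shard inside this folder
```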
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
Ken4070TiS/qubit_arXiv_LoRA_llama3
|
Ken4070TiS
| 2024-11-12T13:36:04Z | 14 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"dataset:Ken4070TiS/qubit_arXiv",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-11T16:34:07Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
datasets:
- Ken4070TiS/qubit_arXiv
---
This model was made by the following steps:
1. Use a web crawler to collect the papers via the arXiv API (a hypothetical sketch of this step is shown below the list).
2. The search keyword is "qubit AND (IBM OR IQM OR Rigetti)", and the time range is 2018 - 2024.
3. The data was collected into JSON with the columns Title, Abstract, Authors, arXiv_id, Date, Author_company.
4. Feed the JSON files to llama-3-8b-bnb-4bit and fine-tune the model with Unsloth on Google Colab; the GPU is an A100.
5. That's it! :)
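For readers who want to reproduce the crawling step, here is a hypothetical sketch using the `arxiv` Python package; the original crawler is not published, so the package choice, result limit, and field mapping are assumptions (in particular, `Author_company` is derived separately rather than returned by the API).
```python
# Hypothetical sketch of steps 1-3; the original crawler code is not published.
import json
import arxiv

search = arxiv.Search(
    query="qubit AND (IBM OR IQM OR Rigetti)",
    max_results=500,  # assumed limit
    sort_by=arxiv.SortCriterion.SubmittedDate,
)

records = []
for paper in arxiv.Client().results(search):
    if 2018 <= paper.published.year <= 2024:
        records.append({
            "Title": paper.title,
            "Abstract": paper.summary,
            "Authors": [a.name for a in paper.authors],
            "arXiv_id": paper.get_short_id(),
            "Date": paper.published.date().isoformat(),
            # "Author_company" is assigned separately, e.g. by matching affiliations.
        })

with open("qubit_arxiv.json", "w") as f:
    json.dump(records, f, indent=2)
```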
# Uploaded model
- **Developed by:** Ken4070TiS
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
personal1802/Locked_Arms
|
personal1802
| 2024-11-12T13:23:45Z | 5 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:Luo-Yihong/yoso_sd1.5_lora",
"base_model:adapter:Luo-Yihong/yoso_sd1.5_lora",
"region:us"
] |
text-to-image
| 2024-11-12T13:23:23Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/WHITE.png
base_model: Luo-Yihong/yoso_sd1.5_lora
instance_prompt: null
---
# Locked_Arms
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/personal1802/Locked_Arms/tree/main) them in the Files & versions tab.
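A hypothetical way to try the LoRA with diffusers; the base Stable Diffusion 1.5 checkpoint, the prompt, and the weight filename are assumptions, since the card does not specify them.
```python
# Hypothetical sketch: load the LoRA on top of a Stable Diffusion 1.5 pipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # substitute any SD 1.5 checkpoint you use
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("personal1802/Locked_Arms")  # pass weight_name=... if the file has a non-default name
image = pipe("portrait photo, arms locked behind the back").images[0]
image.save("locked_arms.png")
```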
|
ADHIZ/omni_rithvik
|
ADHIZ
| 2024-11-12T13:21:44Z | 115 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-11-12T13:20:51Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DrNicefellow/Qwen2.5-Coder-32B-Instruct-5.0bpw-exl2
|
DrNicefellow
| 2024-11-12T13:11:51Z | 6 | 0 | null |
[
"safetensors",
"qwen2",
"base_model:Qwen/Qwen2.5-Coder-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"5-bit",
"exl2",
"region:us"
] | null | 2024-11-12T12:25:44Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-Coder-32B-Instruct
---
This is a 5.0 bpw quantized version of [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) made with [exllamav2](https://github.com/turboderp/exllamav2).
## License
This model is available under the Apache 2.0 License.
## Discord Server
Join our Discord server [here](https://discord.gg/xhcBDEM3).
## Feeling Generous?
Eager to buy me a cup of 2$ coffee or iced tea? Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note on which one you want me to drink?
|
DrNicefellow/Qwen2.5-Coder-32B-Instruct-3.0bpw-exl2
|
DrNicefellow
| 2024-11-12T13:11:36Z | 12 | 1 | null |
[
"safetensors",
"qwen2",
"base_model:Qwen/Qwen2.5-Coder-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"3-bit",
"exl2",
"region:us"
] | null | 2024-11-12T12:26:04Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-Coder-32B-Instruct
---
This is a 3.0 bpw quantized version of [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) made with [exllamav2](https://github.com/turboderp/exllamav2).
## License
This model is available under the Apache 2.0 License.
## Discord Server
Join our Discord server [here](https://discord.gg/xhcBDEM3).
## Feeling Generous?
Eager to buy me a cup of 2$ coffee or iced tea? Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note on which one you want me to drink?
|
DrNicefellow/Qwen2.5-Coder-32B-Instruct-2.0bpw-exl2
|
DrNicefellow
| 2024-11-12T13:11:34Z | 8 | 0 | null |
[
"safetensors",
"qwen2",
"base_model:Qwen/Qwen2.5-Coder-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"2-bit",
"exl2",
"region:us"
] | null | 2024-11-12T12:55:32Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-Coder-32B-Instruct
---
This is a 2.0 bpw quantized version of [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) made with [exllamav2](https://github.com/turboderp/exllamav2).
## License
This model is available under the Apache 2.0 License.
## Discord Server
Join our Discord server [here](https://discord.gg/xhcBDEM3).
## Feeling Generous?
Eager to buy me a cup of 2$ coffee or iced tea? Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note on which one you want me to drink?
|
DrNicefellow/Qwen2.5-Coder-32B-Instruct-6.0bpw-exl2
|
DrNicefellow
| 2024-11-12T13:10:19Z | 5 | 0 | null |
[
"safetensors",
"qwen2",
"base_model:Qwen/Qwen2.5-Coder-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"6-bit",
"exl2",
"region:us"
] | null | 2024-11-12T12:25:17Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-Coder-32B-Instruct
---
This is a 6.0 bpw quantized version of [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) made with [exllamav2](https://github.com/turboderp/exllamav2).
## License
This model is available under the Apache 2.0 License.
## Discord Server
Join our Discord server [here](https://discord.gg/xhcBDEM3).
## Feeling Generous?
Eager to buy me a cup of 2$ coffee or iced tea? Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note on which one you want me to drink?
|
lurker18/GeM2_Llamion_14B_Chat_AWQ_4bit
|
lurker18
| 2024-11-12T13:10:01Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-11-12T11:07:01Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
---
|
lurker18/GeM2_Llamion_14B_Base_AWQ_4bit
|
lurker18
| 2024-11-12T13:09:49Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-11-12T09:56:23Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
---
|
featherless-ai-quants/cstr-llama3.1-8b-spaetzle-v74-GGUF
|
featherless-ai-quants
| 2024-11-12T13:06:23Z | 7 | 0 | null |
[
"gguf",
"text-generation",
"base_model:cstr/llama3.1-8b-spaetzle-v74",
"base_model:quantized:cstr/llama3.1-8b-spaetzle-v74",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-12T12:53:32Z |
---
base_model: cstr/llama3.1-8b-spaetzle-v74
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# cstr/llama3.1-8b-spaetzle-v74 GGUF Quantizations

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [cstr-llama3.1-8b-spaetzle-v74-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/cstr-llama3.1-8b-spaetzle-v74-GGUF/blob/main/cstr-llama3.1-8b-spaetzle-v74-IQ4_XS.gguf) | 4276.63 MB |
| Q2_K | [cstr-llama3.1-8b-spaetzle-v74-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/cstr-llama3.1-8b-spaetzle-v74-GGUF/blob/main/cstr-llama3.1-8b-spaetzle-v74-Q2_K.gguf) | 3031.86 MB |
| Q3_K_L | [cstr-llama3.1-8b-spaetzle-v74-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/cstr-llama3.1-8b-spaetzle-v74-GGUF/blob/main/cstr-llama3.1-8b-spaetzle-v74-Q3_K_L.gguf) | 4121.74 MB |
| Q3_K_M | [cstr-llama3.1-8b-spaetzle-v74-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/cstr-llama3.1-8b-spaetzle-v74-GGUF/blob/main/cstr-llama3.1-8b-spaetzle-v74-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [cstr-llama3.1-8b-spaetzle-v74-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/cstr-llama3.1-8b-spaetzle-v74-GGUF/blob/main/cstr-llama3.1-8b-spaetzle-v74-Q3_K_S.gguf) | 3494.74 MB |
| Q4_K_M | [cstr-llama3.1-8b-spaetzle-v74-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/cstr-llama3.1-8b-spaetzle-v74-GGUF/blob/main/cstr-llama3.1-8b-spaetzle-v74-Q4_K_M.gguf) | 4692.78 MB |
| Q4_K_S | [cstr-llama3.1-8b-spaetzle-v74-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/cstr-llama3.1-8b-spaetzle-v74-GGUF/blob/main/cstr-llama3.1-8b-spaetzle-v74-Q4_K_S.gguf) | 4475.28 MB |
| Q5_K_M | [cstr-llama3.1-8b-spaetzle-v74-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/cstr-llama3.1-8b-spaetzle-v74-GGUF/blob/main/cstr-llama3.1-8b-spaetzle-v74-Q5_K_M.gguf) | 5467.41 MB |
| Q5_K_S | [cstr-llama3.1-8b-spaetzle-v74-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/cstr-llama3.1-8b-spaetzle-v74-GGUF/blob/main/cstr-llama3.1-8b-spaetzle-v74-Q5_K_S.gguf) | 5339.91 MB |
| Q6_K | [cstr-llama3.1-8b-spaetzle-v74-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/cstr-llama3.1-8b-spaetzle-v74-GGUF/blob/main/cstr-llama3.1-8b-spaetzle-v74-Q6_K.gguf) | 6290.45 MB |
| Q8_0 | [cstr-llama3.1-8b-spaetzle-v74-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/cstr-llama3.1-8b-spaetzle-v74-GGUF/blob/main/cstr-llama3.1-8b-spaetzle-v74-Q8_0.gguf) | 8145.12 MB |
---
## Powered by [Featherless AI](https://featherless.ai)
### Key Features
- **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- **Zero Infrastructure** - No server setup or maintenance required
- **Vast Compatibility** - Support for 2400+ models and counting
- **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
prithivMLmods/Qwen2.5-Coder-3B-GGUF
|
prithivMLmods
| 2024-11-12T13:05:23Z | 194 | 8 |
transformers
|
[
"transformers",
"gguf",
"Qwen",
"2.5",
"Coder",
"F16",
"16-bit",
"Q4",
"Q5",
"Q8",
"Llama-cpp",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-Coder-3B",
"base_model:quantized:Qwen/Qwen2.5-Coder-3B",
"license:creativeml-openrail-m",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-12T06:17:26Z |
---
license: creativeml-openrail-m
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-3B
pipeline_tag: text-generation
library_name: transformers
tags:
- Qwen
- '2.5'
- Coder
- F16
- 16-bit
- Q4
- Q5
- Q8
- Llama-cpp
---
## Qwen2.5-Coder-3B-GGUF
| File Name | Size | Description |
|-----------------------------------|---------|-------------------------------------------------------------------|
| `.gitattributes` | 1.77kB | Configuration file for Git and LFS handling. |
| `Qwen2.5-Coder-3B.F16.gguf` | 6.18GB | Full-precision (16-bit) model for coding tasks. |
| `Qwen2.5-Coder-3B.Q4_K_M.gguf` | 1.93GB | Quantized 4-bit model (medium variant) for reduced resource usage.|
| `Qwen2.5-Coder-3B.Q5_K_M.gguf` | 2.22GB | Quantized 5-bit model (medium variant) balancing size and accuracy.|
| `Qwen2.5-Coder-3B.Q8_0.gguf` | 3.29GB | Quantized 8-bit model for improved accuracy in coding tasks. |
| `README.md` | 42B | Initial README with basic information. |
# Run with Ollama
## Overview
Ollama is a powerful tool that allows you to run machine learning models effortlessly. This guide will help you download, install, and run your own GGUF models in just a few minutes.
## Table of Contents
- [Download and Install Ollama](#download-and-install-ollama)
- [Steps to Run GGUF Models](#steps-to-run-gguf-models)
- [1. Create the Model File](#1-create-the-model-file)
- [2. Add the Template Command](#2-add-the-template-command)
- [3. Create and Patch the Model](#3-create-and-patch-the-model)
- [Running the Model](#running-the-model)
- [Sample Usage](#sample-usage)
## Download and Install Ollama
To get started, download Ollama from [https://ollama.com/download](https://ollama.com/download) and install it on your Windows or Mac system.
## Steps to Run GGUF Models
### 1. Create the Model File
First, create a model file and name it appropriately. For example, you can name your model file `metallama`.
### 2. Add the Template Command
In your model file, include a `FROM` line that specifies the base model file you want to use. For instance:
```bash
FROM Qwen2.5-Coder-3B.F16.gguf
```
Ensure that the model file is in the same directory as your script.
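Although this step is named after the `TEMPLATE` command, the snippet above only shows `FROM`. A fuller, hypothetical Modelfile for the Qwen2.5-Coder GGUF in this repo might look like the following; the template string and parameter value are assumptions and should be checked against the model's actual prompt format:
```bash
FROM Qwen2.5-Coder-3B.Q4_K_M.gguf
TEMPLATE """{{ .Prompt }}"""
PARAMETER temperature 0.7
```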
### 3. Create and Patch the Model
Open your terminal and run the following command to create and patch your model:
```bash
ollama create metallama -f ./metallama
```
Once the process is successful, you will see a confirmation message.
To verify that the model was created successfully, you can list all models with:
```bash
ollama list
```
Make sure that `metallama` appears in the list of models.
---
## Running the Model
To run your newly created model, use the following command in your terminal:
```bash
ollama run metallama
```
### Sample Usage
In the command prompt, you can execute:
```bash
D:\>ollama run metallama
```
You can interact with the model like this:
```plaintext
>>> write a mini passage about space x
Space X, the private aerospace company founded by Elon Musk, is revolutionizing the field of space exploration.
With its ambitious goals to make humanity a multi-planetary species and establish a sustainable human presence in
the cosmos, Space X has become a leading player in the industry. The company's spacecraft, like the Falcon 9, have
demonstrated remarkable capabilities, allowing for the transport of crews and cargo into space with unprecedented
efficiency. As technology continues to advance, the possibility of establishing permanent colonies on Mars becomes
increasingly feasible, thanks in part to the success of reusable rockets that can launch multiple times without
sustaining significant damage. The journey towards becoming a multi-planetary species is underway, and Space X
plays a pivotal role in pushing the boundaries of human exploration and settlement.
```
---
## Conclusion
With these simple steps, you can easily download, install, and run your own models using Ollama. Whether you're exploring the capabilities of Llama or building your own custom models, Ollama makes it accessible and efficient.
- This README provides clear instructions and structured information to help users navigate the process of using Ollama effectively. Adjust any sections as needed based on your specific requirements or additional details you may want to include.
|
xxhe/esci-dpo-mistral-7b-instruct-iter-2
|
xxhe
| 2024-11-12T13:05:06Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-12T13:02:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
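A minimal sketch, assuming the checkpoint keeps the standard Mistral-Instruct chat template (the shopping-relevance prompt is only an illustration):
```python
# Minimal sketch; model-specific prompting details are not documented in this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xxhe/esci-dpo-mistral-7b-instruct-iter-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Is a 'stainless steel water bottle' relevant to the query 'insulated flask'?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```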
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
plesniar/tku_nec_checkpoint
|
plesniar
| 2024-11-12T13:04:58Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vits",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2024-11-12T12:39:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
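A minimal sketch, assuming the checkpoint loads as a standard VITS text-to-speech model (the input text and output filename are only illustrations):
```python
# Minimal sketch; language and input conventions for this checkpoint are not documented in this card.
import scipy.io.wavfile
import torch
from transformers import AutoTokenizer, VitsModel

model_id = "plesniar/tku_nec_checkpoint"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = VitsModel.from_pretrained(model_id)

inputs = tokenizer("hello world", return_tensors="pt")
with torch.no_grad():
    waveform = model(**inputs).waveform
scipy.io.wavfile.write("tts_output.wav", rate=model.config.sampling_rate, data=waveform[0].numpy())
```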
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
juampahc/gliner_multi-v2.1-onnx
|
juampahc
| 2024-11-12T12:58:56Z | 16 | 0 |
gliner
|
[
"gliner",
"onnx",
"ONNX",
"GLiNER",
"token-classification",
"multilingual",
"arxiv:2311.08526",
"base_model:microsoft/mdeberta-v3-base",
"base_model:quantized:microsoft/mdeberta-v3-base",
"license:apache-2.0",
"region:us"
] |
token-classification
| 2024-11-12T12:42:04Z |
---
license: apache-2.0
base_model:
- urchade/gliner_multi-v2.1
- microsoft/mdeberta-v3-base
language:
- multilingual
library_name: gliner
tags:
- ONNX
- GLiNER
pipeline_tag: token-classification
---
# About
GLiNER is a Named Entity Recognition (NER) model capable of identifying any entity type using a bidirectional transformer encoder (BERT-like).
It provides a practical alternative to traditional NER models, which are limited to predefined entities, and Large Language Models (LLMs) that,
despite their flexibility, are costly and large for resource-constrained scenarios.
This is the ONNX version without any optimization or quantization. For other versions, check: https://huggingface.co/onnx-community/gliner_multi-v2.1
## Links
* Paper: https://arxiv.org/abs/2311.08526
* Repository: https://github.com/urchade/GLiNER
## Installation
To use this model, you must install the GLiNER Python library:
```
!pip install gliner
```
## Usage
Once you've downloaded the GLiNER library, you can import the GLiNER class. You can then load this model using `GLiNER.from_pretrained` and predict entities with `predict_entities`.
```python
from gliner import GLiNER
model = GLiNER.from_pretrained("juampahc/gliner_multi-v2.1-onnx", load_onnx_model=True, load_tokenizer=True, onnx_model_file="model.onnx")
text = """
Cristiano Ronaldo dos Santos Aveiro (Portuguese pronunciation: [kɾiʃˈtjɐnu ʁɔˈnaldu]; born 5 February 1985) is a Portuguese professional footballer who plays as a forward for and captains both Saudi Pro League club Al Nassr and the Portugal national team. Widely regarded as one of the greatest players of all time, Ronaldo has won five Ballon d'Or awards,[note 3] a record three UEFA Men's Player of the Year Awards, and four European Golden Shoes, the most by a European player. He has won 33 trophies in his career, including seven league titles, five UEFA Champions Leagues, the UEFA European Championship and the UEFA Nations League. Ronaldo holds the records for most appearances (183), goals (140) and assists (42) in the Champions League, goals in the European Championship (14), international goals (128) and international appearances (205). He is one of the few players to have made over 1,200 professional career appearances, the most by an outfield player, and has scored over 850 official senior career goals for club and country, making him the top goalscorer of all time.
"""
labels = ["person", "award", "date", "competitions", "teams"]
entities = model.predict_entities(text, labels)
for entity in entities:
print(entity["text"], "=>", entity["label"])
```
|
bullerwins/Qwen2.5-Coder-32B-exl2_4.0bpw
|
bullerwins
| 2024-11-12T12:56:08Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"qwen",
"qwen-coder",
"codeqwen",
"conversational",
"en",
"arxiv:2409.12186",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-Coder-32B",
"base_model:quantized:Qwen/Qwen2.5-Coder-32B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-11-12T12:50:53Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-32B/blob/main/LICENSE
language:
- en
base_model: Qwen/Qwen2.5-Coder-32B
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- qwen
- qwen-coder
- codeqwen
---
# Qwen2.5-Coder-32B
## Introduction
Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder has covered six mainstream model sizes, 0.5, 1.5, 3, 7, 14, 32 billion parameters, to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:
- Significant improvements in **code generation**, **code reasoning** and **code fixing**. Based on the strong Qwen2.5, we scale up the training tokens to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with its coding abilities matching those of GPT-4o.
- A more comprehensive foundation for real-world applications such as **Code Agents**, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies.
- **Long-context Support** up to 128K tokens.
**This repo contains the 32B Qwen2.5-Coder model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 32.5B
- Number of Parameters (Non-Embedding): 31.0B
- Number of Layers: 64
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: Full 131,072 tokens
- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
**We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., or use this model for fill-in-the-middle tasks.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
## Requirements
The code for Qwen2.5-Coder is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
For deployment, we recommend using vLLM.
Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
## Evaluation & Performance
Detailed evaluation results are reported in this [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{hui2024qwen2,
title={Qwen2. 5-Coder Technical Report},
author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
journal={arXiv preprint arXiv:2409.12186},
year={2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
p0uy4/bert-base-uncased-finetuned-cola
|
p0uy4
| 2024-11-12T12:54:38Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-07T14:32:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
base_model: bert-base-uncased
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
args: cola
metrics:
- type: matthews_correlation
value: 0.5214716883534575
name: Matthews Correlation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4742
- Matthews Correlation: 0.5215
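The card does not include inference code; a minimal sketch of querying this CoLA fine-tune with the `transformers` pipeline is shown below. The repository id is taken from this entry, and the label names are an assumption.
```python
from transformers import pipeline

# Repository id taken from this entry; labels are LABEL_0 / LABEL_1 by default
# unless an id2label mapping was saved with the checkpoint.
classifier = pipeline("text-classification", model="p0uy4/bert-base-uncased-finetuned-cola")

print(classifier("The book was written by the author."))
print(classifier("The book was wrote by the author."))
```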
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.468554830415339e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4392 | 1.0 | 1069 | 0.4742 | 0.5215 |
### Framework versions
- Transformers 4.12.2
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.10.3
|
mattritchey/MedIT-Mesh-3B-Instruct-Q4_K_M-GGUF
|
mattritchey
| 2024-11-12T12:52:36Z | 5 | 1 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:meditsolutions/MedIT-Mesh-3B-Instruct",
"base_model:quantized:meditsolutions/MedIT-Mesh-3B-Instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-12T12:52:25Z |
---
license: mit
language:
- en
base_model: meditsolutions/MedIT-Mesh-3B-Instruct
tags:
- llama-cpp
- gguf-my-repo
---
# mattritchey/MedIT-Mesh-3B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`meditsolutions/MedIT-Mesh-3B-Instruct`](https://huggingface.co/meditsolutions/MedIT-Mesh-3B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/meditsolutions/MedIT-Mesh-3B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo mattritchey/MedIT-Mesh-3B-Instruct-Q4_K_M-GGUF --hf-file medit-mesh-3b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo mattritchey/MedIT-Mesh-3B-Instruct-Q4_K_M-GGUF --hf-file medit-mesh-3b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo mattritchey/MedIT-Mesh-3B-Instruct-Q4_K_M-GGUF --hf-file medit-mesh-3b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo mattritchey/MedIT-Mesh-3B-Instruct-Q4_K_M-GGUF --hf-file medit-mesh-3b-instruct-q4_k_m.gguf -c 2048
```
|
BlackBeenie/Neos-Llama-3.1-8B
|
BlackBeenie
| 2024-11-12T12:50:53Z | 6 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:mlabonne/orpo-dpo-mix-40k",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-12T12:32:57Z |
---
library_name: transformers
license: apache-2.0
datasets:
- mlabonne/orpo-dpo-mix-40k
language:
- en
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Yeonwoo Sung
- **License:** Apache 2.0
- **Finetuned from model:** meta-llama/Llama-3.1-8B-Instruct
### Model Sources [optional]
Trained from [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
## How to Get Started with the Model
You can use this model with Hugging Face Transformers using the code below:
```python
import transformers
import torch
model_id = "BlackBeenie/Neos-Llama-3.1-8B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
outputs = pipeline(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
## Training Details
### Training Data
Trained on [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k).
### Training Procedure
This model was fine-tuned with the ORPO trainer.
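The training code is not included in the card; a minimal sketch of an ORPO run with TRL, using the base model and dataset named above, might look like the following. All hyperparameters are assumptions, and the exact `ORPOTrainer` argument names depend on your TRL version.
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "meta-llama/Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

config = ORPOConfig(
    output_dir="Neos-Llama-3.1-8B",
    beta=0.1,                       # ORPO preference weight (assumed)
    learning_rate=5e-6,             # assumed
    per_device_train_batch_size=2,  # assumed
    num_train_epochs=1,             # assumed
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,     # older TRL versions use `tokenizer=` instead
)
trainer.train()
```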
|
clinno/eightwords-20241112
|
clinno
| 2024-11-12T12:50:40Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"base_model:finetune:NousResearch/Meta-Llama-3-8B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-12T12:47:22Z |
---
library_name: transformers
license: other
base_model: NousResearch/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) on the identity and the eightwords-20241112-alapaca datasets.
It achieves the following results on the evaluation set:
- Loss: 1.6951
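No usage code is provided; a minimal inference sketch with the chat template, assuming the repository id from this entry and an illustrative prompt, is:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id taken from this entry; prompt and generation settings are illustrative.
model_id = "clinno/eightwords-20241112"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Who are you?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```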
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 32.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.7669 | 14.0474 | 2000 | 1.4163 |
| 0.4249 | 28.0948 | 4000 | 1.6929 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
mav23/OLMo-1B-0724-hf-GGUF
|
mav23
| 2024-11-12T12:46:58Z | 35 | 0 | null |
[
"gguf",
"en",
"dataset:allenai/dolma",
"arxiv:2402.00838",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-12T12:35:36Z |
---
license: apache-2.0
datasets:
- allenai/dolma
language:
- en
---
<img src="https://allenai.org/olmo/olmo-7b-animation.gif" alt="OLMo Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Model Card for OLMo 1B July 2024
OLMo 1B July 2024 is the latest version of the original [OLMo 1B](https://huggingface.co/allenai/OLMo-1B) model, rocking a 4.4 point increase in HellaSwag, among other evaluation improvements, thanks to an improved version of the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset and staged training.
**This version is for direct use with HuggingFace Transformers** from v4.40 on.
OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
The OLMo models are trained on the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset.
We release all code, checkpoints, logs, and details involved in training these models.
## Model Details
The core models released in this batch are the following:
| Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
|------|--------|---------|-------------|-----------------|----------------|
| [OLMo 1B July 2024](https://huggingface.co/allenai/OLMo-1B-0724-hf) | 3.05 Trillion | 16 | 2048 | 16 | 4096 |
| [OLMo 7B July 2024](https://huggingface.co/allenai/OLMo-7B-0724-hf) | 2.75 Trillion | 32 | 4096 | 32 | 4096 |
[Coming soon] We are releasing many checkpoints for these models, for every 1000 training steps.
The naming convention is `stepXXX-tokensYYYB`.
To load a specific model revision with HuggingFace, simply add the argument `revision`:
```python
from transformers import AutoModelForCausalLM

olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B-0724-hf", revision="step1000-tokens4B")
```
All revisions/branches are listed in the file `revisions.txt`.
Or, you can access all the revisions for the models via the following code snippet:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("allenai/OLMo-1B-0724-hf")
branches = [b.name for b in out.branches]
```
### Model Description
- **Developed by:** Allen Institute for AI (AI2)
- **Supported by:** Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW
- **Model type:** a Transformer style autoregressive language model.
- **Language(s) (NLP):** English
- **License:** The code and model are released under Apache 2.0.
- **Contact:** Technical inquiries: `olmo at allenai dot org`. Press: `press at allenai dot org`
- **Date cutoff:** Oct. 2023, with most data from Feb./March 2023 based on Dolma dataset version.
### Model Sources
- **Project Page:** https://allenai.org/olmo
- **Repositories:**
- Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo
- Evaluation code: https://github.com/allenai/OLMo-Eval
- Further fine-tuning code: https://github.com/allenai/open-instruct
- **Paper:** [Link](https://arxiv.org/abs/2402.00838)
## Uses
### Inference
Install Transformers. Then proceed as usual with HuggingFace:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B-0724-hf")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B-0724-hf")
message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
# optional verifying cuda
# inputs = {k: v.to('cuda') for k,v in inputs.items()}
# olmo = olmo.to('cuda')
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
>> 'Language modeling is the first step to build natural language generation...'
```
Alternatively, with the pipeline abstraction:
```python
from transformers import pipeline
olmo_pipe = pipeline("text-generation", model="allenai/OLMo-1B-0724-hf")
print(olmo_pipe("Language modeling is "))
>> 'Language modeling is a branch of natural language processing that aims to...'
```
Or, you can make this slightly faster by quantizing the model, e.g. `AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B-0724-hf", torch_dtype=torch.float16, load_in_8bit=True)` (requires `bitsandbytes`).
The quantized model is more sensitive to input data types and CUDA handling, so it is recommended to pass the inputs as `inputs.input_ids.to('cuda')` to avoid potential issues.
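Put together, a quantized inference call might look like the sketch below; the exact quantization kwargs depend on your `transformers`/`bitsandbytes` versions.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# 8-bit loading requires the bitsandbytes package and a CUDA-capable GPU.
olmo = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-1B-0724-hf",
    torch_dtype=torch.float16,
    load_in_8bit=True,
)
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B-0724-hf")

inputs = tokenizer(["Language modeling is "], return_tensors="pt", return_token_type_ids=False)
# Pass the input ids to the GPU explicitly, as recommended above.
response = olmo.generate(inputs.input_ids.to("cuda"), max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```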
### Fine-tuning
Model fine-tuning can be done from the final checkpoint (the `main` revision of this model) or many intermediate checkpoints. Two recipes for tuning are available.
1. Fine-tune with the OLMo repository:
```bash
torchrun --nproc_per_node=8 scripts/train.py {path_to_train_config} \
--data.paths=[{path_to_data}/input_ids.npy] \
--data.label_mask_paths=[{path_to_data}/label_mask.npy] \
--load_path={path_to_checkpoint} \
--reset_trainer_state
```
For more documentation, see the [GitHub readme](https://github.com/allenai/OLMo?tab=readme-ov-file#fine-tuning).
2. Further fine-tuning support is being developed in AI2's Open Instruct repository. Details are [here](https://github.com/allenai/open-instruct).
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
Core model results for the new and original 7B model are found below.
| Task | Llama-7b | Llama2-7b | Falcon-7b | Mpt-7b | OLMo-7B | Llama2-13b | **OLMo 7B 0424** |
|-------------------|----------|-----------|-----------|--------|---------|------------|-------------|
| arc_c | 44.5 | 48.5 | 47.5 | 46.5 | 48.5 | 52.8 | 42.5 |
| arc_e | 67.9 | 69.5 | 70.4 | 70.5 | 65.4 | 73.7 | 67.2 |
| boolq | 75.4 | 80.2 | 74.6 | 74.2 | 73.4 | 82.2 | 83.7 |
| copa | 91.0 | 86.0 | 86.0 | 85.0 | 90.0 | 90.0 | 86.0 |
| hellaswag | 76.2 | 76.8 | 75.9 | 77.6 | 76.4 | 78.6 | 75.5 |
| openbookqa | 51.2 | 48.4 | 53.0 | 48.6 | 50.4 | 51.8 | 50.0 |
| piqa | 77.2 | 76.7 | 78.5 | 77.3 | 78.4 | 79.0 | 77.5 |
| sciq | 93.9 | 94.5 | 93.9 | 93.7 | 93.8 | 95.5 | 96.7 |
| winogrande | 70.5 | 69.4 | 68.9 | 69.9 | 67.9 | 73.5 | 69.8 |
| truthfulQA (MC2) | 33.9 | 38.5 | 34.0 | 33.0 | 36.0 | 36.8 | 35.8 |
| MMLU (5 shot MC) | 31.5 | 45.0 | 24.0 | 30.8 | 28.3 | 55.5 | 52.0 |
| GSM8k | 10.0 | 12.0 | 4.0 | 4.5 | 8.5 | 25.0 | 29.0 |
| Full average | 60.3 | 62.1 | 59.2 | 59.3 | 59.8 | 66.2 | 63.8 |
And for the 1B model:
| task | random | [StableLM 2 1.6b](https://huggingface.co/stabilityai/stablelm-2-1_6b)\* | [Pythia 1B](https://huggingface.co/EleutherAI/pythia-1b) | [TinyLlama 1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T) | OLMo 1B | **OLMo 1B 0724** (ours) |
| ------------- | ------ | ----------------- | --------- | -------------------------------------- | ------- | ---- |
| arc_challenge | 25 | 43.8 | 33.1 | 34.8 | 34.5 | 36.5 |
| arc_easy | 25 | 63.7 | 50.2 | 53.2 | 58.1 | 55.3 |
| boolq | 50 | 76.6 | 61.8 | 64.6 | 60.7 | 67.5 |
| copa | 50 | 84.0 | 72.0 | 78.0 | 79.0 | 83.0 |
| hellaswag | 25 | 68.2 | 44.7 | 58.7 | 62.5 | 66.9 |
| openbookqa | 25 | 45.8 | 37.8 | 43.6 | 46.4 | 46.4 |
| piqa | 50 | 74.0 | 69.1 | 71.1 | 73.7 | 74.9 |
| sciq | 25 | 94.7 | 86.0 | 90.5 | 88.1 | 93.4 |
| winogrande | 50 | 64.9 | 53.3 | 58.9 | 58.9 | 61.4 |
| Average | 36.1 | 68.4 | 56.4 | 61.5 | 62.4 | 65.0 |
\*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not yet disclosed the data StableLM was trained on, making comparisons with other efforts challenging.
## Model Details
### Data
For training data details, please see the [Dolma](https://huggingface.co/datasets/allenai/dolma) documentation.
**This model uses the new 1.7 version with more data sources, better deduplication, and quality filtering**.
During the annealing phase we use a higher quality subset of Dolma with a linearly decaying learning rate to 0.
### Staged training / annealing
In contrast to the first OLMo, we trained OLMo 7B 0424 with a two-stage curriculum:
* In the first stage, we trained the model from scratch on the Dolma 1.7 dataset. We set a cosine learning rate schedule with a warmup of 2500 steps, a peak learning rate of 3e-4, and a cosine decay to 3e-5 after 3T tokens. We cut off this stage after 2T tokens, when the learning rate is still high.
* At this point we switch to the second stage, in which we train on a higher-quality subset of Dolma 1.7 (see below) for another 50B tokens, while linearly decaying the learning rate to 0. Our high-quality subset includes (1) using all available Wikipedia, OpenWebMath and Flan data, (2) removing Dolma CC, CC News, and Megawika, and (3) rebalancing remaining sources to achieve approximately equal proportions of each. See exact token counts and relative proportions of this second stage mix below.
Both stages contribute equally to the final performance of the OLMo model. After the first stage, OLMo 7B 0424 already outperforms the older OLMo. The second stage consistently adds 2 to 3 points of performance on top.
### Architecture
OLMo 7B architecture with peer models for comparison.
| | **OLMo 7B** | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | PaLM 8B |
|------------------------|-------------------|---------------------|--------------------|--------------------|------------------|
| d_model | 4096 | 4096 | 4096 | 4544 | 4096 |
| num heads | 32 | 32 | 32 | 71 | 16 |
| num layers | 32 | 32 | 32 | 32 | 32 |
| MLP ratio | ~8/3 | ~8/3 | ~8/3 | 4 | 4 |
| LayerNorm type | non-parametric LN | RMSNorm | parametric LN | parametric LN | parametric LN |
| pos embeddings | RoPE | RoPE | RoPE | RoPE | RoPE |
| attention variant | full | GQA | full | MQA | MQA |
| biases | none | none | in LN only | in LN only | none |
| block type | sequential | sequential | sequential | parallel | parallel |
| activation | SwiGLU | SwiGLU | SwiGLU | GeLU | SwiGLU |
| sequence length | 2048 | 4096 | 2048 | 2048 | 2048 |
| batch size (instances) | 2160 | 1024 | 2048 | 2304 | 512 |
| batch size (tokens) | ~4M | ~4M | ~4M | ~4M | ~1M |
| weight tying | no | no | no | no | yes |
### Hyperparameters
AdamW optimizer parameters are shown below.
| Size | Peak LR | Betas | Epsilon | Weight Decay |
|------|------------|-----------------|-------------|--------------|
| 1B | 4.0E-4 | (0.9, 0.95) | 1.0E-5 | 0.1 |
| 7B | 3.0E-4 | (0.9, 0.99) | 1.0E-5 | 0.1 |
Optimizer settings comparison with peer models.
| | **OLMo 7B** | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) |
|-----------------------|------------------|---------------------|--------------------|--------------------|
| warmup steps | 5000 | 2000 | 2000 | 1000 |
| peak LR | 3.0E-04 | 3.0E-04 | 3.0E-04 | 6.0E-04 |
| minimum LR | 3.0E-05 | 3.0E-05 | 3.0E-05 | 1.2E-05 |
| weight decay | 0.1 | 0.1 | 0.1 | 0.1 |
| beta1 | 0.9 | 0.9 | 0.9 | 0.99 |
| beta2 | 0.95 | 0.95 | 0.95 | 0.999 |
| epsilon | 1.0E-05 | 1.0E-05 | 1.0E-05 | 1.0E-05 |
| LR schedule | linear | cosine | cosine | cosine |
| gradient clipping | global 1.0 | global 1.0 | global 1.0 | global 1.0 |
| gradient reduce dtype | FP32 | FP32 | FP32 | BF16 |
| optimizer state dtype | FP32 | most likely FP32 | FP32 | FP32 |
## Environmental Impact
OLMo 7B variants were either trained on MI250X GPUs at the LUMI supercomputer, or A100-40GB GPUs provided by MosaicML.
A summary of the environmental impact. Further details are available in the paper.
| | GPU Type | Power Consumption From GPUs | Carbon Intensity (kg COโe/KWh) | Carbon Emissions (tCOโeq) |
|-----------|------------|-----------------------------|--------------------------------|---------------------------|
| OLMo 7B Twin | MI250X ([LUMI supercomputer](https://www.lumi-supercomputer.eu)) | 135 MWh | 0* | 0* |
| OLMo 7B | A100-40GB ([MosaicML](https://www.mosaicml.com)) | 104 MWh | 0.656 | 75.05 |
## Bias, Risks, and Limitations
Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.
Such content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.
Otherwise, many facts from OLMo or any LLM will often not be true, so they should be checked.
## Citation
**BibTeX:**
```
@article{Groeneveld2023OLMo,
title={OLMo: Accelerating the Science of Language Models},
author={Groeneveld, Dirk and Beltagy, Iz and Walsh, Pete and Bhagia, Akshita and Kinney, Rodney and Tafjord, Oyvind and Jha, Ananya Harsh and Ivison, Hamish and Magnusson, Ian and Wang, Yizhong and Arora, Shane and Atkinson, David and Authur, Russell and Chandu, Khyathi and Cohan, Arman and Dumas, Jennifer and Elazar, Yanai and Gu, Yuling and Hessel, Jack and Khot, Tushar and Merrill, William and Morrison, Jacob and Muennighoff, Niklas and Naik, Aakanksha and Nam, Crystal and Peters, Matthew E. and Pyatkin, Valentina and Ravichander, Abhilasha and Schwenk, Dustin and Shah, Saurabh and Smith, Will and Subramani, Nishant and Wortsman, Mitchell and Dasigi, Pradeep and Lambert, Nathan and Richardson, Kyle and Dodge, Jesse and Lo, Kyle and Soldaini, Luca and Smith, Noah A. and Hajishirzi, Hannaneh},
journal={Preprint},
year={2024}
}
```
**APA:**
Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint.
## Model Card Contact
For errors in this model card, contact Nathan, `{nathanl} at allenai dot org`.
|
mradermacher/NeuralMonarch-7B-GGUF
|
mradermacher
| 2024-11-12T12:39:49Z | 42 | 0 |
transformers
|
[
"transformers",
"gguf",
"merge",
"lazymergekit",
"dpo",
"rlhf",
"en",
"base_model:mlabonne/NeuralMonarch-7B",
"base_model:quantized:mlabonne/NeuralMonarch-7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-11T18:27:06Z |
---
base_model: mlabonne/NeuralMonarch-7B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- merge
- lazymergekit
- dpo
- rlhf
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mlabonne/NeuralMonarch-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/NeuralMonarch-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
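As an alternative to llama.cpp's CLI, the sketch below downloads one of the files listed in the table further down (Q4_K_M, the 'fast, recommended' quant) and runs it with the `llama-cpp-python` bindings; the prompt and context size are illustrative.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # requires the llama-cpp-python package

# Download the Q4_K_M quant from this repository.
path = hf_hub_download(
    repo_id="mradermacher/NeuralMonarch-7B-GGUF",
    filename="NeuralMonarch-7B.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm("Write a haiku about model merging.", max_tokens=64)
print(out["choices"][0]["text"])
```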
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NeuralMonarch-7B-GGUF/resolve/main/NeuralMonarch-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralMonarch-7B-GGUF/resolve/main/NeuralMonarch-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralMonarch-7B-GGUF/resolve/main/NeuralMonarch-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralMonarch-7B-GGUF/resolve/main/NeuralMonarch-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralMonarch-7B-GGUF/resolve/main/NeuralMonarch-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralMonarch-7B-GGUF/resolve/main/NeuralMonarch-7B.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralMonarch-7B-GGUF/resolve/main/NeuralMonarch-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralMonarch-7B-GGUF/resolve/main/NeuralMonarch-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralMonarch-7B-GGUF/resolve/main/NeuralMonarch-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralMonarch-7B-GGUF/resolve/main/NeuralMonarch-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralMonarch-7B-GGUF/resolve/main/NeuralMonarch-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralMonarch-7B-GGUF/resolve/main/NeuralMonarch-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralMonarch-7B-GGUF/resolve/main/NeuralMonarch-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mergekit-community/qwen-2.5-o1.like-pluse
|
mergekit-community
| 2024-11-12T12:36:58Z | 13 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:C10X/LongWriter-Qwen2.5-7B-Instruct",
"base_model:finetune:C10X/LongWriter-Qwen2.5-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-12T12:32:05Z |
---
base_model:
- C10X/01
- C10X/LongWriter-Qwen2.5-7B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [C10X/01](https://huggingface.co/C10X/01)
* [C10X/LongWriter-Qwen2.5-7B-Instruct](https://huggingface.co/C10X/LongWriter-Qwen2.5-7B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: C10X/01
- model: C10X/LongWriter-Qwen2.5-7B-Instruct
merge_method: slerp
base_model: C10X/01
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0]
```
|
DrNicefellow/Qwen2.5-Coder-32B-Instruct-7.0bpw-exl2
|
DrNicefellow
| 2024-11-12T12:32:54Z | 9 | 0 | null |
[
"safetensors",
"qwen2",
"base_model:Qwen/Qwen2.5-Coder-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"7-bit",
"exl2",
"region:us"
] | null | 2024-11-12T12:22:05Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-Coder-32B-Instruct
---
This is a 7.0 bpw quantized version of [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) made with [exllamav2](https://github.com/turboderp/exllamav2).
## License
This model is available under the Apache 2.0 License.
## Discord Server
Join our Discord server [here](https://discord.gg/xhcBDEM3).
## Feeling Generous? ๐
Eager to buy me a cup of $2 coffee or iced tea? ๐ต Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note on which one you want me to drink.
|
DrNicefellow/Qwen2.5-Coder-32B-Instruct-8.0bpw-exl2
|
DrNicefellow
| 2024-11-12T12:32:35Z | 7 | 0 | null |
[
"safetensors",
"qwen2",
"base_model:Qwen/Qwen2.5-Coder-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"8-bit",
"exl2",
"region:us"
] | null | 2024-11-12T12:21:29Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-Coder-32B-Instruct
---
This is an 8.0 bpw quantized version of [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) made with [exllamav2](https://github.com/turboderp/exllamav2).
## License
This model is available under the Apache 2.0 License.
## Discord Server
Join our Discord server [here](https://discord.gg/xhcBDEM3).
## Feeling Generous? ๐
Eager to buy me a cup of $2 coffee or iced tea? ๐ต Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note on which one you want me to drink.
|
dcrowleymunster/donalDistiLBERTSunderland1Epoch-DistlB-NonMLM-QA-4-epochs
|
dcrowleymunster
| 2024-11-12T12:29:13Z | 24 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-11-12T12:29:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
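No starter code is given; a minimal question-answering sketch with the `transformers` pipeline, assuming the repository id from this entry and an illustrative question/context pair, is:
```python
from transformers import pipeline

# Repository id taken from this entry; the question and context are illustrative.
qa = pipeline(
    "question-answering",
    model="dcrowleymunster/donalDistiLBERTSunderland1Epoch-DistlB-NonMLM-QA-4-epochs",
)

result = qa(
    question="Which city is the passage about?",
    context="Sunderland is a port city in Tyne and Wear, in the north east of England.",
)
print(result["answer"], result["score"])
```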
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
featherless-ai-quants/ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-GGUF
|
featherless-ai-quants
| 2024-11-12T12:28:21Z | 6 | 0 | null |
[
"gguf",
"text-generation",
"base_model:ArianAskari/SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta",
"base_model:quantized:ArianAskari/SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-12T12:20:00Z |
---
base_model: ArianAskari/SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# ArianAskari/SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta GGUF Quantizations ๐

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations ๐
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-GGUF/blob/main/ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-GGUF/blob/main/ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-GGUF/blob/main/ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-GGUF/blob/main/ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-GGUF/blob/main/ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-GGUF/blob/main/ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-GGUF/blob/main/ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-GGUF/blob/main/ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-GGUF/blob/main/ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-GGUF/blob/main/ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-GGUF/blob/main/ArianAskari-SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta-Q8_0.gguf) | 7339.34 MB |
---
## โก Powered by [Featherless AI](https://featherless.ai)
### Key Features
- ๐ฅ **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- ๐ ๏ธ **Zero Infrastructure** - No server setup or maintenance required
- ๐ **Vast Compatibility** - Support for 2400+ models and counting
- ๐ **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|