modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-09-05 12:28:30
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 539
values | tags
listlengths 1
4.05k
| pipeline_tag
stringclasses 55
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-09-05 12:28:13
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
mingineer/Qwen2.5-Coder-7B-Instruct-Q4-mlx
|
mingineer
| 2024-11-01T09:11:21Z | 76 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"mlx",
"mlx-my-repo",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] |
text-generation
| 2024-11-01T09:11:04Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct/blob/main/LICENSE
language:
- en
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- mlx
- mlx-my-repo
---
# mingineer/Qwen2.5-Coder-7B-Instruct-Q4-mlx
The Model [mingineer/Qwen2.5-Coder-7B-Instruct-Q4-mlx](https://huggingface.co/mingineer/Qwen2.5-Coder-7B-Instruct-Q4-mlx) was converted to MLX format from [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) using mlx-lm version **0.19.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mingineer/Qwen2.5-Coder-7B-Instruct-Q4-mlx")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
mradermacher/Breeze-7B-Cantonese-v0.1-GGUF
|
mradermacher
| 2024-11-01T09:10:32Z | 17 | 0 |
transformers
|
[
"transformers",
"gguf",
"cantonese",
"yue",
"hong kong",
"香港",
"廣東話",
"粵語",
"zh",
"en",
"dataset:hon9kon9ize/yue-alpaca",
"dataset:indiejoseph/wikipedia-translate-zhhk-zhcn",
"dataset:indiejoseph/wikipedia-zh-yue-summaries",
"dataset:indiejoseph/wikipedia-zh-yue-qa",
"base_model:kennylam/Breeze-7B-Cantonese-v0.1",
"base_model:quantized:kennylam/Breeze-7B-Cantonese-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-01T08:55:47Z |
---
base_model: kennylam/Breeze-7B-Cantonese-v0.1
datasets:
- hon9kon9ize/yue-alpaca
- indiejoseph/wikipedia-translate-zhhk-zhcn
- indiejoseph/wikipedia-zh-yue-summaries
- indiejoseph/wikipedia-zh-yue-qa
language:
- zh
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- cantonese
- yue
- hong kong
- 香港
- 廣東話
- 粵語
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/kennylam/Breeze-7B-Cantonese-v0.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.Q8_0.gguf) | Q8_0 | 8.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Breeze-7B-Cantonese-v0.1-GGUF/resolve/main/Breeze-7B-Cantonese-v0.1.f16.gguf) | f16 | 15.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-GGUF
|
mradermacher
| 2024-11-01T09:00:11Z | 16 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-31T14:41:39Z |
---
base_model: LeroyDyer/SpydazWeb_AI_HumanAI_Medical_Coder_001
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/LeroyDyer/SpydazWeb_AI_HumanAI_Medical_Coder_001
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-i1-GGUF
|
mradermacher
| 2024-11-01T09:00:10Z | 72 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-01T07:49:10Z |
---
base_model: LeroyDyer/SpydazWeb_AI_HumanAI_Medical_Coder_001
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/LeroyDyer/SpydazWeb_AI_HumanAI_Medical_Coder_001
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-i1-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-i1-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-i1-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-i1-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-i1-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-i1-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-i1-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-i1-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-i1-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-i1-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-i1-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-i1-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-i1-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-i1-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-i1-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-i1-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-i1-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-i1-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-i1-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-i1-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-i1-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-i1-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-i1-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_HumanAI_Medical_Coder_001-i1-GGUF/resolve/main/SpydazWeb_AI_HumanAI_Medical_Coder_001.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Xu-Ouyang/pythia-12b-deduped-int3-step4-GPTQ-wikitext2
|
Xu-Ouyang
| 2024-11-01T08:52:11Z | 75 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-11-01T08:41:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
abhishkgoel/PEFT_expo
|
abhishkgoel
| 2024-11-01T08:40:17Z | 103 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-11-01T08:34:45Z |
---
library_name: transformers
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: PEFT_expo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PEFT_expo
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8694 | 1.0 | 19 | 0.2720 |
| 0.1839 | 2.0 | 38 | 0.2656 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
hidonbush/paper-cutting
|
hidonbush
| 2024-11-01T08:34:36Z | 35 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"segformer",
"generated_from_trainer",
"en",
"zh",
"dataset:hidonbush/paper-cuttingv0.1",
"base_model:nvidia/mit-b5",
"base_model:finetune:nvidia/mit-b5",
"endpoints_compatible",
"region:us"
] | null | 2024-10-30T07:26:22Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: paper-cutting
results: []
datasets:
- hidonbush/paper-cuttingv0.1
language:
- en
- zh
metrics:
- accuracy
base_model:
- nvidia/mit-b5
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paper-cutting
This model was a finetuned version of nvidia/mit-b5 on the paper-cutting datasetv0.1.
It was trained to extract body contents from any resources like articles and books, just like cutting them off the paper.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
paper-cutting v0.1
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
life/retrofuturereality
|
life
| 2024-11-01T08:27:10Z | 18 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-01T08:27:03Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: A person in a bustling cafe retrofuturereality
output:
url: samples/1730449588335__000001000_0.jpg
- text: a white spaceship in the middle of a space station, with a watermark in
the top right corner. The spaceship appears to be in the process of being
built, as evidenced by the various tools and materials scattered around it.
retrofuturereality
output:
url: samples/1730449604541__000001000_1.jpg
- text: a man and woman standing next to each other in a room, smiling. The woman
is wearing a necklace and the man is wearing formal dress. In the background,
there are a number of people and lights retrofuturereality
output:
url: samples/1730449620769__000001000_2.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: retrofuturereality
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# retrofuturereality
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
You should use `retrofuturereality` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/life/retrofuturereality/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('life/retrofuturereality', weight_name='retrofuturereality.safetensors')
image = pipeline('A person in a bustling cafe retrofuturereality').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
allknowingroger/Qwen2.5-7B-task4
|
allknowingroger
| 2024-11-01T08:23:52Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"base_model:KPEP/krx-qwen-2.5-7b-v1.4.2",
"base_model:merge:KPEP/krx-qwen-2.5-7b-v1.4.2",
"base_model:Qwen/Qwen2.5-7B",
"base_model:merge:Qwen/Qwen2.5-7B",
"base_model:Tsunami-th/Tsunami-0.5x-7B-Instruct",
"base_model:merge:Tsunami-th/Tsunami-0.5x-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-01T08:19:20Z |
---
base_model:
- KPEP/krx-qwen-2.5-7b-v1.4.2
- Tsunami-th/Tsunami-0.5x-7B-Instruct
- Qwen/Qwen2.5-7B
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) as a base.
### Models Merged
The following models were included in the merge:
* [KPEP/krx-qwen-2.5-7b-v1.4.2](https://huggingface.co/KPEP/krx-qwen-2.5-7b-v1.4.2)
* [Tsunami-th/Tsunami-0.5x-7B-Instruct](https://huggingface.co/Tsunami-th/Tsunami-0.5x-7B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Tsunami-th/Tsunami-0.5x-7B-Instruct
parameters:
weight: 1.0
- model: KPEP/krx-qwen-2.5-7b-v1.4.2
parameters:
weight: 1.0
merge_method: task_arithmetic
base_model: Qwen/Qwen2.5-7B
parameters:
normalize: true
dtype: bfloat16
```
|
gitgato/tessy-LoRA
|
gitgato
| 2024-11-01T08:20:26Z | 46 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-09-25T01:56:33Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: photo of tessy a beautiful woman
parameters:
negative_prompt: Low quality
output:
url: images/Imagen de WhatsApp 2024-09-24 a las 13.59.54_6e906e0c.jpg
base_model:
- stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: tessy
license: mit
---
# tessy-LoRA
<Gallery />
## Model description
Janesde

## Trigger words
You should use `tessy` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/gitgato/tessy-LoRA/tree/main) them in the Files & versions tab.
|
prkhar05/pixart-personal-model-msteps
|
prkhar05
| 2024-11-01T08:11:35Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:PixArt-alpha/PixArt-XL-2-512x512",
"base_model:adapter:PixArt-alpha/PixArt-XL-2-512x512",
"region:us"
] | null | 2024-11-01T06:31:51Z |
---
base_model: PixArt-alpha/PixArt-XL-2-512x512
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
minhdang/bge-base-financial-matryoshka_pass_2
|
minhdang
| 2024-11-01T08:10:57Z | 7 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:107510",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:bkai-foundation-models/vietnamese-bi-encoder",
"base_model:finetune:bkai-foundation-models/vietnamese-bi-encoder",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-11-01T08:10:37Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:107510
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: bkai-foundation-models/vietnamese-bi-encoder
widget:
- source_sentence: '[''Hình thức xử phạt và thời hiệu xử phạt vi phạm hành chính\n...\n4.
Thời hiệu xử phạt vi phạm hành chính đối với lĩnh vực kinh doanh xổ số:\na) Thời
hiệu xử phạt vi phạm hành chính trong lĩnh vực kinh doanh xổ số là 01 năm.\nb)
Đối với hành vi vi phạm hành chính trong lĩnh vực kinh doanh xổ số đang được thực
hiện thì thời hiệu được tính từ ngày người có thẩm quyền thi hành công vụ phát
hiện hành vi vi phạm. Đối với hành vi vi phạm hành chính đã kết thúc thì thời
hiệu được tính từ ngày chấm dứt hành vi vi phạm. Thời điểm chấm dứt hành vi vi
phạm để tính thời hiệu xử phạt đối với một số hành vi vi phạm tại Chương 3 Nghị
định này được quy định như sau:\n- Đối với hành vi sửa chữa, tẩy xoá làm thay
đổi nội dung Giấy chứng nhận đủ điều kiện kinh doanh, các tài liệu trong hồ sơ
đã được làm đại lý xổ số quy định tại khoản 1 Điều 35 và khoản 1 Điều 41 Nghị
định này nếu không xác định được ngày sửa chữa, tẩy xoá làm thay đổi nội dung
Giấy chứng nhận đủ điều kiện kinh doanh, các tài liệu trong hồ sơ đã được làm
đại lý xổ số thì thời điểm chấm dứt hành vi vi phạm là ngày phát hiện Giấy chứng
nhận đủ điều kiện kinh doanh bị sửa chữa, tẩy xóa làm thay đổi nội dung;\n- Đối
với hành vi không xây dựng và ban hành quy chế quy định chi tiết quy trình tổ
chức thu hồi vé xổ số không tiêu thụ hết, không xây dựng và công bố công khai
thể lệ quay số mở thưởng, không ban hành Quy chế quản lý, khai thác dữ liệu máy
chủ kinh doanh xổ số điện toán quy định tại khoản 1 Điều 40, khoản 1 Điều 44 và
khoản 1 Điều 49 Nghị định này, thời điểm chấm dứt hành vi vi phạm là ngày thực
hiện ban hành quy chế quy định chi tiết quy trình tổ chức thu hồi vé xổ số không
tiêu thụ hết, công bố công khai thể lệ quay số mở thưởng, ban hành Quy chế quản
lý, khai thác dữ liệu máy chủ kinh doanh xổ số điện toán;\n- Đối với hành vi vi
phạm quy định về chế độ báo cáo quy định tại Điều 51 Nghị định này, thời điểm
chấm dứt hành vi vi phạm là ngày thực hiện báo cáo.'']'
sentences:
- Hình thức đấu giá bằng bỏ phiếu gián tiếp được pháp luật quy định như thế nào?
- Thường trực Hội đồng tư vấn đặc xá là cơ quan nào?
- Thời hiệu xử phạt cơ sở kinh doanh xổ số phát hành vé xổ số quá hạn mức là bao
lâu?
- source_sentence: "['Thanh lý hợp đồng thực hiện nhiệm vụ\\nCăn cứ Hồ sơ đề nghị\
\ nghiệm thu, thanh lý hợp đồng thực hiện nhiệm vụ của cơ quan chủ trì thực hiện,\
\ việc thanh lý hợp đồng đã ký kết trong thời hạn 10 ngày được thực hiện kể từ\
\ ngày cơ quan quản lý nhiệm vụ tiếp nhận đầy đủ sản phẩm đã được chỉnh sửa theo\
\ ý kiến của Hội đồng nghiệm thu nhiệm vụ cấp Bộ.\\nĐối với các nhiệm vụ thường\
\ xuyên hàng năm quy định tại điểm b, điểm h, điểm k khoản 1 Điều 3 Thông tư này\
\ được cơ quan quản lý nhiệm vụ xác nhận hoàn thành thì văn bản xác nhận hoàn\
\ thành nhiệm vụ là căn cứ nghiệm thu, thanh lý nhiệm vụ của cơ quan chủ trì thực\
\ hiện.\\nBiên bản nghiệm thu và thanh lý hợp đồng đối với các nhiệm vụ ký hợp\
\ đồng thực hiện theo mẫu B3a-HĐMT được quy định tại mẫu B6a-BBTLMT. Biên bản\
\ nghiệm thu và thanh lý hợp đồng đối với các nhiệm vụ ký hợp đồng thực hiện theo\
\ mẫu B3b-HĐBĐKH được quy định tại mẫu B6b-BBTLBĐKH.'\n 'Thanh lý hợp đồng nhiệm\
\ vụ bảo vệ môi trường\\nCăn cứ Biên bản nghiệm thu kết quả thực hiện nhiệm vụ\
\ bảo vệ môi trường, việc thanh lý hợp đồng đã ký kết với đơn vị chủ trì trong\
\ thời hạn 10 ngày được thực hiện kể từ ngày tiếp nhận đầy đủ sản phẩm đã được\
\ chỉnh sửa theo ý kiến của Hội đồng nghiệm thu nhiệm vụ bảo vệ môi trường. Biên\
\ bản thanh lý hợp đồng được quy định tại mẫu B6a-BBTLHĐ-BCT.']"
sentences:
- Tổn thương gân chày trước chủ yếu gặp trong các vết thương ở vùng nào?
- Hội đồng Lý luận Trung ương họp mỗi quý mấy lần?
- Thời hạn thanh lý hợp đồng nhiệm vụ bảo vệ môi trường ngành Công thương sử dụng
nguồn kinh phí sự nghiệp môi trường là bao lâu?
- source_sentence: '[''Đối tượng áp dụng\n1. Cán bộ, công chức của các đơn vị thuộc
Ủy ban Dân tộc được Bộ trưởng, Chủ nhiệm Ủy ban Dân tộc (sau đây gọi tắt là Bộ
trưởng, Chủ nhiệm) giao nhiệm vụ hoặc phân công làm nhiệm vụ tiếp công dân, xử
lý đơn khiếu nại, tố cáo, kiến nghị, phản ánh tại trụ sở và các địa điểm tiếp
công dân thuộc Ủy ban Dân tộc.\n2. Bộ trưởng, Chủ nhiệm, các Thứ trưởng, Phó Chủ
nhiệm Ủy ban Dân tộc có trách nhiệm tiếp công dân định kỳ hoặc đột xuất; công
chức trong các đơn vị thuộc Ủy ban Dân tộc được Bộ trưởng, Chủ nhiệm triệu tập
làm nhiệm vụ tiếp công dân, xử lý đơn khiếu nại, tố cáo, kiến nghị, phản ánh tại
trụ sở và các địa điểm tiếp công dân thuộc Ủy ban Dân tộc.\n3. Công chức, người
tham gia tiếp công dân thuộc Ủy ban Dân tộc được Bộ trưởng, Chủ nhiệm giao nhiệm
vụ hoặc phân công phối hợp tiếp công dân, giữ gìn an ninh, trật tự, bảo đảm y
tế tại trụ sở và các địa điểm tiếp công dân của Ủy ban Dân tộc.\n4. Cán bộ, công
chức của các tổ chức thuộc Ủy ban Dân tộc được Bộ trưởng, Chủ nhiệm giao nhiệm
vụ chuyên trách xử lý đơn khiếu nại, tố cáo, kiến nghị, phản ánh.'']'
sentences:
- Công chức của đơn vị có được hưởng chế độ bồi dưỡng khi nhận nhiệm vụ tiếp công
dân tại các địa điểm tiếp công dân thuộc Ủy ban Dân tộc hay không?
- Người trúng xổ số Vietlott có được bảo mật thông tin trước đại chúng?
- Việc công bố giá trị doanh nghiệp được cơ quan đại diện chủ sở hữu thực hiện trong
thời hạn bao nhiêu ngày? Kể từ thời điểm nào?
- source_sentence: '[''Hình thức tổ chức, nội dung và chương trình đào tạo nghiệp
vụ thẩm định giá\n1. Khóa đào tạo nghiệp vụ thẩm định giá được tổ chức tập trung
một kỳ liên tục hoặc nhiều kỳ nhưng không kéo dài quá 3 (ba) tháng cho một khóa
học và phải đảm bảo dạy và học đủ thời lượng, nội dung và chương trình theo quy
định tại khoản 2 Điều này.\n...'']'
sentences:
- Thời gian áp dụng biện pháp cách ly y tế được pháp luật quy định như thế nào?
- Khi thực hiện khuyến mại cung ứng dịch vụ thông tin di động mẫu để khách hàng
dùng thử không phải trả tiền, doanh nghiệp viễn thông có cần đăng ký khuyến mại
không?
- Một khóa đào tạo nghiệp vụ thẩm định giá kéo dài bao lâu?
- source_sentence: '[''Tiêu chuẩn Chi cục trưởng, Phó Chi cục trưởng thuộc Cục Thuế\n1.
Vị trí và nhiệm vụ\na) Chi cục trưởng Chi cục Thuế là người đứng đầu Chi cục Thuế,
chịu trách nhiệm trước Cục trưởng Cục Thuế và trước pháp luật về toàn bộ hoạt
động nhiệm vụ của đơn vị được cấp có thẩm quyền giao nhiệm vụ quản lý nhà nước
trên địa bàn quận, huyện, thị xã, thành phố thuộc tỉnh.\nb) Phó Chi cục trưởng
Chi cục Thuế là người giúp việc Chi cục trưởng, chịu trách nhiệm trước Chi cục
trưởng và trước pháp luật về lĩnh vực công tác được phân công; thay mặt Chi cục
trưởng điều hành, giải quyết các công việc của Chi cục khi được Chi cục trưởng
ủy quyền, giao nhiệm vụ.'']'
sentences:
- Nhiệm vụ của Chi cục trưởng thuộc Cục Thuế như thế nào?
- Việc đánh giá chất lượng dịch vụ sự nghiệp công về xây dựng cơ sở dữ liệu được
thực hiện theo phương thức nào?
- Khoản phụ cấp chuyên cần có tính vào lương để tính tiền lương tăng ca, lương làm
thêm giờ hay không?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on bkai-foundation-models/vietnamese-bi-encoder
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.26527708019420726
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.4377197388247112
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.5174116859199732
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.6099112673698309
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.26527708019420726
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.14590657960823708
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.10348233718399463
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.060991126736983085
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.26527708019420726
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.4377197388247112
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5174116859199732
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6099112673698309
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4285225723707542
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.37149118785859175
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.38082252053876386
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.26586305039343716
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.43227858697471955
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.5082872928176796
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.6015402645236899
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.26586305039343716
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.1440928623249065
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1016574585635359
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.06015402645236899
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.26586305039343716
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.43227858697471955
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5082872928176796
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6015402645236899
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4244877080296015
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.36887667785457956
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.3780890557065138
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.2483676544450025
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.4107651096601373
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.4801607232546459
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.5700652938222
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.2483676544450025
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.13692170322004574
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.09603214465092917
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.05700652938221999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.2483676544450025
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.4107651096601373
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.4801607232546459
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.5700652938222
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.40061709420771235
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.34734958105124125
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.35675125361493826
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.22141302528042858
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.3701657458563536
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.4385568391093253
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.5179976561192031
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.22141302528042858
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.12338858195211787
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.08771136782186506
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.051799765611920304
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.22141302528042858
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.3701657458563536
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.4385568391093253
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.5179976561192031
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.3619435400628976
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.3128400221632284
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.32179789892986727
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.1616440649589821
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.27749874434957306
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.33433785367487023
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.4103465595178302
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.1616440649589821
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.09249958144985769
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.06686757073497404
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.04103465595178302
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.1616440649589821
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.27749874434957306
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.33433785367487023
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.4103465595178302
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.27713659801328827
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.23557945277558567
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.24398402076434567
name: Cosine Map@100
---
# SentenceTransformer based on bkai-foundation-models/vietnamese-bi-encoder
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [bkai-foundation-models/vietnamese-bi-encoder](https://huggingface.co/bkai-foundation-models/vietnamese-bi-encoder) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [bkai-foundation-models/vietnamese-bi-encoder](https://huggingface.co/bkai-foundation-models/vietnamese-bi-encoder) <!-- at revision 84f9d9ada0d1a3c37557398b9ae9fcedcdf40be0 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("minhdang/bge-base-financial-matryoshka_pass_2")
# Run inference
sentences = [
"['Tiêu chuẩn Chi cục trưởng, Phó Chi cục trưởng thuộc Cục Thuế\\n1. Vị trí và nhiệm vụ\\na) Chi cục trưởng Chi cục Thuế là người đứng đầu Chi cục Thuế, chịu trách nhiệm trước Cục trưởng Cục Thuế và trước pháp luật về toàn bộ hoạt động nhiệm vụ của đơn vị được cấp có thẩm quyền giao nhiệm vụ quản lý nhà nước trên địa bàn quận, huyện, thị xã, thành phố thuộc tỉnh.\\nb) Phó Chi cục trưởng Chi cục Thuế là người giúp việc Chi cục trưởng, chịu trách nhiệm trước Chi cục trưởng và trước pháp luật về lĩnh vực công tác được phân công; thay mặt Chi cục trưởng điều hành, giải quyết các công việc của Chi cục khi được Chi cục trưởng ủy quyền, giao nhiệm vụ.']",
'Nhiệm vụ của Chi cục trưởng thuộc Cục Thuế như thế nào?',
'Khoản phụ cấp chuyên cần có tính vào lương để tính tiền lương tăng ca, lương làm thêm giờ hay không?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.2653 |
| cosine_accuracy@3 | 0.4377 |
| cosine_accuracy@5 | 0.5174 |
| cosine_accuracy@10 | 0.6099 |
| cosine_precision@1 | 0.2653 |
| cosine_precision@3 | 0.1459 |
| cosine_precision@5 | 0.1035 |
| cosine_precision@10 | 0.061 |
| cosine_recall@1 | 0.2653 |
| cosine_recall@3 | 0.4377 |
| cosine_recall@5 | 0.5174 |
| cosine_recall@10 | 0.6099 |
| cosine_ndcg@10 | 0.4285 |
| cosine_mrr@10 | 0.3715 |
| **cosine_map@100** | **0.3808** |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.2659 |
| cosine_accuracy@3 | 0.4323 |
| cosine_accuracy@5 | 0.5083 |
| cosine_accuracy@10 | 0.6015 |
| cosine_precision@1 | 0.2659 |
| cosine_precision@3 | 0.1441 |
| cosine_precision@5 | 0.1017 |
| cosine_precision@10 | 0.0602 |
| cosine_recall@1 | 0.2659 |
| cosine_recall@3 | 0.4323 |
| cosine_recall@5 | 0.5083 |
| cosine_recall@10 | 0.6015 |
| cosine_ndcg@10 | 0.4245 |
| cosine_mrr@10 | 0.3689 |
| **cosine_map@100** | **0.3781** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.2484 |
| cosine_accuracy@3 | 0.4108 |
| cosine_accuracy@5 | 0.4802 |
| cosine_accuracy@10 | 0.5701 |
| cosine_precision@1 | 0.2484 |
| cosine_precision@3 | 0.1369 |
| cosine_precision@5 | 0.096 |
| cosine_precision@10 | 0.057 |
| cosine_recall@1 | 0.2484 |
| cosine_recall@3 | 0.4108 |
| cosine_recall@5 | 0.4802 |
| cosine_recall@10 | 0.5701 |
| cosine_ndcg@10 | 0.4006 |
| cosine_mrr@10 | 0.3473 |
| **cosine_map@100** | **0.3568** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.2214 |
| cosine_accuracy@3 | 0.3702 |
| cosine_accuracy@5 | 0.4386 |
| cosine_accuracy@10 | 0.518 |
| cosine_precision@1 | 0.2214 |
| cosine_precision@3 | 0.1234 |
| cosine_precision@5 | 0.0877 |
| cosine_precision@10 | 0.0518 |
| cosine_recall@1 | 0.2214 |
| cosine_recall@3 | 0.3702 |
| cosine_recall@5 | 0.4386 |
| cosine_recall@10 | 0.518 |
| cosine_ndcg@10 | 0.3619 |
| cosine_mrr@10 | 0.3128 |
| **cosine_map@100** | **0.3218** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.1616 |
| cosine_accuracy@3 | 0.2775 |
| cosine_accuracy@5 | 0.3343 |
| cosine_accuracy@10 | 0.4103 |
| cosine_precision@1 | 0.1616 |
| cosine_precision@3 | 0.0925 |
| cosine_precision@5 | 0.0669 |
| cosine_precision@10 | 0.041 |
| cosine_recall@1 | 0.1616 |
| cosine_recall@3 | 0.2775 |
| cosine_recall@5 | 0.3343 |
| cosine_recall@10 | 0.4103 |
| cosine_ndcg@10 | 0.2771 |
| cosine_mrr@10 | 0.2356 |
| **cosine_map@100** | **0.244** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 107,510 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 34 tokens</li><li>mean: 209.22 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 25.12 tokens</li><li>max: 53 tokens</li></ul> |
* Samples:
| positive | anchor |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------|
| <code>['Điều kiện thực hiện các quyền chuyển đổi, chuyển nhượng, cho thuê, cho thuê lại, thừa kế, tặng cho, thế chấp quyền sử dụng đất; góp vốn bằng quyền sử dụng đất\n1. Người sử dụng đất được thực hiện các quyền chuyển đổi, chuyển nhượng, cho thuê, cho thuê lại, thừa kế, tặng cho, thế chấp quyền sử dụng đất; góp vốn bằng quyền sử dụng đất khi có các điều kiện sau đây:\na) Có Giấy chứng nhận, trừ trường hợp quy định tại khoản 3 Điều 186 và trường hợp nhận thừa kế quy định tại khoản 1 Điều 168 của Luật này;\nb) Đất không có tranh chấp;\nc) Quyền sử dụng đất không bị kê biên để bảo đảm thi hành án;\nd) Trong thời hạn sử dụng đất.\n...']</code> | <code>Để tặng cho quyền sử dụng đất thì người sử dụng đất phải đảm bảo được những điều kiện nào?</code> |
| <code>['Vốn hoạt động của hợp tác xã\n1. Vốn hoạt động của hợp tác xã, liên hiệp hợp tác xã gồm vốn góp của thành viên, hợp tác xã thành viên, vốn huy động, vốn tích lũy, các quỹ của hợp tác xã, liên hiệp hợp tác xã; các khoản trợ cấp, hỗ trợ của Nhà nước, của các tổ chức, cá nhân trong nước và nước ngoài; các khoản được tặng, cho và các nguồn thu hợp pháp khác.\n2. Điều lệ, quy chế quản lý tài chính của hợp tác xã, liên hiệp hợp tác xã quy định cụ thể việc quản lý, sử dụng vốn hoạt động của hợp tác xã, liên hiệp hợp tác xã phù hợp với quy định của Luật Hợp tác xã và quy định của pháp luật có liên quan.']</code> | <code>Vốn hoạt động của hợp tác xã bao gồm những nguồn nào?</code> |
| <code>['Về kỹ năng\n- Sử dụng được công nghệ thông tin cơ bản theo quy định;\n- Xác định được yêu cầu của hệ thống cơ sở dữ liệu;\n- Cài đặt thành thạo phần mềm quản trị cơ sở dữ liệu;\n- Khai thác hiệu suất cao hệ thống cơ sở dữ liệu;\n- Quản lý an toàn hệ thống cơ sở dữ liệu;\n- Bảo trì được hệ thống;\n- Bảo mật được hệ thống cơ sở dữ liệu;\n- Nâng cấp được hệ thống cơ sở dữ liệu;\n- Xây dựng được ứng dụng;\n- Tích hợp được các hệ thống cơ sở dữ liệu;\n- Bảo trì, sửa chữa, nâng cấp được phần mềm và phần cứng của hệ thống mạng;\n- Xây dựng được các ứng dụng đơn giản trên hệ thống mạng;\n- Ghi được nhật ký cũng như báo cáo công việc, tiến độ công việc;\n- Thực hiện được các biện pháp vệ sinh công nghiệp, an toàn lao động;\n- Giao tiếp hiệu quả thông qua viết, thuyết trình, thảo luận, đàm phán, làm chủ tình huống;\n- Giám sát hệ thống công nghệ thông tin vừa và nhỏ;\n- Sử dụng được công nghệ thông tin cơ bản theo quy định; ứng dụng công nghệ thông tin trong một số công việc chuyên môn của ngành, nghề;\n- Sử dụng được ngoại ngữ cơ bản, đạt bậc 1/6 trong Khung năng lực ngoại ngữ của Việt Nam; ứng dụng được ngoại ngữ vào một số công việc chuyên môn của ngành, nghề.']</code> | <code>Người học ngành quản trị cơ sở dữ liệu trình độ trung cấp sau khi tốt nghiệp phải có kỹ năng ngoại ngữ như thế nào?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Evaluation Dataset
#### json
* Dataset: json
* Size: 11,946 evaluation samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 31 tokens</li><li>mean: 210.02 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 24.98 tokens</li><li>max: 64 tokens</li></ul> |
* Samples:
| positive | anchor |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>['Miễn nhiệm, cách chức Trưởng ban kiểm soát, Kiểm soát viên\n1. Trưởng ban kiểm soát, Kiểm soát viên bị miễn nhiệm trong các trường hợp sau đây:\na) Không còn đủ tiêu chuẩn và điều kiện theo quy định tại Điều 23 của Điều lệ này;\nb) Có đơn xin từ chức và được cơ quan đại diện chủ sở hữu chấp thuận;\nc) Được cơ quan đại diện chủ sở hữu hoặc cơ quan có thẩm quyền khác điều động, phân công thực hiện nhiệm vụ khác;\nd) Trường hợp khác theo quy định của pháp luật.\n...']</code> | <code>Việc miễn nhiệm Trưởng Ban kiểm soát Tổng công ty Giấy Việt Nam được thực hiện khi nào?</code> |
| <code>['Cấp giấy phép hoạt động tư vấn chuyên ngành điện thuộc thẩm quyền cấp của địa phương\n...\nc) Thành phần hồ sơ:\n- Văn bản đề nghị cấp giấy phép hoạt động điện lực theo Mẫu 01 quy định tại Phụ lục ban hành kèm theo Thông tư số 21/2020/TT-BCT .\n- Bản sao Giấy chứng nhận đăng ký doanh nghiệp hoặc Quyết định thành lập, Giấy chứng nhận thành lập (đối với các tổ chức không có Giấy chứng nhận đăng ký doanh nghiệp) của tổ chức đề nghị cấp giấy phép.\n- Danh sách trích ngang chuyên gia tư vấn đảm nhiệm chức danh chủ nhiệm, chức danh giám sát trưởng và các chuyên gia tư vấn khác theo Mẫu 3a quy định tại Phụ lục ban hành kèm theo Thông tư số 21/2020/TT-BCT ; bản sao bằng tốt nghiệp đại học trở lên, chứng chỉ hành nghề hoạt động xây dựng, hợp đồng lao động xác định thời hạn hoặc không xác định thời hạn của các chuyên gia tư vấn.\n- Tài liệu chứng minh kinh nghiệm của các chuyên gia tư vấn (Quyết định phân công nhiệm vụ, giấy xác nhận của các đơn vị có dự án mà chuyên gia đã thực hiện hoặc các tài liệu có giá trị tương đương).\n...']</code> | <code>Cần chuẩn bị những giấy tờ gì để thực hiện thủ tục cấp giấy phép hoạt động tư vấn thiết kế công trình đường dây và trạm biến áp có cấp điện áp đến 35kV?</code> |
| <code>['Điều 41. Tạm hoãn gọi nhập ngũ và miễn gọi nhập ngũ\n1. Tạm hoãn gọi nhập ngũ đối với những công dân sau đây:\na) Chưa đủ sức khỏe phục vụ tại ngũ theo kết luận của Hội đồng khám sức khỏe;\nb) Là lao động duy nhất phải trực tiếp nuôi dưỡng thân nhân không còn khả năng lao động hoặc chưa đến tuổi lao động; trong gia đình bị thiệt hại nặng về người và tài sản do tai nạn, thiên tai, dịch bệnh nguy hiểm gây ra được Ủy ban nhân dân cấp xã xác nhận;\nc) Một con của bệnh binh, người nhiễm chất độc da cam suy giảm khả năng lao động từ 61% đến 80%;\nd) Có anh, chị hoặc em ruột là hạ sĩ quan, binh sĩ đang phục vụ tại ngũ; hạ sĩ quan, chiến sĩ thực hiện nghĩa vụ tham gia Công an nhân dân;\nđ) Người thuộc diện di dân, giãn dân trong 03 năm đầu đến các xã đặc biệt khó khăn theo dự án phát triển kinh tế - xã hội của Nhà nước do Ủy ban nhân dân cấp tỉnh trở lên quyết định;\ne) Cán bộ, công chức, viên chức, thanh niên xung phong được điều động đến công tác, làm việc ở vùng có điều kiện kinh tế - xã hội đặc biệt khó khăn theo quy định của pháp luật;\ng) Đang học tại cơ sở giáo dục phổ thông; đang được đào tạo trình độ đại học hệ chính quy thuộc cơ sở giáo dục đại học, trình độ cao đẳng hệ chính quy thuộc cơ sở giáo dục nghề nghiệp trong thời gian một khóa đào tạo của một trình độ đào tạo.\nh) Dân quân thường trực.\n2. Miễn gọi nhập ngũ đối với những công dân sau đây:\na) Con của liệt sĩ, con của thương binh hạng một;\nb) Một anh hoặc một em trai của liệt sĩ;\nc) Một con của thương binh hạng hai; một con của bệnh binh suy giảm khả năng lao động từ 81% trở lên; một con của người nhiễm chất độc da cam suy giảm khả năng lao động từ 81 % trở lên;\nd) Người làm công tác cơ yếu không phải là quân nhân, Công an nhân dân;\nđ) Cán bộ, công chức, viên chức, thanh niên xung phong được điều động đến công tác, làm việc ở vùng có điều kiện kinh tế - xã hội đặc biệt khó khăn theo quy định của pháp luật từ 24 tháng trở lên.\n3. Công dân thuộc diện tạm hoãn gọi nhập ngũ quy định tại khoản 1 Điều này, nếu không còn lý do tạm hoãn thì được gọi nhập ngũ.\nCông dân thuộc diện được tạm hoãn gọi nhập ngũ hoặc được miễn gọi nhập ngũ quy định tại khoản 1 và khoản 2 Điều này, nếu tình nguyện thì được xem xét tuyển chọn và gọi nhập ngũ.\n4. Danh sách công dân thuộc diện được tạm hoãn gọi nhập ngũ, được miễn gọi nhập ngũ phải được niêm yết công khai tại trụ sở Ủy ban nhân dân cấp xã, cơ quan, tổ chức trong thời hạn 20 ngày.']</code> | <code>Liên quan đến tạm hoãn nghĩa vụ quân sự được pháp luật quy định như thế nào?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:------:|:----:|:-------------:|:------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.0952 | 10 | 2.1759 | - | - | - | - | - | - |
| 0.1905 | 20 | 1.4526 | - | - | - | - | - | - |
| 0.2857 | 30 | 1.4855 | - | - | - | - | - | - |
| 0.3810 | 40 | 1.5256 | - | - | - | - | - | - |
| 0.4762 | 50 | 1.6203 | - | - | - | - | - | - |
| 0.5714 | 60 | 1.6302 | - | - | - | - | - | - |
| 0.6667 | 70 | 1.8354 | - | - | - | - | - | - |
| 0.7619 | 80 | 1.4928 | - | - | - | - | - | - |
| 0.8571 | 90 | 1.6114 | - | - | - | - | - | - |
| 0.9524 | 100 | 1.5655 | - | - | - | - | - | - |
| 1.0 | 105 | - | 1.4307 | 0.3218 | 0.3568 | 0.3781 | 0.2440 | 0.3808 |
| 1.0476 | 110 | 1.4171 | - | - | - | - | - | - |
| 1.1429 | 120 | 1.572 | - | - | - | - | - | - |
| 1.2381 | 130 | 1.3337 | - | - | - | - | - | - |
| 1.3333 | 140 | 1.2587 | - | - | - | - | - | - |
| 1.4286 | 150 | 1.3038 | - | - | - | - | - | - |
| 1.5238 | 160 | 1.5032 | - | - | - | - | - | - |
| 1.6190 | 170 | 1.1601 | - | - | - | - | - | - |
| 1.7143 | 180 | 1.2226 | - | - | - | - | - | - |
| 1.8095 | 190 | 1.1545 | - | - | - | - | - | - |
| 1.9048 | 200 | 1.2034 | - | - | - | - | - | - |
| 2.0 | 210 | 1.0695 | 1.1034 | 0.3218 | 0.3568 | 0.3781 | 0.2440 | 0.3808 |
| 2.0952 | 220 | 1.0259 | - | - | - | - | - | - |
| 2.1905 | 230 | 0.8647 | - | - | - | - | - | - |
| 2.2857 | 240 | 0.901 | - | - | - | - | - | - |
| 2.3810 | 250 | 0.9261 | - | - | - | - | - | - |
| 2.4762 | 260 | 0.8719 | - | - | - | - | - | - |
| 2.5714 | 270 | 0.8008 | - | - | - | - | - | - |
| 2.6667 | 280 | 0.7091 | - | - | - | - | - | - |
| 2.7619 | 290 | 0.6592 | - | - | - | - | - | - |
| 2.8571 | 300 | 0.69 | - | - | - | - | - | - |
| 2.9524 | 310 | 0.739 | - | - | - | - | - | - |
| 3.0 | 315 | - | 0.8128 | 0.3218 | 0.3568 | 0.3781 | 0.2440 | 0.3808 |
| 3.0476 | 320 | 0.6944 | - | - | - | - | - | - |
| 3.1429 | 330 | 0.6414 | - | - | - | - | - | - |
| 3.2381 | 340 | 0.5874 | - | - | - | - | - | - |
| 3.3333 | 350 | 0.5979 | - | - | - | - | - | - |
| 3.4286 | 360 | 0.5409 | - | - | - | - | - | - |
| 3.5238 | 370 | 0.576 | - | - | - | - | - | - |
| 3.6190 | 380 | 0.5371 | - | - | - | - | - | - |
| 3.7143 | 390 | 0.5107 | - | - | - | - | - | - |
| 3.8095 | 400 | 0.4904 | - | - | - | - | - | - |
| 3.9048 | 410 | 0.5444 | - | - | - | - | - | - |
| 4.0 | 420 | 0.5389 | - | - | - | - | - | - |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.3.1+cu121
- Accelerate: 1.0.1
- Datasets: 2.19.1
- Tokenizers: 0.20.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
abhishkgoel/gita-text-generation-gpt2
|
abhishkgoel
| 2024-11-01T08:06:12Z | 142 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-01T08:05:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
repleeka/eng-taw-nmt
|
repleeka
| 2024-11-01T08:05:58Z | 112 | 0 |
transformers
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"tawra (Digaro Mishmi)",
"english",
"NMT",
"translation",
"en",
"taw",
"base_model:repleeka/eng-tagin-nmt",
"base_model:finetune:repleeka/eng-tagin-nmt",
"license:cc-by-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-11-01T04:13:22Z |
---
license: cc-by-nd-4.0
language:
- en
- taw
metrics:
- bleu
base_model:
- repleeka/eng-tagin-nmt
pipeline_tag: translation
library_name: transformers
tags:
- tawra (Digaro Mishmi)
- english
- NMT
---
# Model Card for Model ID
Digaro Mishmi, also known as Tawra, Taoran, Taraon, or Darang, is a member of the Digarish language family, spoken by the Mishmi people in northeastern Arunachal Pradesh, India, and parts of Zayü County, Tibet, China. The language has several autonyms, including tɑ31 rɑŋ53 or da31 raŋ53 in Arunachal Pradesh, and tɯŋ53 in China, where the Deng (登) also refer to the language. The language holds an essential place in the Anjaw district of Arunachal Pradesh, spoken in Hayuliang, Changlagam, and Goiliang circles, as well as in the Dibang Valley district and parts of Assam. Although Ethnologue’s 2001 census estimated around 35,000 native speakers, Digaro Mishmi remains critically under-resourced in terms of computational linguistics and digital preservation.
- source: Wikipedia
## Model Details
### Model Description
- **Developed by:** Tungon Dugi and Rushanti Kri
- **Dataset by:** Miss Rushanti Kri
- **Affiliation:** National Institute of Technology Arunachal Pradesh, India
- **Email:** [tungondugi@gmail.com](mailto:tungondugi@gmail.com) or [tungon.phd24@nitap.ac.in](mailto:tungon.phd24@nitap.ac.in)
- **Model type:** Translation
- **Language(s) (NLP):** English (en) and Tawra (taw)
- **Finetuned from model:** repleeka/eng-tagin-nmt
### Direct Use
This model can be used for translation and text-to-text generation.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("repleeka/eng-taw-nmt")
model = AutoModelForSeq2SeqLM.from_pretrained("repleeka/eng-taw-nmt")
```
## Training Details
### Training Data
[English-Tawra Corpus by Rushanti Kri](#)
## Evaluation
The model achieved the following metrics after 10 training epochs:
| Metric | Value |
|----------------------|-------------------|
| BLEU Score | 0.25157 |
| Evaluation Runtime | 644.278 seconds |
The model’s BLEU score suggests promising results, with the low evaluation loss indicating strong translation performance on the English-Tawra Corpus, suitable for practical applications. This model represents a significant advancement for Tawra language resources, enabling English-to-Tawra translation in NLP applications.
#### Summary
The `eng_taw_nmt` model is currently in its early phase of development. To enhance its performance, it requires a more substantial dataset and improved training resources. This would facilitate better generalization and accuracy in translating between English and Tawra, addressing the challenges faced by this extremely low-resource language. As the model evolves, ongoing efforts will be necessary to refine its capabilities further.
|
raju-alluri/gita-text-generation-gpt2
|
raju-alluri
| 2024-11-01T07:59:11Z | 143 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-01T07:58:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct-GGUF
|
mradermacher
| 2024-11-01T07:54:16Z | 29 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:DavidAU/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct",
"base_model:quantized:DavidAU/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2024-10-31T04:42:05Z |
---
base_model: DavidAU/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DavidAU/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct-GGUF/resolve/main/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct.Q2_K.gguf) | Q2_K | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct-GGUF/resolve/main/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct.Q3_K_S.gguf) | Q3_K_S | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct-GGUF/resolve/main/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct.Q3_K_M.gguf) | Q3_K_M | 9.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct-GGUF/resolve/main/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct.Q3_K_L.gguf) | Q3_K_L | 9.9 | |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct-GGUF/resolve/main/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct.IQ4_XS.gguf) | IQ4_XS | 10.3 | |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct-GGUF/resolve/main/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct.Q4_K_S.gguf) | Q4_K_S | 10.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct-GGUF/resolve/main/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct.Q4_K_M.gguf) | Q4_K_M | 11.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct-GGUF/resolve/main/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct.Q5_K_S.gguf) | Q5_K_S | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct-GGUF/resolve/main/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct.Q5_K_M.gguf) | Q5_K_M | 13.3 | |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct-GGUF/resolve/main/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct.Q6_K.gguf) | Q6_K | 15.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct-GGUF/resolve/main/MN-WORDSTORM-pt6-RCM-The-Writer-18.5B-Instruct.Q8_0.gguf) | Q8_0 | 19.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Natthaphon/thaicapgen-clip-phayathai
|
Natthaphon
| 2024-11-01T07:41:52Z | 16 | 0 | null |
[
"safetensors",
"clip-encoder-decoder",
"image-to-text",
"image-captioning",
"custom_code",
"th",
"region:us"
] |
image-to-text
| 2024-11-01T04:22:32Z |
---
tags:
- image-to-text
- image-captioning
language:
- th
---
# Thai Image Captioning
Encoder-decoder style image captioning model using [CLIP encoder](https://huggingface.co/openai/clip-vit-base-patch32) and [PhayathaiBert](https://huggingface.co/clicknext/phayathaibert). Trained on Thai language MSCOCO and IPU24 dataset.
# Usage
Use `AutoModel` to load it. Requires `trust_remote_code=True`.
```python
from transformers import AutoModel, AutoImageProcessor, AutoTokenizer
device = 'cuda'
gen_kwargs = {"max_length": 120, "num_beams": 4}
model_path = 'Natthaphon/thaicapgen-clip-gpt2'
feature_extractor = AutoImageProcessor.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModel.from_pretrained(model_path, trust_remote_code=True).to(device)
pixel_values = feature_extractor(images=[Image.open(image_path)], return_tensors="pt").pixel_values
pixel_values = pixel_values.to(device)
output_ids = model.generate(pixel_values, **gen_kwargs)
preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
```
# Acknowledgement
This work is partially supported by the Program Management Unit for Human Resources & Institutional Development, Research and Innovation (PMU-B) [Grant number B04G640107]
|
Natthaphon/thaicapgen-clip-gpt2
|
Natthaphon
| 2024-11-01T07:41:22Z | 178 | 0 | null |
[
"safetensors",
"clip-encoder-decoder",
"image-to-text",
"image-captioning",
"custom_code",
"th",
"region:us"
] |
image-to-text
| 2024-10-30T05:25:16Z |
---
tags:
- image-to-text
- image-captioning
language:
- th
---
# Thai Image Captioning
Encoder-decoder style image captioning model using [CLIP encoder](https://huggingface.co/openai/clip-vit-base-patch32) and [GPT2](https://huggingface.co/openai-community/gpt2). Trained on Thai language MSCOCO and IPU24 dataset.
# Usage
Use `AutoModel` to load it. Requires `trust_remote_code=True`.
```python
from transformers import AutoModel, AutoImageProcessor, AutoTokenizer
device = 'cuda'
gen_kwargs = {"max_length": 120, "num_beams": 4}
model_path = 'Natthaphon/thaicapgen-clip-gpt2'
feature_extractor = AutoImageProcessor.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModel.from_pretrained(model_path, trust_remote_code=True).to(device)
pixel_values = feature_extractor(images=[Image.open(image_path)], return_tensors="pt").pixel_values
pixel_values = pixel_values.to(device)
output_ids = model.generate(pixel_values, **gen_kwargs)
preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
```
# Acknowledgement
This work is partially supported by the Program Management Unit for Human Resources & Institutional Development, Research and Innovation (PMU-B) [Grant number B04G640107]
|
Natthaphon/thaicapgen-swin-wangchan
|
Natthaphon
| 2024-11-01T07:41:01Z | 105 | 0 | null |
[
"safetensors",
"clip-encoder-decoder",
"image-to-text",
"image-captioning",
"custom_code",
"th",
"region:us"
] |
image-to-text
| 2024-10-30T03:57:13Z |
---
tags:
- image-to-text
- image-captioning
language:
- th
---
# Thai Image Captioning
Encoder-decoder style image captioning model using [Swin-L](https://huggingface.co/microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft) and [Wangchanberta](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased). Trained on Thai language MSCOCO and IPU24 dataset.
# Usage
With `VisionEncoderDecoderModel`.
```python
from transformers import VisionEncoderDecoderModel, AutoImageProcessor, AutoTokenizer
device = 'cuda'
gen_kwargs = {"max_length": 120, "num_beams": 4}
model_path = 'Natthaphon/thaicapgen-swin-wangchan'
feature_extractor = AutoImageProcessor.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = VisionEncoderDecoderModel.from_pretrained(model_path).to(device)
pixel_values = feature_extractor(images=[Image.open(image_path)], return_tensors="pt").pixel_values
pixel_values = pixel_values.to(device)
output_ids = model.generate(pixel_values, **gen_kwargs)
preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
```
You can also use `AutoModel` to load it. But this requires `trust_remote_code=True`.
```python
from transformers import AutoModel
model_path = 'Natthaphon/thaicapgen-swin-wangchan'
model = AutoModel.from_pretrained(model_path, trust_remote_code=True).to(device)
```
# Acknowledgement
This work is partially supported by the Program Management Unit for Human Resources & Institutional Development, Research and Innovation (PMU-B) [Grant number B04G640107]
|
Xu-Ouyang/pythia-12b-deduped-int4-step2-GPTQ-wikitext2
|
Xu-Ouyang
| 2024-11-01T07:36:37Z | 75 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-11-01T07:25:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/ReWiz-Nemo-12B-Instruct-GGUF
|
mradermacher
| 2024-11-01T07:36:11Z | 14 | 1 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"sft",
"en",
"base_model:theprint/ReWiz-Nemo-12B-Instruct",
"base_model:quantized:theprint/ReWiz-Nemo-12B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-31T04:31:39Z |
---
base_model: theprint/ReWiz-Nemo-12B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/theprint/ReWiz-Nemo-12B-Instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ReWiz-Nemo-12B-Instruct-GGUF/resolve/main/ReWiz-Nemo-12B-Instruct.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/seedboxai_-_KafkaLM-13B-German-V0.1-DPO-gguf
|
RichardErkhov
| 2024-11-01T07:25:00Z | 130 | 0 | null |
[
"gguf",
"region:us"
] | null | 2024-11-01T04:29:52Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
KafkaLM-13B-German-V0.1-DPO - GGUF
- Model creator: https://huggingface.co/seedboxai/
- Original model: https://huggingface.co/seedboxai/KafkaLM-13B-German-V0.1-DPO/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [KafkaLM-13B-German-V0.1-DPO.Q2_K.gguf](https://huggingface.co/RichardErkhov/seedboxai_-_KafkaLM-13B-German-V0.1-DPO-gguf/blob/main/KafkaLM-13B-German-V0.1-DPO.Q2_K.gguf) | Q2_K | 4.52GB |
| [KafkaLM-13B-German-V0.1-DPO.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/seedboxai_-_KafkaLM-13B-German-V0.1-DPO-gguf/blob/main/KafkaLM-13B-German-V0.1-DPO.Q3_K_S.gguf) | Q3_K_S | 1.51GB |
| [KafkaLM-13B-German-V0.1-DPO.Q3_K.gguf](https://huggingface.co/RichardErkhov/seedboxai_-_KafkaLM-13B-German-V0.1-DPO-gguf/blob/main/KafkaLM-13B-German-V0.1-DPO.Q3_K.gguf) | Q3_K | 5.9GB |
| [KafkaLM-13B-German-V0.1-DPO.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/seedboxai_-_KafkaLM-13B-German-V0.1-DPO-gguf/blob/main/KafkaLM-13B-German-V0.1-DPO.Q3_K_M.gguf) | Q3_K_M | 5.9GB |
| [KafkaLM-13B-German-V0.1-DPO.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/seedboxai_-_KafkaLM-13B-German-V0.1-DPO-gguf/blob/main/KafkaLM-13B-German-V0.1-DPO.Q3_K_L.gguf) | Q3_K_L | 6.45GB |
| [KafkaLM-13B-German-V0.1-DPO.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/seedboxai_-_KafkaLM-13B-German-V0.1-DPO-gguf/blob/main/KafkaLM-13B-German-V0.1-DPO.IQ4_XS.gguf) | IQ4_XS | 0.76GB |
| [KafkaLM-13B-German-V0.1-DPO.Q4_0.gguf](https://huggingface.co/RichardErkhov/seedboxai_-_KafkaLM-13B-German-V0.1-DPO-gguf/blob/main/KafkaLM-13B-German-V0.1-DPO.Q4_0.gguf) | Q4_0 | 6.7GB |
| [KafkaLM-13B-German-V0.1-DPO.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/seedboxai_-_KafkaLM-13B-German-V0.1-DPO-gguf/blob/main/KafkaLM-13B-German-V0.1-DPO.IQ4_NL.gguf) | IQ4_NL | 4.08GB |
| [KafkaLM-13B-German-V0.1-DPO.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/seedboxai_-_KafkaLM-13B-German-V0.1-DPO-gguf/blob/main/KafkaLM-13B-German-V0.1-DPO.Q4_K_S.gguf) | Q4_K_S | 4.56GB |
| [KafkaLM-13B-German-V0.1-DPO.Q4_K.gguf](https://huggingface.co/RichardErkhov/seedboxai_-_KafkaLM-13B-German-V0.1-DPO-gguf/blob/main/KafkaLM-13B-German-V0.1-DPO.Q4_K.gguf) | Q4_K | 5.35GB |
| [KafkaLM-13B-German-V0.1-DPO.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/seedboxai_-_KafkaLM-13B-German-V0.1-DPO-gguf/blob/main/KafkaLM-13B-German-V0.1-DPO.Q4_K_M.gguf) | Q4_K_M | 5.54GB |
| [KafkaLM-13B-German-V0.1-DPO.Q4_1.gguf](https://huggingface.co/RichardErkhov/seedboxai_-_KafkaLM-13B-German-V0.1-DPO-gguf/blob/main/KafkaLM-13B-German-V0.1-DPO.Q4_1.gguf) | Q4_1 | 7.61GB |
| [KafkaLM-13B-German-V0.1-DPO.Q5_0.gguf](https://huggingface.co/RichardErkhov/seedboxai_-_KafkaLM-13B-German-V0.1-DPO-gguf/blob/main/KafkaLM-13B-German-V0.1-DPO.Q5_0.gguf) | Q5_0 | 8.36GB |
| [KafkaLM-13B-German-V0.1-DPO.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/seedboxai_-_KafkaLM-13B-German-V0.1-DPO-gguf/blob/main/KafkaLM-13B-German-V0.1-DPO.Q5_K_S.gguf) | Q5_K_S | 8.36GB |
| [KafkaLM-13B-German-V0.1-DPO.Q5_K.gguf](https://huggingface.co/RichardErkhov/seedboxai_-_KafkaLM-13B-German-V0.1-DPO-gguf/blob/main/KafkaLM-13B-German-V0.1-DPO.Q5_K.gguf) | Q5_K | 8.6GB |
| [KafkaLM-13B-German-V0.1-DPO.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/seedboxai_-_KafkaLM-13B-German-V0.1-DPO-gguf/blob/main/KafkaLM-13B-German-V0.1-DPO.Q5_K_M.gguf) | Q5_K_M | 8.6GB |
| [KafkaLM-13B-German-V0.1-DPO.Q5_1.gguf](https://huggingface.co/RichardErkhov/seedboxai_-_KafkaLM-13B-German-V0.1-DPO-gguf/blob/main/KafkaLM-13B-German-V0.1-DPO.Q5_1.gguf) | Q5_1 | 9.1GB |
| [KafkaLM-13B-German-V0.1-DPO.Q6_K.gguf](https://huggingface.co/RichardErkhov/seedboxai_-_KafkaLM-13B-German-V0.1-DPO-gguf/blob/main/KafkaLM-13B-German-V0.1-DPO.Q6_K.gguf) | Q6_K | 9.95GB |
| [KafkaLM-13B-German-V0.1-DPO.Q8_0.gguf](https://huggingface.co/RichardErkhov/seedboxai_-_KafkaLM-13B-German-V0.1-DPO-gguf/blob/main/KafkaLM-13B-German-V0.1-DPO.Q8_0.gguf) | Q8_0 | 12.88GB |
Original model description:
---
library_name: transformers
tags:
- llama2
- deutsch
- german
- seedbox
license: llama2
datasets:
- seedboxai/multitask_german_examples_32k
- seedboxai/ultra_feedback_german_modified_v1
language:
- de
pipeline_tag: text-generation
---

# KafkaLM-13B-German-V0.1
**KafkaLM 13b** is a Llama2 13b model - further pre-trained on a large German dataset from Björn Plüster and LAION. [leo-hessianai-13b](https://huggingface.co/LeoLM/leo-hessianai-13b) - which was finetuned on an ensemble of popular high-quality open-source instruction sets (translated from English to German).
KafkaLM 13b is a [Seedbox](https://huggingface.co/seedboxai) project trained by [Dennis Dickmann](https://huggingface.co/doubledsbv).
**Why Kafka?**
The models are proficient, yet creative, and have some tendencies to linguistically push boundaries 😊
## Model Details
The purpose of releasing the **KafkaLM series** is to contribute to the German AI community with a set of fine-tuned LLMs that are easy to use in everyday applications across a variety of tasks.
The main goal was to provide LLMs proficient in German, especially to be used in German-speaking business contexts where English alone is not sufficient.
### DPO
The model has been aligned with a german and modified version of the ultra feedback dataset from huggingface.
### Dataset
I used a 8k filtered version of the following [seedboxai/multitask_german_examples_32k](https://huggingface.co/datasets/seedboxai/multitask_german_examples_32k)
### Prompt Format
This model follows the subsequent prompt format:
```
<|system|>
Du bist ein freundlicher und hilfsbereiter KI-Assistent. Du beantwortest Fragen faktenorientiert und präzise, ohne dabei relevante Fakten auszulassen.</s>
<|user|>
Welche Möglichkeiten der energetischen Sanierung habe ich neben Solar und Energiespeicher?</s>
<|assistant|>
```
### Inference
Getting started with the model is straightforward
```python
import transformers
model_id = "seedboxai/KafkaLM-13B-German-V0.1-DPO"
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.padding_side = "right"
tokenizer.pad_token = tokenizer.unk_token
tokenizer.add_eos_token = False
def generate_prompt(input):
prompt = ''
sys_prompt = "Du bist ein freundlicher und hilfsbereiter KI-Assistent. Du beantwortest Fragen faktenorientiert und präzise, ohne dabei relevante Fakten auszulassen."
prompt += f"<|system|>\n{sys_prompt.strip()}</s>\n"
prompt += f"<|user|>\n{input.strip()}</s>\n"
prompt += f"<|assistant|>\n"
return prompt.strip()
generate_text = transformers.pipeline(
model=model, tokenizer=tokenizer,
return_full_text=True,
task='text-generation',
temperature=0.5,
max_new_tokens=512,
top_p=0.95,
top_k=50,
do_sample=True,
)
print(generate_text(generate_prompt("Wer ist eigentlich dieser Kafka?")))
```
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model.
This model should only be used for research purposes. The original Llama2 license and all restrictions of datasets used to train this model apply.
|
gaianet/internlm2_5-7b-chat-1m-GGUF
|
gaianet
| 2024-11-01T07:17:07Z | 110 | 0 | null |
[
"gguf",
"internlm2",
"text-generation",
"custom_code",
"base_model:internlm/internlm2_5-7b-chat-1m",
"base_model:quantized:internlm/internlm2_5-7b-chat-1m",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-10-29T07:04:21Z |
---
license: other
pipeline_tag: text-generation
base_model: internlm/internlm2_5-7b-chat-1m
model_creator: InternLM
model_name: internlm2_5-7b-chat-1m
quantized_by: Second State Inc.
---

# internlm2_5-7b-chat-1m-GGUF
## Original Model
[internlm/internlm2_5-7b-chat-1m](https://huggingface.co/internlm/internlm2_5-7b-chat-1m)
## Run with Gaianet
**Prompt template:**
prompt template: `chatml`
**Context size:**
chat_ctx_size: `1000000`
**Run with GaiaNet:**
- Quick start: https://docs.gaianet.ai/node-guide/quick-start
- Customize your node: https://docs.gaianet.ai/node-guide/customize
*Quantized with llama.cpp b3933*
|
chnaaam/gemma2-9b-it-oie-v0.1
|
chnaaam
| 2024-11-01T07:12:31Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-01T07:07:42Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Nekodigi/rose
|
Nekodigi
| 2024-11-01T07:09:53Z | 29 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-10-30T01:03:46Z |
---
base_model: CompVis/stable-diffusion-v1-4
library_name: diffusers
license: creativeml-openrail-m
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
inference: true
instance_prompt: a photo of rose
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - Nekodigi/rose
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of rose using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
Antihero29/MeganLoraFlux
|
Antihero29
| 2024-11-01T07:08:16Z | 6 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-11-01T07:04:56Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/80719079-ca0a-4388-b72f-2aa03924a365.png
- text: '-'
output:
url: images/5591425c-d6c2-4acc-b4fb-c4675b27a5b8.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: creativeml-openrail-m
---
# Megan Loras
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Antihero29/MeganLoraFlux/tree/main) them in the Files & versions tab.
|
jbsh/Single_GPU_test_Llama3-8B
|
jbsh
| 2024-11-01T07:07:03Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-01T06:56:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/Undi95_-_UndiMix-v3-13B-gguf
|
RichardErkhov
| 2024-11-01T06:57:47Z | 28 | 0 | null |
[
"gguf",
"region:us"
] | null | 2024-11-01T03:42:37Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
UndiMix-v3-13B - GGUF
- Model creator: https://huggingface.co/Undi95/
- Original model: https://huggingface.co/Undi95/UndiMix-v3-13B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [UndiMix-v3-13B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UndiMix-v3-13B-gguf/blob/main/UndiMix-v3-13B.Q2_K.gguf) | Q2_K | 4.52GB |
| [UndiMix-v3-13B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UndiMix-v3-13B-gguf/blob/main/UndiMix-v3-13B.Q3_K_S.gguf) | Q3_K_S | 5.27GB |
| [UndiMix-v3-13B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UndiMix-v3-13B-gguf/blob/main/UndiMix-v3-13B.Q3_K.gguf) | Q3_K | 5.9GB |
| [UndiMix-v3-13B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UndiMix-v3-13B-gguf/blob/main/UndiMix-v3-13B.Q3_K_M.gguf) | Q3_K_M | 5.9GB |
| [UndiMix-v3-13B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UndiMix-v3-13B-gguf/blob/main/UndiMix-v3-13B.Q3_K_L.gguf) | Q3_K_L | 3.86GB |
| [UndiMix-v3-13B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UndiMix-v3-13B-gguf/blob/main/UndiMix-v3-13B.IQ4_XS.gguf) | IQ4_XS | 5.35GB |
| [UndiMix-v3-13B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UndiMix-v3-13B-gguf/blob/main/UndiMix-v3-13B.Q4_0.gguf) | Q4_0 | 6.86GB |
| [UndiMix-v3-13B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UndiMix-v3-13B-gguf/blob/main/UndiMix-v3-13B.IQ4_NL.gguf) | IQ4_NL | 2.72GB |
| [UndiMix-v3-13B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UndiMix-v3-13B-gguf/blob/main/UndiMix-v3-13B.Q4_K_S.gguf) | Q4_K_S | 3.77GB |
| [UndiMix-v3-13B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UndiMix-v3-13B-gguf/blob/main/UndiMix-v3-13B.Q4_K.gguf) | Q4_K | 7.33GB |
| [UndiMix-v3-13B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UndiMix-v3-13B-gguf/blob/main/UndiMix-v3-13B.Q4_K_M.gguf) | Q4_K_M | 3.1GB |
| [UndiMix-v3-13B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UndiMix-v3-13B-gguf/blob/main/UndiMix-v3-13B.Q4_1.gguf) | Q4_1 | 0.93GB |
| [UndiMix-v3-13B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UndiMix-v3-13B-gguf/blob/main/UndiMix-v3-13B.Q5_0.gguf) | Q5_0 | 7.03GB |
| [UndiMix-v3-13B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UndiMix-v3-13B-gguf/blob/main/UndiMix-v3-13B.Q5_K_S.gguf) | Q5_K_S | 4.95GB |
| [UndiMix-v3-13B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UndiMix-v3-13B-gguf/blob/main/UndiMix-v3-13B.Q5_K.gguf) | Q5_K | 8.6GB |
| [UndiMix-v3-13B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UndiMix-v3-13B-gguf/blob/main/UndiMix-v3-13B.Q5_K_M.gguf) | Q5_K_M | 8.6GB |
| [UndiMix-v3-13B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UndiMix-v3-13B-gguf/blob/main/UndiMix-v3-13B.Q5_1.gguf) | Q5_1 | 9.1GB |
| [UndiMix-v3-13B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UndiMix-v3-13B-gguf/blob/main/UndiMix-v3-13B.Q6_K.gguf) | Q6_K | 9.95GB |
| [UndiMix-v3-13B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Undi95_-_UndiMix-v3-13B-gguf/blob/main/UndiMix-v3-13B.Q8_0.gguf) | Q8_0 | 12.88GB |
Original model description:
---
license: cc-by-nc-4.0
---
<!-- description start -->
## Description
This repo contains fp16 files of personal mix : "UndiMix-v3".
It can be hot, serious, playful, and can use emoji thanks to llama-2-13b-chat-limarp-v2-merged.
What change from V2 is the that I didn't use Llama-2-13B-fp16 for the base anymore, and got straight into the merging with SLERP taking ReMM-S-Kimiko-v2-13B as a base.
<!-- description end -->
<!-- description start -->
## Models used
- Undi95/ReMM-S-Kimiko-v2-13B (0.272) (base)
- The-Face-Of-Goonery/Huginn-13b-v1.2 (0.264)
- Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged (0.264)
- jondurbin/airoboros-l2-13b-2.1 (0.10)
- IkariDev/Athena-v1 (0.10)
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
Special thanks to Sushi kek
|
aiola/whisper-medusa-multilingual
|
aiola
| 2024-11-01T06:50:31Z | 11 | 1 | null |
[
"safetensors",
"whisper",
"ASR",
"Automatic Speech Recognition",
"Whisper",
"Medusa",
"Speech",
"Speculative Decoding",
"en",
"es",
"de",
"fr",
"dataset:facebook/voxpopuli",
"license:mit",
"region:us"
] | null | 2024-10-31T11:11:37Z |
---
license: mit
datasets:
- facebook/voxpopuli
tags:
- ASR
- Automatic Speech Recognition
- Whisper
- Medusa
- Speech
- Speculative Decoding
language:
- en
- es
- de
- fr
---
# Whisper Medusa
Whisper is an advanced encoder-decoder model for speech transcription and
translation, processing audio through encoding and decoding stages. Given
its large size and slow inference speed, various optimization strategies like
Faster-Whisper and Speculative Decoding have been proposed to enhance performance.
Our Medusa model builds on Whisper by predicting multiple tokens per iteration,
which significantly improves speed with small degradation in WER. We train and
evaluate our model on various datasets, demonstrating speed improvements.
---------
## Training Details
`aiola/whisper-medusa-multilingual` was trained on the Voxpopuli dataset to perform audio translation.
The Medusa heads were optimized for English, Spanish, German, and French so for optimal performance and speed improvements, please use these languages only.
---------
## Usage
To use `aiola/whisper-medusa-multilingual` install [`whisper-medusa`](https://github.com/aiola-lab/whisper-medusa) repo following the README instructions.
Inference can be done using the following code:
```python
import torch
import torchaudio
from whisper_medusa import WhisperMedusaModel
from transformers import WhisperProcessor
model_name = "aiola/whisper-medusa-multilingual"
model = WhisperMedusaModel.from_pretrained(model_name)
processor = WhisperProcessor.from_pretrained(model_name)
path_to_audio = "path/to/audio.wav"
SAMPLING_RATE = 16000
language = "en"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
input_speech, sr = torchaudio.load(path_to_audio)
if sr != SAMPLING_RATE:
input_speech = torchaudio.transforms.Resample(sr, SAMPLING_RATE)(input_speech)
input_features = processor(input_speech.squeeze(), return_tensors="pt", sampling_rate=SAMPLING_RATE).input_features
input_features = input_features.to(device)
model = model.to(device)
model_output = model.generate(
input_features,
language=language,
)
predict_ids = model_output[0]
pred = processor.decode(predict_ids, skip_special_tokens=True)
print(pred)
```
|
JSWOOK/finetuning_model
|
JSWOOK
| 2024-11-01T06:50:01Z | 77 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-large-v3-turbo",
"base_model:finetune:openai/whisper-large-v3-turbo",
"license:mit",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-10-31T08:01:09Z |
---
library_name: transformers
license: mit
base_model: openai/whisper-large-v3-turbo
tags:
- generated_from_trainer
model-index:
- name: finetuning_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning_model
This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 750
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.20.1
|
LuongNam/donut-ocr-vn-tokenizer
|
LuongNam
| 2024-11-01T06:40:56Z | 32 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"vi",
"license:agpl-3.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-12-26T05:31:35Z |
---
license: agpl-3.0
language:
- vi
---
|
featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF
|
featherless-ai-quants
| 2024-11-01T06:39:48Z | 17 | 0 | null |
[
"gguf",
"text-generation",
"base_model:failspy/Llama-3-8B-Instruct-MopeyMule",
"base_model:quantized:failspy/Llama-3-8B-Instruct-MopeyMule",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-01T06:31:14Z |
---
base_model: failspy/Llama-3-8B-Instruct-MopeyMule
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# failspy/Llama-3-8B-Instruct-MopeyMule GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [failspy-Llama-3-8B-Instruct-MopeyMule-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q8_0.gguf) | 8145.11 MB |
| Q4_K_S | [failspy-Llama-3-8B-Instruct-MopeyMule-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q4_K_S.gguf) | 4475.28 MB |
| Q2_K | [failspy-Llama-3-8B-Instruct-MopeyMule-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q2_K.gguf) | 3031.86 MB |
| Q6_K | [failspy-Llama-3-8B-Instruct-MopeyMule-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q6_K.gguf) | 6290.44 MB |
| Q3_K_M | [failspy-Llama-3-8B-Instruct-MopeyMule-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [failspy-Llama-3-8B-Instruct-MopeyMule-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q3_K_S.gguf) | 3494.74 MB |
| Q3_K_L | [failspy-Llama-3-8B-Instruct-MopeyMule-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q3_K_L.gguf) | 4121.74 MB |
| Q4_K_M | [failspy-Llama-3-8B-Instruct-MopeyMule-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q4_K_M.gguf) | 4692.78 MB |
| Q5_K_S | [failspy-Llama-3-8B-Instruct-MopeyMule-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q5_K_S.gguf) | 5339.90 MB |
| Q5_K_M | [failspy-Llama-3-8B-Instruct-MopeyMule-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-Q5_K_M.gguf) | 5467.40 MB |
| IQ4_XS | [failspy-Llama-3-8B-Instruct-MopeyMule-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/failspy-Llama-3-8B-Instruct-MopeyMule-GGUF/blob/main/failspy-Llama-3-8B-Instruct-MopeyMule-IQ4_XS.gguf) | 4276.62 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
tatsuya-n/distilbert-base-uncased-finetuned-emotion
|
tatsuya-n
| 2024-11-01T06:35:48Z | 121 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-14T15:34:22Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.935
- name: F1
type: f1
value: 0.9351600204177617
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1588
- Accuracy: 0.935
- F1: 0.9352
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2063 | 1.0 | 250 | 0.1769 | 0.9285 | 0.9288 |
| 0.1352 | 2.0 | 500 | 0.1588 | 0.935 | 0.9352 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.2+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
annutest/somethinglikedonut
|
annutest
| 2024-11-01T06:35:48Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-10-29T09:45:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Styxxxx/llama2_7b_lora-wnli
|
Styxxxx
| 2024-11-01T06:31:31Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-11-01T06:31:21Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Styxxxx/llama2_7b_lora-wmt16_translate_tren
|
Styxxxx
| 2024-11-01T06:30:55Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-11-01T06:30:47Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Styxxxx/llama2_7b_lora-wmt16_translate_ruen
|
Styxxxx
| 2024-11-01T06:30:20Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-11-01T06:30:13Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Styxxxx/llama2_7b_lora-wmt16_translate_roen
|
Styxxxx
| 2024-11-01T06:29:46Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-11-01T06:29:39Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Ariffiq99/Randomized_Roberta_Stacked_model_20
|
Ariffiq99
| 2024-11-01T06:29:26Z | 103 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"multiple-choice",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2024-11-01T05:51:44Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Randomized_Roberta_Stacked_model_20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Randomized_Roberta_Stacked_model_20
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9094
- F1: 0.6756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 316 | 1.0130 | 0.6156 |
| 1.1549 | 2.0 | 632 | 0.9246 | 0.6597 |
| 1.1549 | 3.0 | 948 | 0.9153 | 0.6697 |
| 0.8702 | 4.0 | 1264 | 0.9125 | 0.6720 |
| 0.7606 | 5.0 | 1580 | 0.9094 | 0.6756 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
Styxxxx/llama2_7b_lora-wmt16_translate_deen
|
Styxxxx
| 2024-11-01T06:28:37Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-11-01T06:28:29Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Styxxxx/llama2_7b_lora-wmt16_translate_csen
|
Styxxxx
| 2024-11-01T06:28:04Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-11-01T06:27:54Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
joe611/chickens-composite-403232323232-150-epochs-w-transform-metrics-test
|
joe611
| 2024-11-01T06:21:37Z | 27 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2024-10-31T23:55:07Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: chickens-composite-403232323232-150-epochs-w-transform-metrics-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chickens-composite-403232323232-150-epochs-w-transform-metrics-test
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2394
- Map: 0.8413
- Map 50: 0.9641
- Map 75: 0.9341
- Map Small: 0.3268
- Map Medium: 0.8408
- Map Large: 0.8507
- Mar 1: 0.3376
- Mar 10: 0.8711
- Mar 100: 0.8749
- Mar Small: 0.3947
- Mar Medium: 0.8792
- Mar Large: 0.881
- Map Chicken: 0.8309
- Mar 100 Chicken: 0.8738
- Map Duck: 0.7956
- Mar 100 Duck: 0.8294
- Map Plant: 0.8973
- Mar 100 Plant: 0.9215
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Chicken | Mar 100 Chicken | Map Duck | Mar 100 Duck | Map Plant | Mar 100 Plant |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:-----------:|:---------------:|:--------:|:------------:|:---------:|:-------------:|
| 1.1267 | 1.0 | 1000 | 1.2143 | 0.2128 | 0.3053 | 0.2462 | 0.0212 | 0.1247 | 0.2574 | 0.1042 | 0.3253 | 0.3593 | 0.0767 | 0.3264 | 0.3725 | 0.1029 | 0.3376 | 0.0023 | 0.0036 | 0.5333 | 0.7365 |
| 1.1423 | 2.0 | 2000 | 1.0583 | 0.2633 | 0.384 | 0.2877 | 0.0549 | 0.1816 | 0.2767 | 0.1024 | 0.3946 | 0.4425 | 0.13 | 0.4032 | 0.4552 | 0.1457 | 0.5795 | 0.0 | 0.0 | 0.6442 | 0.7479 |
| 0.9168 | 3.0 | 3000 | 0.9335 | 0.3008 | 0.4485 | 0.341 | 0.0694 | 0.2511 | 0.3019 | 0.1107 | 0.4104 | 0.4208 | 0.1154 | 0.3965 | 0.4023 | 0.2437 | 0.534 | 0.0 | 0.0 | 0.6588 | 0.7284 |
| 0.8732 | 4.0 | 4000 | 0.8630 | 0.3268 | 0.4634 | 0.3815 | 0.0599 | 0.2883 | 0.3552 | 0.1214 | 0.4573 | 0.49 | 0.1374 | 0.4629 | 0.5136 | 0.2845 | 0.7165 | 0.0 | 0.0 | 0.6958 | 0.7536 |
| 0.7036 | 5.0 | 5000 | 0.7595 | 0.3359 | 0.4642 | 0.3931 | 0.0739 | 0.2912 | 0.3764 | 0.1255 | 0.4737 | 0.5037 | 0.1679 | 0.4693 | 0.5219 | 0.2899 | 0.7404 | 0.0 | 0.0 | 0.7178 | 0.7707 |
| 0.979 | 6.0 | 6000 | 0.7069 | 0.3728 | 0.5214 | 0.4438 | 0.0768 | 0.3332 | 0.4012 | 0.1296 | 0.4833 | 0.4882 | 0.1172 | 0.4582 | 0.5065 | 0.4058 | 0.7012 | 0.0 | 0.0 | 0.7125 | 0.7633 |
| 0.7254 | 7.0 | 7000 | 0.6566 | 0.3939 | 0.5385 | 0.4648 | 0.0613 | 0.3571 | 0.4192 | 0.1369 | 0.4969 | 0.5026 | 0.153 | 0.4737 | 0.5184 | 0.4487 | 0.7254 | 0.0 | 0.0 | 0.7331 | 0.7825 |
| 0.6696 | 8.0 | 8000 | 0.6276 | 0.4213 | 0.5797 | 0.5037 | 0.0573 | 0.3791 | 0.4437 | 0.1378 | 0.4964 | 0.4996 | 0.1033 | 0.467 | 0.5195 | 0.5311 | 0.7203 | 0.0 | 0.0 | 0.7327 | 0.7786 |
| 0.6583 | 9.0 | 9000 | 0.6021 | 0.4331 | 0.5873 | 0.5108 | 0.0796 | 0.3986 | 0.4557 | 0.1397 | 0.5046 | 0.5091 | 0.1539 | 0.478 | 0.5262 | 0.5463 | 0.7264 | 0.0 | 0.0 | 0.7528 | 0.8009 |
| 0.5876 | 10.0 | 10000 | 0.5796 | 0.4447 | 0.6032 | 0.5274 | 0.1098 | 0.4084 | 0.4701 | 0.1426 | 0.5064 | 0.5112 | 0.1833 | 0.4843 | 0.5263 | 0.5865 | 0.7376 | 0.0 | 0.0 | 0.7475 | 0.7961 |
| 0.4736 | 11.0 | 11000 | 0.5645 | 0.4457 | 0.6013 | 0.5331 | 0.1092 | 0.4073 | 0.4769 | 0.1397 | 0.5073 | 0.5115 | 0.2014 | 0.4811 | 0.5278 | 0.5734 | 0.7258 | 0.0 | 0.0 | 0.7638 | 0.8087 |
| 0.6139 | 12.0 | 12000 | 0.5577 | 0.4428 | 0.5984 | 0.5244 | 0.0509 | 0.4043 | 0.4745 | 0.1392 | 0.5128 | 0.5177 | 0.1709 | 0.4876 | 0.5341 | 0.5648 | 0.7408 | 0.0 | 0.0 | 0.7636 | 0.8124 |
| 0.7356 | 13.0 | 13000 | 0.5289 | 0.4651 | 0.6128 | 0.5378 | 0.048 | 0.4393 | 0.489 | 0.146 | 0.5238 | 0.5272 | 0.1102 | 0.5075 | 0.5375 | 0.6242 | 0.762 | 0.0 | 0.0 | 0.7711 | 0.8197 |
| 0.5112 | 14.0 | 14000 | 0.5340 | 0.4658 | 0.6239 | 0.5556 | 0.0889 | 0.4309 | 0.4849 | 0.1444 | 0.5135 | 0.5164 | 0.1865 | 0.4892 | 0.5316 | 0.6346 | 0.7404 | 0.0 | 0.0 | 0.7627 | 0.8087 |
| 0.5302 | 15.0 | 15000 | 0.5103 | 0.478 | 0.6328 | 0.5738 | 0.0727 | 0.4478 | 0.4967 | 0.1478 | 0.5216 | 0.5244 | 0.2081 | 0.498 | 0.54 | 0.6635 | 0.7571 | 0.0 | 0.0 | 0.7705 | 0.816 |
| 0.4855 | 16.0 | 16000 | 0.5183 | 0.5038 | 0.6727 | 0.6016 | 0.1285 | 0.4726 | 0.5257 | 0.1738 | 0.5454 | 0.5486 | 0.2061 | 0.5213 | 0.5649 | 0.6608 | 0.7408 | 0.0921 | 0.0928 | 0.7587 | 0.8121 |
| 0.4891 | 17.0 | 17000 | 0.4886 | 0.6311 | 0.8473 | 0.7764 | 0.1735 | 0.612 | 0.6078 | 0.2544 | 0.6734 | 0.6769 | 0.2801 | 0.6651 | 0.6446 | 0.676 | 0.7467 | 0.4514 | 0.4758 | 0.7659 | 0.8082 |
| 0.5348 | 18.0 | 18000 | 0.4468 | 0.6909 | 0.8959 | 0.8328 | 0.1362 | 0.6815 | 0.6633 | 0.2864 | 0.733 | 0.7378 | 0.2592 | 0.7358 | 0.701 | 0.7055 | 0.7714 | 0.586 | 0.6165 | 0.7811 | 0.8256 |
| 0.5136 | 19.0 | 19000 | 0.4364 | 0.6945 | 0.9219 | 0.8428 | 0.1052 | 0.6706 | 0.73 | 0.2905 | 0.7425 | 0.7467 | 0.254 | 0.7333 | 0.7721 | 0.6837 | 0.7541 | 0.6198 | 0.6603 | 0.78 | 0.8256 |
| 0.7763 | 20.0 | 20000 | 0.4125 | 0.7082 | 0.9235 | 0.8439 | 0.115 | 0.6876 | 0.729 | 0.2988 | 0.7549 | 0.7599 | 0.2393 | 0.7483 | 0.7776 | 0.6954 | 0.763 | 0.6381 | 0.6794 | 0.791 | 0.8374 |
| 0.531 | 21.0 | 21000 | 0.4182 | 0.7035 | 0.9164 | 0.8414 | 0.1179 | 0.6748 | 0.7277 | 0.297 | 0.7502 | 0.7537 | 0.2354 | 0.7369 | 0.7777 | 0.6977 | 0.7618 | 0.6212 | 0.6639 | 0.7917 | 0.8354 |
| 0.5738 | 22.0 | 22000 | 0.4124 | 0.704 | 0.924 | 0.852 | 0.1622 | 0.6842 | 0.7077 | 0.2951 | 0.7477 | 0.7512 | 0.2837 | 0.7373 | 0.7539 | 0.6921 | 0.7501 | 0.6329 | 0.6696 | 0.787 | 0.8338 |
| 0.4659 | 23.0 | 23000 | 0.3881 | 0.7229 | 0.9359 | 0.8721 | 0.1295 | 0.6982 | 0.7482 | 0.3033 | 0.7637 | 0.7699 | 0.3028 | 0.7558 | 0.7945 | 0.7215 | 0.7793 | 0.6537 | 0.6887 | 0.7935 | 0.8418 |
| 0.476 | 24.0 | 24000 | 0.4041 | 0.6974 | 0.9329 | 0.8277 | 0.1692 | 0.6708 | 0.7309 | 0.2946 | 0.7399 | 0.7446 | 0.2606 | 0.7272 | 0.7784 | 0.6745 | 0.7354 | 0.6167 | 0.6562 | 0.8008 | 0.8422 |
| 0.4144 | 25.0 | 25000 | 0.3697 | 0.7384 | 0.9476 | 0.8756 | 0.1688 | 0.7133 | 0.7618 | 0.3101 | 0.7763 | 0.7809 | 0.2761 | 0.7666 | 0.8109 | 0.7223 | 0.773 | 0.6905 | 0.7237 | 0.8024 | 0.846 |
| 0.6979 | 26.0 | 26000 | 0.3699 | 0.7347 | 0.9514 | 0.8711 | 0.1661 | 0.7181 | 0.73 | 0.3042 | 0.7751 | 0.7798 | 0.274 | 0.7739 | 0.7747 | 0.7269 | 0.7813 | 0.6696 | 0.7113 | 0.8076 | 0.8469 |
| 0.5105 | 27.0 | 27000 | 0.3672 | 0.7439 | 0.9498 | 0.8953 | 0.1973 | 0.7226 | 0.7565 | 0.3107 | 0.7836 | 0.7882 | 0.3234 | 0.7824 | 0.7989 | 0.7272 | 0.7801 | 0.7013 | 0.7397 | 0.8031 | 0.845 |
| 0.4635 | 28.0 | 28000 | 0.3588 | 0.7511 | 0.9547 | 0.8886 | 0.246 | 0.7298 | 0.7617 | 0.3085 | 0.7913 | 0.7971 | 0.3761 | 0.7844 | 0.8112 | 0.7263 | 0.7821 | 0.7119 | 0.7521 | 0.815 | 0.8571 |
| 0.5447 | 29.0 | 29000 | 0.4269 | 0.7002 | 0.9507 | 0.8595 | 0.2218 | 0.6811 | 0.7176 | 0.2901 | 0.7457 | 0.75 | 0.3412 | 0.7405 | 0.7706 | 0.6529 | 0.7137 | 0.6597 | 0.7052 | 0.7882 | 0.8313 |
| 0.3582 | 30.0 | 30000 | 0.3582 | 0.7525 | 0.9477 | 0.8899 | 0.1913 | 0.7345 | 0.7843 | 0.3133 | 0.791 | 0.7963 | 0.3302 | 0.785 | 0.8261 | 0.7305 | 0.7865 | 0.7022 | 0.7356 | 0.8248 | 0.8668 |
| 0.5428 | 31.0 | 31000 | 0.3839 | 0.7306 | 0.9549 | 0.8861 | 0.157 | 0.707 | 0.7562 | 0.3038 | 0.772 | 0.7767 | 0.291 | 0.7621 | 0.8003 | 0.7048 | 0.766 | 0.6953 | 0.7356 | 0.7918 | 0.8285 |
| 0.4257 | 32.0 | 32000 | 0.3435 | 0.7691 | 0.9579 | 0.8979 | 0.1462 | 0.7552 | 0.7886 | 0.3172 | 0.806 | 0.8112 | 0.2958 | 0.8032 | 0.8352 | 0.7551 | 0.8058 | 0.732 | 0.7675 | 0.8203 | 0.8601 |
| 0.4555 | 33.0 | 33000 | 0.3404 | 0.7594 | 0.9571 | 0.9004 | 0.1775 | 0.7488 | 0.7696 | 0.3116 | 0.7999 | 0.8043 | 0.2972 | 0.7983 | 0.8178 | 0.7524 | 0.8014 | 0.7026 | 0.7495 | 0.8232 | 0.8622 |
| 0.4041 | 34.0 | 34000 | 0.3376 | 0.7636 | 0.9597 | 0.8954 | 0.2114 | 0.7438 | 0.7815 | 0.3155 | 0.8009 | 0.8066 | 0.335 | 0.7991 | 0.8226 | 0.754 | 0.8028 | 0.7136 | 0.7562 | 0.8233 | 0.8608 |
| 0.4214 | 35.0 | 35000 | 0.3321 | 0.763 | 0.9569 | 0.9045 | 0.2104 | 0.7405 | 0.7754 | 0.3153 | 0.8026 | 0.8058 | 0.3039 | 0.7932 | 0.8216 | 0.7431 | 0.8006 | 0.7167 | 0.7521 | 0.8291 | 0.8648 |
| 0.4448 | 36.0 | 36000 | 0.3371 | 0.7682 | 0.9618 | 0.9098 | 0.2417 | 0.7433 | 0.7862 | 0.3155 | 0.8044 | 0.8084 | 0.33 | 0.7944 | 0.8259 | 0.755 | 0.799 | 0.7253 | 0.7613 | 0.8242 | 0.8648 |
| 0.4959 | 37.0 | 37000 | 0.3453 | 0.7548 | 0.9564 | 0.9069 | 0.1768 | 0.7334 | 0.7937 | 0.3097 | 0.7944 | 0.7977 | 0.3251 | 0.7796 | 0.8376 | 0.7553 | 0.8038 | 0.701 | 0.7371 | 0.8082 | 0.8521 |
| 0.4388 | 38.0 | 38000 | 0.3390 | 0.7615 | 0.961 | 0.9044 | 0.1391 | 0.7481 | 0.7754 | 0.311 | 0.7998 | 0.8038 | 0.277 | 0.7954 | 0.8243 | 0.7523 | 0.8012 | 0.7146 | 0.7531 | 0.8175 | 0.8572 |
| 0.364 | 39.0 | 39000 | 0.3301 | 0.7624 | 0.95 | 0.8996 | 0.1968 | 0.758 | 0.7724 | 0.3122 | 0.8012 | 0.8046 | 0.2741 | 0.8034 | 0.8187 | 0.7593 | 0.8119 | 0.7028 | 0.7371 | 0.825 | 0.8649 |
| 0.4423 | 40.0 | 40000 | 0.3265 | 0.7682 | 0.951 | 0.8942 | 0.1885 | 0.7628 | 0.7604 | 0.3178 | 0.8099 | 0.8139 | 0.3056 | 0.8098 | 0.8128 | 0.7557 | 0.8115 | 0.7204 | 0.7577 | 0.8285 | 0.8726 |
| 0.3772 | 41.0 | 41000 | 0.3485 | 0.7493 | 0.9639 | 0.9004 | 0.2278 | 0.7341 | 0.7674 | 0.3049 | 0.7884 | 0.7944 | 0.3943 | 0.7842 | 0.8144 | 0.7131 | 0.7694 | 0.7034 | 0.7443 | 0.8314 | 0.8694 |
| 0.4682 | 42.0 | 42000 | 0.3570 | 0.7437 | 0.9572 | 0.894 | 0.2024 | 0.7235 | 0.7549 | 0.3045 | 0.782 | 0.7869 | 0.331 | 0.7728 | 0.8017 | 0.7434 | 0.794 | 0.6911 | 0.7299 | 0.7965 | 0.837 |
| 0.4829 | 43.0 | 43000 | 0.3295 | 0.7733 | 0.9559 | 0.9061 | 0.2047 | 0.7635 | 0.7853 | 0.3174 | 0.8091 | 0.8123 | 0.2933 | 0.8077 | 0.8238 | 0.77 | 0.8153 | 0.7311 | 0.767 | 0.8189 | 0.8546 |
| 0.4646 | 44.0 | 44000 | 0.3219 | 0.7697 | 0.9622 | 0.9075 | 0.2313 | 0.756 | 0.7799 | 0.3135 | 0.808 | 0.8135 | 0.367 | 0.8049 | 0.8236 | 0.7647 | 0.8129 | 0.7122 | 0.7577 | 0.8321 | 0.87 |
| 0.3236 | 45.0 | 45000 | 0.3212 | 0.7713 | 0.961 | 0.9025 | 0.2236 | 0.7623 | 0.7637 | 0.3156 | 0.8108 | 0.8148 | 0.3744 | 0.8115 | 0.8092 | 0.7576 | 0.8139 | 0.7256 | 0.7624 | 0.8307 | 0.8683 |
| 0.3759 | 46.0 | 46000 | 0.3281 | 0.756 | 0.9604 | 0.8943 | 0.2171 | 0.7466 | 0.7606 | 0.3069 | 0.7945 | 0.8001 | 0.3131 | 0.7971 | 0.8076 | 0.7556 | 0.8042 | 0.6797 | 0.7237 | 0.8326 | 0.8725 |
| 0.3627 | 47.0 | 47000 | 0.3181 | 0.7674 | 0.9546 | 0.9086 | 0.2258 | 0.7583 | 0.7736 | 0.3161 | 0.8091 | 0.8149 | 0.3537 | 0.8108 | 0.8225 | 0.7595 | 0.8161 | 0.7131 | 0.7577 | 0.8297 | 0.8709 |
| 0.4146 | 48.0 | 48000 | 0.3072 | 0.7807 | 0.9641 | 0.9209 | 0.2599 | 0.7662 | 0.8023 | 0.3178 | 0.8212 | 0.8257 | 0.3962 | 0.8184 | 0.8447 | 0.7758 | 0.8249 | 0.7334 | 0.7758 | 0.833 | 0.8763 |
| 0.4113 | 49.0 | 49000 | 0.3452 | 0.74 | 0.9649 | 0.8877 | 0.258 | 0.7318 | 0.7372 | 0.3014 | 0.7885 | 0.793 | 0.374 | 0.784 | 0.7922 | 0.7012 | 0.7678 | 0.6958 | 0.7443 | 0.8229 | 0.867 |
| 0.4146 | 50.0 | 50000 | 0.3091 | 0.7822 | 0.9588 | 0.9016 | 0.2296 | 0.7753 | 0.7786 | 0.3203 | 0.82 | 0.8255 | 0.3581 | 0.8178 | 0.8262 | 0.772 | 0.8284 | 0.7344 | 0.7716 | 0.8404 | 0.8766 |
| 0.3777 | 51.0 | 51000 | 0.3168 | 0.7772 | 0.9559 | 0.9072 | 0.1974 | 0.7684 | 0.7844 | 0.3203 | 0.8163 | 0.8209 | 0.2859 | 0.816 | 0.8346 | 0.7685 | 0.8183 | 0.7287 | 0.7696 | 0.8343 | 0.8747 |
| 0.3417 | 52.0 | 52000 | 0.3135 | 0.7745 | 0.958 | 0.9068 | 0.2364 | 0.7548 | 0.804 | 0.3164 | 0.8137 | 0.8183 | 0.3284 | 0.8071 | 0.8433 | 0.7776 | 0.828 | 0.7113 | 0.7515 | 0.8347 | 0.8754 |
| 0.4088 | 53.0 | 53000 | 0.3145 | 0.7689 | 0.9569 | 0.9183 | 0.2712 | 0.7593 | 0.7631 | 0.3134 | 0.8108 | 0.8147 | 0.3583 | 0.8093 | 0.8093 | 0.7593 | 0.8117 | 0.7116 | 0.7557 | 0.8358 | 0.8767 |
| 0.4384 | 54.0 | 54000 | 0.2973 | 0.7848 | 0.9618 | 0.9157 | 0.2523 | 0.7746 | 0.7875 | 0.3201 | 0.8243 | 0.8284 | 0.3324 | 0.8227 | 0.8343 | 0.768 | 0.8203 | 0.7407 | 0.782 | 0.8456 | 0.8828 |
| 0.3848 | 55.0 | 55000 | 0.3071 | 0.7806 | 0.9572 | 0.911 | 0.2747 | 0.7711 | 0.7916 | 0.3205 | 0.8198 | 0.8234 | 0.3651 | 0.8158 | 0.8349 | 0.7773 | 0.8288 | 0.7303 | 0.767 | 0.834 | 0.8744 |
| 0.4163 | 56.0 | 56000 | 0.3055 | 0.7775 | 0.9568 | 0.9054 | 0.2209 | 0.7647 | 0.784 | 0.3201 | 0.8194 | 0.8221 | 0.3228 | 0.8135 | 0.8257 | 0.7653 | 0.8181 | 0.7328 | 0.7789 | 0.8343 | 0.8694 |
| 0.4013 | 57.0 | 57000 | 0.3098 | 0.7777 | 0.9568 | 0.9094 | 0.2633 | 0.764 | 0.8032 | 0.3181 | 0.8165 | 0.8202 | 0.3683 | 0.8118 | 0.8402 | 0.7674 | 0.8219 | 0.7292 | 0.7696 | 0.8365 | 0.8691 |
| 0.2877 | 58.0 | 58000 | 0.2941 | 0.7872 | 0.9582 | 0.9177 | 0.2115 | 0.781 | 0.7936 | 0.3202 | 0.8214 | 0.8249 | 0.2915 | 0.8268 | 0.8291 | 0.7804 | 0.8294 | 0.7313 | 0.7629 | 0.8499 | 0.8825 |
| 0.4487 | 59.0 | 59000 | 0.2830 | 0.7979 | 0.9649 | 0.9219 | 0.2545 | 0.7887 | 0.8174 | 0.322 | 0.835 | 0.8389 | 0.377 | 0.8374 | 0.8519 | 0.7902 | 0.8378 | 0.7438 | 0.7881 | 0.8597 | 0.8907 |
| 0.3549 | 60.0 | 60000 | 0.3007 | 0.7947 | 0.9604 | 0.9171 | 0.3052 | 0.7754 | 0.8257 | 0.3241 | 0.8311 | 0.8339 | 0.3913 | 0.8184 | 0.8606 | 0.7931 | 0.8414 | 0.7457 | 0.783 | 0.8452 | 0.8771 |
| 0.3725 | 61.0 | 61000 | 0.3180 | 0.7636 | 0.9581 | 0.9079 | 0.2728 | 0.7514 | 0.7981 | 0.3094 | 0.8045 | 0.8077 | 0.3622 | 0.798 | 0.8388 | 0.7511 | 0.8074 | 0.701 | 0.7433 | 0.8388 | 0.8723 |
| 0.3535 | 62.0 | 62000 | 0.3056 | 0.7781 | 0.962 | 0.9129 | 0.2116 | 0.7632 | 0.8043 | 0.3189 | 0.8165 | 0.8191 | 0.3281 | 0.8074 | 0.8377 | 0.7639 | 0.8173 | 0.7345 | 0.7686 | 0.8359 | 0.8713 |
| 0.4038 | 63.0 | 63000 | 0.2947 | 0.7899 | 0.9584 | 0.9148 | 0.2826 | 0.7853 | 0.8087 | 0.3239 | 0.8263 | 0.8289 | 0.3443 | 0.8273 | 0.8401 | 0.7778 | 0.8282 | 0.7416 | 0.7784 | 0.8502 | 0.8802 |
| 0.4424 | 64.0 | 64000 | 0.2922 | 0.7939 | 0.9606 | 0.9113 | 0.241 | 0.7862 | 0.817 | 0.3225 | 0.8305 | 0.8342 | 0.3485 | 0.8289 | 0.8462 | 0.7892 | 0.8402 | 0.742 | 0.7804 | 0.8505 | 0.882 |
| 0.3878 | 65.0 | 65000 | 0.2872 | 0.799 | 0.9605 | 0.9221 | 0.2855 | 0.7922 | 0.8179 | 0.3216 | 0.8324 | 0.8358 | 0.3819 | 0.8297 | 0.8544 | 0.7892 | 0.837 | 0.7498 | 0.7809 | 0.858 | 0.8895 |
| 0.3628 | 66.0 | 66000 | 0.2975 | 0.7879 | 0.9592 | 0.9135 | 0.2664 | 0.7765 | 0.8108 | 0.3237 | 0.8232 | 0.8273 | 0.3577 | 0.8191 | 0.8456 | 0.791 | 0.8364 | 0.7272 | 0.7655 | 0.8456 | 0.8799 |
| 0.3657 | 67.0 | 67000 | 0.2714 | 0.8096 | 0.9656 | 0.9242 | 0.2913 | 0.799 | 0.8191 | 0.3297 | 0.8438 | 0.8486 | 0.4078 | 0.8418 | 0.8561 | 0.8102 | 0.8543 | 0.7621 | 0.8036 | 0.8566 | 0.8879 |
| 0.3254 | 68.0 | 68000 | 0.2934 | 0.793 | 0.9615 | 0.919 | 0.2837 | 0.7855 | 0.8032 | 0.3232 | 0.8291 | 0.8325 | 0.3423 | 0.8287 | 0.8382 | 0.7734 | 0.8199 | 0.7553 | 0.7948 | 0.8502 | 0.8828 |
| 0.4403 | 69.0 | 69000 | 0.2675 | 0.8138 | 0.9639 | 0.9218 | 0.2836 | 0.8104 | 0.8154 | 0.3296 | 0.8477 | 0.8539 | 0.4037 | 0.8523 | 0.856 | 0.8099 | 0.8592 | 0.7637 | 0.801 | 0.8677 | 0.9016 |
| 0.3707 | 70.0 | 70000 | 0.2790 | 0.8017 | 0.9582 | 0.923 | 0.2908 | 0.7982 | 0.8042 | 0.3243 | 0.8376 | 0.8421 | 0.398 | 0.8389 | 0.8458 | 0.8024 | 0.8515 | 0.7398 | 0.7809 | 0.8629 | 0.8939 |
| 0.3141 | 71.0 | 71000 | 0.2897 | 0.791 | 0.9656 | 0.9244 | 0.2887 | 0.7846 | 0.81 | 0.3221 | 0.8283 | 0.8329 | 0.3876 | 0.8297 | 0.8479 | 0.7686 | 0.8171 | 0.7479 | 0.7892 | 0.8564 | 0.8924 |
| 0.3931 | 72.0 | 72000 | 0.2896 | 0.7999 | 0.9558 | 0.9205 | 0.2829 | 0.7915 | 0.8152 | 0.3241 | 0.8333 | 0.8371 | 0.3537 | 0.8314 | 0.8531 | 0.7957 | 0.8416 | 0.7506 | 0.784 | 0.8534 | 0.8857 |
| 0.3108 | 73.0 | 73000 | 0.2754 | 0.8067 | 0.9674 | 0.9317 | 0.309 | 0.8031 | 0.802 | 0.3267 | 0.8413 | 0.8465 | 0.4119 | 0.8443 | 0.8441 | 0.7807 | 0.8288 | 0.7709 | 0.8124 | 0.8684 | 0.8983 |
| 0.3259 | 74.0 | 74000 | 0.2741 | 0.8073 | 0.9645 | 0.9314 | 0.2921 | 0.7985 | 0.8422 | 0.327 | 0.8429 | 0.8477 | 0.4056 | 0.8404 | 0.8717 | 0.8031 | 0.8497 | 0.7619 | 0.8031 | 0.857 | 0.8902 |
| 0.3673 | 75.0 | 75000 | 0.2774 | 0.8075 | 0.9697 | 0.9246 | 0.2445 | 0.8 | 0.822 | 0.3252 | 0.8427 | 0.8478 | 0.4028 | 0.8408 | 0.8602 | 0.7883 | 0.8348 | 0.7663 | 0.8098 | 0.8681 | 0.8988 |
| 0.3785 | 76.0 | 76000 | 0.2867 | 0.8005 | 0.962 | 0.9259 | 0.2625 | 0.7912 | 0.8152 | 0.3243 | 0.8353 | 0.8393 | 0.3776 | 0.8332 | 0.8547 | 0.789 | 0.8382 | 0.7507 | 0.7887 | 0.8619 | 0.8911 |
| 0.3441 | 77.0 | 77000 | 0.2762 | 0.8081 | 0.9584 | 0.9263 | 0.3086 | 0.802 | 0.8231 | 0.3269 | 0.8434 | 0.8469 | 0.3839 | 0.8449 | 0.8575 | 0.7948 | 0.8435 | 0.7573 | 0.7964 | 0.8721 | 0.9009 |
| 0.344 | 78.0 | 78000 | 0.2863 | 0.8003 | 0.9617 | 0.9286 | 0.2783 | 0.793 | 0.8183 | 0.3227 | 0.8338 | 0.8376 | 0.3869 | 0.8327 | 0.8544 | 0.785 | 0.8336 | 0.751 | 0.7866 | 0.8648 | 0.8926 |
| 0.339 | 79.0 | 79000 | 0.2687 | 0.8186 | 0.9613 | 0.9276 | 0.2672 | 0.8113 | 0.8268 | 0.3303 | 0.8499 | 0.8542 | 0.3919 | 0.85 | 0.8578 | 0.8079 | 0.8519 | 0.7733 | 0.8098 | 0.8747 | 0.901 |
| 0.2642 | 80.0 | 80000 | 0.2587 | 0.8227 | 0.9658 | 0.9254 | 0.2459 | 0.8182 | 0.8269 | 0.3321 | 0.8543 | 0.8583 | 0.39 | 0.8576 | 0.858 | 0.8123 | 0.8551 | 0.78 | 0.8165 | 0.8757 | 0.9033 |
| 0.3122 | 81.0 | 81000 | 0.2692 | 0.8148 | 0.9609 | 0.9193 | 0.2916 | 0.8076 | 0.8143 | 0.3303 | 0.8479 | 0.8517 | 0.3864 | 0.8495 | 0.851 | 0.8096 | 0.8557 | 0.7596 | 0.7974 | 0.8751 | 0.902 |
| 0.3475 | 82.0 | 82000 | 0.2805 | 0.797 | 0.9577 | 0.9257 | 0.2179 | 0.789 | 0.816 | 0.3231 | 0.8329 | 0.8366 | 0.3324 | 0.831 | 0.8536 | 0.7938 | 0.8421 | 0.7365 | 0.7789 | 0.8607 | 0.8888 |
| 0.4223 | 83.0 | 83000 | 0.2652 | 0.8082 | 0.9656 | 0.9314 | 0.2965 | 0.8025 | 0.8239 | 0.3252 | 0.8455 | 0.8485 | 0.3701 | 0.8477 | 0.8643 | 0.797 | 0.8447 | 0.7561 | 0.8005 | 0.8714 | 0.9003 |
| 0.3136 | 84.0 | 84000 | 0.2655 | 0.8157 | 0.9692 | 0.9236 | 0.3004 | 0.8136 | 0.8269 | 0.3272 | 0.85 | 0.8539 | 0.4066 | 0.8524 | 0.8615 | 0.8092 | 0.8573 | 0.7676 | 0.8031 | 0.8703 | 0.9012 |
| 0.2849 | 85.0 | 85000 | 0.2659 | 0.8126 | 0.9654 | 0.9284 | 0.2584 | 0.8039 | 0.8435 | 0.3268 | 0.8465 | 0.85 | 0.3664 | 0.8451 | 0.8762 | 0.7995 | 0.8459 | 0.7634 | 0.7995 | 0.8748 | 0.9045 |
| 0.3634 | 86.0 | 86000 | 0.2642 | 0.8194 | 0.9604 | 0.9231 | 0.2498 | 0.8143 | 0.8379 | 0.3308 | 0.8509 | 0.8538 | 0.2906 | 0.8519 | 0.8711 | 0.8109 | 0.8541 | 0.7685 | 0.8021 | 0.8789 | 0.9051 |
| 0.4086 | 87.0 | 87000 | 0.2655 | 0.8124 | 0.9649 | 0.924 | 0.3069 | 0.8076 | 0.8237 | 0.3279 | 0.8446 | 0.8484 | 0.3824 | 0.845 | 0.8606 | 0.7972 | 0.8421 | 0.7569 | 0.7938 | 0.8832 | 0.9095 |
| 0.3238 | 88.0 | 88000 | 0.2543 | 0.822 | 0.9675 | 0.9315 | 0.2809 | 0.8157 | 0.8393 | 0.3303 | 0.8551 | 0.8586 | 0.3553 | 0.8548 | 0.8723 | 0.8092 | 0.8541 | 0.7755 | 0.8144 | 0.8814 | 0.9073 |
| 0.465 | 89.0 | 89000 | 0.2690 | 0.818 | 0.9661 | 0.9275 | 0.3431 | 0.8111 | 0.8314 | 0.3274 | 0.8482 | 0.8524 | 0.4379 | 0.8484 | 0.8649 | 0.7979 | 0.8396 | 0.7725 | 0.8046 | 0.8835 | 0.913 |
| 0.37 | 90.0 | 90000 | 0.2602 | 0.8221 | 0.9593 | 0.9235 | 0.2761 | 0.8197 | 0.8417 | 0.3301 | 0.8531 | 0.8564 | 0.3389 | 0.8569 | 0.8744 | 0.8138 | 0.8584 | 0.7712 | 0.801 | 0.8813 | 0.9098 |
| 0.3063 | 91.0 | 91000 | 0.2617 | 0.8144 | 0.9619 | 0.9289 | 0.3302 | 0.8134 | 0.8147 | 0.3274 | 0.8485 | 0.8515 | 0.4172 | 0.8521 | 0.8532 | 0.8069 | 0.8575 | 0.7583 | 0.7902 | 0.8779 | 0.9068 |
| 0.2721 | 92.0 | 92000 | 0.2699 | 0.8123 | 0.961 | 0.9226 | 0.2827 | 0.8072 | 0.8225 | 0.3289 | 0.8461 | 0.8492 | 0.3659 | 0.848 | 0.86 | 0.8041 | 0.8467 | 0.7592 | 0.8 | 0.8736 | 0.9009 |
| 0.2704 | 93.0 | 93000 | 0.2531 | 0.8251 | 0.9587 | 0.9243 | 0.3119 | 0.8192 | 0.8461 | 0.3351 | 0.8579 | 0.8611 | 0.3758 | 0.8575 | 0.8786 | 0.8164 | 0.864 | 0.7768 | 0.8113 | 0.882 | 0.908 |
| 0.3274 | 94.0 | 94000 | 0.2599 | 0.8229 | 0.968 | 0.9292 | 0.3198 | 0.8176 | 0.8409 | 0.3306 | 0.8551 | 0.8583 | 0.4022 | 0.8564 | 0.8716 | 0.8147 | 0.8602 | 0.7744 | 0.8082 | 0.8797 | 0.9066 |
| 0.3198 | 95.0 | 95000 | 0.2561 | 0.8264 | 0.9661 | 0.9288 | 0.303 | 0.8232 | 0.8412 | 0.3327 | 0.859 | 0.8625 | 0.3882 | 0.8619 | 0.8739 | 0.8153 | 0.8628 | 0.7869 | 0.8191 | 0.8768 | 0.9057 |
| 0.3286 | 96.0 | 96000 | 0.2624 | 0.8178 | 0.9671 | 0.9258 | 0.3039 | 0.8123 | 0.8334 | 0.3303 | 0.8535 | 0.8567 | 0.3876 | 0.8552 | 0.87 | 0.817 | 0.8624 | 0.7705 | 0.8082 | 0.8658 | 0.8996 |
| 0.35 | 97.0 | 97000 | 0.2436 | 0.8304 | 0.9659 | 0.9291 | 0.3281 | 0.8274 | 0.8375 | 0.3336 | 0.866 | 0.8697 | 0.4291 | 0.8701 | 0.8707 | 0.8168 | 0.8644 | 0.7886 | 0.8299 | 0.8857 | 0.9147 |
| 0.3377 | 98.0 | 98000 | 0.2626 | 0.819 | 0.9551 | 0.921 | 0.294 | 0.8177 | 0.819 | 0.3325 | 0.8525 | 0.8559 | 0.3603 | 0.8563 | 0.8551 | 0.8114 | 0.8588 | 0.7658 | 0.8005 | 0.8798 | 0.9083 |
| 0.3617 | 99.0 | 99000 | 0.2673 | 0.8161 | 0.9609 | 0.9234 | 0.2854 | 0.8086 | 0.8305 | 0.3311 | 0.8506 | 0.8545 | 0.3632 | 0.8498 | 0.87 | 0.8023 | 0.8485 | 0.7725 | 0.8134 | 0.8736 | 0.9016 |
| 0.364 | 100.0 | 100000 | 0.2605 | 0.824 | 0.9626 | 0.9338 | 0.323 | 0.8195 | 0.8453 | 0.3306 | 0.8547 | 0.8589 | 0.4057 | 0.8573 | 0.8753 | 0.8203 | 0.861 | 0.7746 | 0.8093 | 0.8771 | 0.9064 |
| 0.3617 | 101.0 | 101000 | 0.2504 | 0.8272 | 0.9633 | 0.9247 | 0.2991 | 0.8225 | 0.8427 | 0.3337 | 0.859 | 0.8635 | 0.3882 | 0.8625 | 0.8814 | 0.8188 | 0.862 | 0.7768 | 0.8139 | 0.886 | 0.9146 |
| 0.2855 | 102.0 | 102000 | 0.2508 | 0.8213 | 0.9647 | 0.9329 | 0.3075 | 0.8192 | 0.8351 | 0.3294 | 0.8561 | 0.86 | 0.392 | 0.8619 | 0.8727 | 0.8165 | 0.865 | 0.7631 | 0.8031 | 0.8843 | 0.9118 |
| 0.3384 | 103.0 | 103000 | 0.2512 | 0.828 | 0.9619 | 0.934 | 0.3208 | 0.8276 | 0.8439 | 0.3343 | 0.8606 | 0.8646 | 0.4164 | 0.8657 | 0.8757 | 0.8158 | 0.8602 | 0.7848 | 0.8216 | 0.8833 | 0.9119 |
| 0.3331 | 104.0 | 104000 | 0.2545 | 0.8195 | 0.9621 | 0.9333 | 0.3061 | 0.8183 | 0.8263 | 0.331 | 0.8535 | 0.8564 | 0.4039 | 0.8573 | 0.8602 | 0.813 | 0.8569 | 0.7652 | 0.8052 | 0.8803 | 0.9071 |
| 0.3158 | 105.0 | 105000 | 0.2531 | 0.8304 | 0.9634 | 0.9254 | 0.2998 | 0.8263 | 0.8411 | 0.3356 | 0.8632 | 0.8671 | 0.3968 | 0.865 | 0.8794 | 0.8222 | 0.867 | 0.7788 | 0.8196 | 0.89 | 0.9148 |
| 0.301 | 106.0 | 106000 | 0.2596 | 0.8221 | 0.961 | 0.9276 | 0.2968 | 0.8191 | 0.8335 | 0.3325 | 0.8551 | 0.8587 | 0.3629 | 0.8586 | 0.8703 | 0.8082 | 0.8543 | 0.7767 | 0.8149 | 0.8815 | 0.9067 |
| 0.3579 | 107.0 | 107000 | 0.2434 | 0.8339 | 0.9647 | 0.9304 | 0.3098 | 0.832 | 0.8464 | 0.3344 | 0.8641 | 0.8685 | 0.3989 | 0.8705 | 0.877 | 0.8228 | 0.8678 | 0.7883 | 0.8227 | 0.8905 | 0.915 |
| 0.3682 | 108.0 | 108000 | 0.2440 | 0.8324 | 0.9624 | 0.9341 | 0.3185 | 0.8311 | 0.8428 | 0.3348 | 0.8644 | 0.8681 | 0.404 | 0.8697 | 0.8751 | 0.8208 | 0.8648 | 0.7831 | 0.8227 | 0.8932 | 0.9167 |
| 0.3234 | 109.0 | 109000 | 0.2532 | 0.8224 | 0.9605 | 0.9338 | 0.3491 | 0.8203 | 0.8253 | 0.3316 | 0.8569 | 0.8609 | 0.4301 | 0.8603 | 0.8643 | 0.8134 | 0.8622 | 0.7699 | 0.8108 | 0.884 | 0.9098 |
| 0.3412 | 110.0 | 110000 | 0.2400 | 0.8375 | 0.966 | 0.9346 | 0.3187 | 0.8398 | 0.8387 | 0.3368 | 0.8692 | 0.8733 | 0.4148 | 0.878 | 0.8725 | 0.829 | 0.8722 | 0.7912 | 0.8309 | 0.8923 | 0.9167 |
| 0.4866 | 111.0 | 111000 | 0.2558 | 0.8248 | 0.9649 | 0.9282 | 0.3024 | 0.8222 | 0.8365 | 0.3305 | 0.8537 | 0.8579 | 0.3818 | 0.858 | 0.8692 | 0.8161 | 0.8596 | 0.7728 | 0.8052 | 0.8855 | 0.9089 |
| 0.2781 | 112.0 | 112000 | 0.2461 | 0.8324 | 0.9616 | 0.9294 | 0.3114 | 0.8333 | 0.8408 | 0.3349 | 0.8623 | 0.8659 | 0.3745 | 0.8679 | 0.8706 | 0.8247 | 0.8686 | 0.7794 | 0.8124 | 0.893 | 0.9166 |
| 0.3233 | 113.0 | 113000 | 0.2467 | 0.8333 | 0.9634 | 0.9328 | 0.3187 | 0.8308 | 0.8462 | 0.3342 | 0.865 | 0.8689 | 0.4206 | 0.8696 | 0.8788 | 0.8201 | 0.867 | 0.7881 | 0.8222 | 0.8917 | 0.9176 |
| 0.2915 | 114.0 | 114000 | 0.2393 | 0.8366 | 0.9605 | 0.9276 | 0.3392 | 0.8387 | 0.8412 | 0.336 | 0.8677 | 0.8707 | 0.3934 | 0.8751 | 0.8747 | 0.8273 | 0.8734 | 0.79 | 0.8216 | 0.8925 | 0.917 |
| 0.3298 | 115.0 | 115000 | 0.2474 | 0.8323 | 0.9637 | 0.9268 | 0.3252 | 0.8334 | 0.8287 | 0.3324 | 0.8629 | 0.8667 | 0.4067 | 0.8709 | 0.8656 | 0.8222 | 0.8672 | 0.7779 | 0.8129 | 0.8966 | 0.9199 |
| 0.3928 | 116.0 | 116000 | 0.2425 | 0.8386 | 0.9669 | 0.9343 | 0.3302 | 0.8384 | 0.8459 | 0.336 | 0.8695 | 0.8739 | 0.4186 | 0.8764 | 0.879 | 0.8272 | 0.872 | 0.7927 | 0.8284 | 0.896 | 0.9213 |
| 0.3156 | 117.0 | 117000 | 0.2514 | 0.8285 | 0.9597 | 0.9311 | 0.3177 | 0.8317 | 0.827 | 0.3333 | 0.8605 | 0.8645 | 0.392 | 0.8681 | 0.8634 | 0.8205 | 0.8646 | 0.7744 | 0.8119 | 0.8907 | 0.917 |
| 0.3184 | 118.0 | 118000 | 0.2497 | 0.8294 | 0.9636 | 0.9301 | 0.3286 | 0.8321 | 0.8352 | 0.3334 | 0.8631 | 0.8669 | 0.3995 | 0.8691 | 0.8738 | 0.8231 | 0.8672 | 0.7759 | 0.817 | 0.8892 | 0.9166 |
| 0.2561 | 119.0 | 119000 | 0.2440 | 0.8339 | 0.963 | 0.929 | 0.3126 | 0.8324 | 0.8349 | 0.3359 | 0.8661 | 0.8701 | 0.3844 | 0.8711 | 0.8724 | 0.8266 | 0.8702 | 0.785 | 0.8232 | 0.8901 | 0.917 |
| 0.2776 | 120.0 | 120000 | 0.2442 | 0.8358 | 0.9622 | 0.9283 | 0.3166 | 0.8371 | 0.835 | 0.3366 | 0.8677 | 0.8717 | 0.3809 | 0.8759 | 0.8706 | 0.8278 | 0.8724 | 0.7874 | 0.8242 | 0.8921 | 0.9183 |
| 0.2591 | 121.0 | 121000 | 0.2473 | 0.8302 | 0.9627 | 0.93 | 0.3066 | 0.8283 | 0.8306 | 0.3334 | 0.8615 | 0.8653 | 0.3879 | 0.8663 | 0.8695 | 0.819 | 0.863 | 0.7774 | 0.8144 | 0.8942 | 0.9185 |
| 0.3241 | 122.0 | 122000 | 0.2518 | 0.831 | 0.9615 | 0.9294 | 0.3046 | 0.8319 | 0.8359 | 0.3329 | 0.8637 | 0.8668 | 0.3782 | 0.8693 | 0.8747 | 0.82 | 0.8662 | 0.7825 | 0.818 | 0.8905 | 0.9163 |
| 0.3599 | 123.0 | 123000 | 0.2362 | 0.84 | 0.9602 | 0.9317 | 0.3253 | 0.8382 | 0.8521 | 0.3385 | 0.8713 | 0.8747 | 0.3742 | 0.8767 | 0.8869 | 0.8325 | 0.8775 | 0.7897 | 0.8253 | 0.8978 | 0.9214 |
| 0.2938 | 124.0 | 124000 | 0.2403 | 0.84 | 0.9615 | 0.935 | 0.3254 | 0.8403 | 0.8457 | 0.337 | 0.8711 | 0.8742 | 0.3856 | 0.8781 | 0.8804 | 0.8314 | 0.8753 | 0.7929 | 0.8263 | 0.8958 | 0.921 |
| 0.2533 | 125.0 | 125000 | 0.2422 | 0.8363 | 0.9619 | 0.9322 | 0.3405 | 0.8376 | 0.8377 | 0.3355 | 0.8677 | 0.8709 | 0.4067 | 0.8744 | 0.8757 | 0.8245 | 0.8686 | 0.7883 | 0.8232 | 0.8962 | 0.9208 |
| 0.3822 | 126.0 | 126000 | 0.2427 | 0.8376 | 0.9645 | 0.9307 | 0.3178 | 0.8354 | 0.8477 | 0.3366 | 0.8695 | 0.8726 | 0.4005 | 0.8739 | 0.8793 | 0.8244 | 0.8696 | 0.7953 | 0.8299 | 0.8932 | 0.9182 |
| 0.3135 | 127.0 | 127000 | 0.2462 | 0.8335 | 0.9645 | 0.9305 | 0.3185 | 0.8314 | 0.8431 | 0.3348 | 0.8659 | 0.8691 | 0.4007 | 0.8697 | 0.8765 | 0.8224 | 0.8674 | 0.7884 | 0.8242 | 0.8898 | 0.9156 |
| 0.4718 | 128.0 | 128000 | 0.2414 | 0.8367 | 0.9644 | 0.9302 | 0.3142 | 0.8356 | 0.8441 | 0.3355 | 0.8685 | 0.8717 | 0.3959 | 0.8733 | 0.8785 | 0.8257 | 0.8702 | 0.7906 | 0.8258 | 0.8938 | 0.9191 |
| 0.2618 | 129.0 | 129000 | 0.2431 | 0.8372 | 0.9628 | 0.9374 | 0.3165 | 0.8355 | 0.8481 | 0.336 | 0.8684 | 0.8716 | 0.39 | 0.8742 | 0.88 | 0.8263 | 0.8688 | 0.7917 | 0.8268 | 0.8936 | 0.9191 |
| 0.3085 | 130.0 | 130000 | 0.2409 | 0.8389 | 0.9647 | 0.9338 | 0.3262 | 0.8395 | 0.8435 | 0.3365 | 0.8714 | 0.8748 | 0.4021 | 0.8795 | 0.8782 | 0.8298 | 0.8738 | 0.7931 | 0.8299 | 0.8938 | 0.9207 |
| 0.3682 | 131.0 | 131000 | 0.2402 | 0.8385 | 0.9641 | 0.9329 | 0.3243 | 0.8393 | 0.8421 | 0.3366 | 0.8699 | 0.8734 | 0.3914 | 0.8773 | 0.8787 | 0.8265 | 0.871 | 0.7923 | 0.8278 | 0.8966 | 0.9213 |
| 0.2999 | 132.0 | 132000 | 0.2411 | 0.8374 | 0.9641 | 0.9296 | 0.3115 | 0.8377 | 0.8448 | 0.3362 | 0.869 | 0.8722 | 0.3776 | 0.8761 | 0.8773 | 0.8243 | 0.8702 | 0.7922 | 0.8263 | 0.8957 | 0.9201 |
| 0.3393 | 133.0 | 133000 | 0.2399 | 0.839 | 0.9641 | 0.9345 | 0.3157 | 0.838 | 0.847 | 0.3372 | 0.87 | 0.8732 | 0.3913 | 0.8758 | 0.8797 | 0.8276 | 0.8706 | 0.7974 | 0.8309 | 0.8919 | 0.918 |
| 0.3064 | 134.0 | 134000 | 0.2377 | 0.8417 | 0.964 | 0.9382 | 0.3154 | 0.8393 | 0.8561 | 0.3387 | 0.8727 | 0.8754 | 0.3835 | 0.8772 | 0.8869 | 0.8329 | 0.8769 | 0.7981 | 0.8304 | 0.8943 | 0.9191 |
| 0.2612 | 135.0 | 135000 | 0.2375 | 0.8423 | 0.9644 | 0.9339 | 0.3126 | 0.842 | 0.849 | 0.3391 | 0.8727 | 0.8761 | 0.3803 | 0.8801 | 0.8812 | 0.8308 | 0.8748 | 0.7974 | 0.8314 | 0.8986 | 0.922 |
| 0.2906 | 136.0 | 136000 | 0.2385 | 0.8392 | 0.9641 | 0.9341 | 0.3276 | 0.8415 | 0.8443 | 0.3364 | 0.8705 | 0.874 | 0.3938 | 0.8796 | 0.8775 | 0.8299 | 0.8742 | 0.7909 | 0.8258 | 0.8969 | 0.9218 |
| 0.2954 | 137.0 | 137000 | 0.2363 | 0.8422 | 0.9641 | 0.9382 | 0.3286 | 0.8414 | 0.8493 | 0.3381 | 0.8725 | 0.8764 | 0.4059 | 0.8793 | 0.8806 | 0.8326 | 0.8757 | 0.7944 | 0.8304 | 0.8996 | 0.9231 |
| 0.304 | 138.0 | 138000 | 0.2413 | 0.8374 | 0.9641 | 0.934 | 0.3224 | 0.8376 | 0.8419 | 0.3362 | 0.8684 | 0.872 | 0.3962 | 0.8749 | 0.8758 | 0.8264 | 0.8712 | 0.7891 | 0.8253 | 0.8966 | 0.9195 |
| 0.2716 | 139.0 | 139000 | 0.2420 | 0.8401 | 0.964 | 0.9343 | 0.3285 | 0.8406 | 0.8493 | 0.3376 | 0.8709 | 0.8745 | 0.3978 | 0.8779 | 0.88 | 0.8264 | 0.8716 | 0.7967 | 0.8309 | 0.8971 | 0.9208 |
| 0.3027 | 140.0 | 140000 | 0.2401 | 0.8416 | 0.9644 | 0.9344 | 0.3279 | 0.8409 | 0.8482 | 0.3375 | 0.8715 | 0.8755 | 0.3965 | 0.8797 | 0.8787 | 0.8312 | 0.874 | 0.795 | 0.8304 | 0.8985 | 0.922 |
| 0.2667 | 141.0 | 141000 | 0.2400 | 0.8399 | 0.9641 | 0.9341 | 0.3252 | 0.8405 | 0.8463 | 0.3374 | 0.8704 | 0.8743 | 0.3943 | 0.8781 | 0.8801 | 0.8283 | 0.8728 | 0.7945 | 0.8294 | 0.8968 | 0.9208 |
| 0.2245 | 142.0 | 142000 | 0.2408 | 0.8404 | 0.9641 | 0.9337 | 0.3176 | 0.8406 | 0.8463 | 0.3371 | 0.8702 | 0.874 | 0.3929 | 0.8783 | 0.878 | 0.8276 | 0.8716 | 0.797 | 0.8304 | 0.8967 | 0.9201 |
| 0.3448 | 143.0 | 143000 | 0.2394 | 0.8421 | 0.9641 | 0.9344 | 0.3271 | 0.8427 | 0.851 | 0.3385 | 0.8725 | 0.8763 | 0.3978 | 0.8804 | 0.8815 | 0.8322 | 0.8753 | 0.7968 | 0.8325 | 0.8974 | 0.9213 |
| 0.3681 | 144.0 | 144000 | 0.2401 | 0.8413 | 0.9642 | 0.9341 | 0.3267 | 0.8413 | 0.8491 | 0.3377 | 0.8713 | 0.8749 | 0.396 | 0.879 | 0.8802 | 0.8309 | 0.8738 | 0.7958 | 0.8299 | 0.8974 | 0.921 |
| 0.2593 | 145.0 | 145000 | 0.2393 | 0.8414 | 0.9641 | 0.9344 | 0.3271 | 0.8411 | 0.8522 | 0.3378 | 0.8713 | 0.875 | 0.3947 | 0.879 | 0.8821 | 0.8313 | 0.8742 | 0.7955 | 0.8294 | 0.8973 | 0.9213 |
| 0.4266 | 146.0 | 146000 | 0.2398 | 0.8403 | 0.9641 | 0.934 | 0.3289 | 0.8405 | 0.8493 | 0.3368 | 0.8705 | 0.8745 | 0.4008 | 0.8786 | 0.8799 | 0.8302 | 0.8728 | 0.7933 | 0.8289 | 0.8974 | 0.9218 |
| 0.2411 | 147.0 | 147000 | 0.2392 | 0.8412 | 0.9641 | 0.934 | 0.3283 | 0.8403 | 0.8507 | 0.3374 | 0.8709 | 0.8747 | 0.3978 | 0.8787 | 0.8811 | 0.8315 | 0.8736 | 0.7949 | 0.8289 | 0.8973 | 0.9217 |
| 0.2675 | 148.0 | 148000 | 0.2393 | 0.8412 | 0.9641 | 0.934 | 0.3268 | 0.8407 | 0.8507 | 0.3374 | 0.871 | 0.8748 | 0.3947 | 0.879 | 0.881 | 0.8315 | 0.8738 | 0.795 | 0.8289 | 0.8973 | 0.9215 |
| 0.2945 | 149.0 | 149000 | 0.2394 | 0.8413 | 0.9641 | 0.9341 | 0.3268 | 0.8408 | 0.8507 | 0.3376 | 0.8711 | 0.8749 | 0.3947 | 0.8792 | 0.881 | 0.8309 | 0.8738 | 0.7956 | 0.8294 | 0.8973 | 0.9215 |
| 0.2848 | 150.0 | 150000 | 0.2394 | 0.8413 | 0.9641 | 0.9341 | 0.3268 | 0.8408 | 0.8507 | 0.3376 | 0.8711 | 0.8749 | 0.3947 | 0.8792 | 0.881 | 0.8309 | 0.8738 | 0.7956 | 0.8294 | 0.8973 | 0.9215 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.0+cu121
- Datasets 2.19.2
- Tokenizers 0.20.1
|
Styxxxx/llama2_7b_lora-snli
|
Styxxxx
| 2024-11-01T06:19:43Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-11-01T06:19:35Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
donghocha/starling-7b-raft-ft
|
donghocha
| 2024-11-01T06:14:15Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-31T04:47:36Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Styxxxx/llama2_7b_lora-anli_r2
|
Styxxxx
| 2024-11-01T05:57:06Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-11-01T05:17:23Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Givemeaname123/nomoney_80
|
Givemeaname123
| 2024-11-01T05:45:48Z | 37 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-01T05:42:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jonathanjordan21/test-qwen-summary
|
jonathanjordan21
| 2024-11-01T05:30:41Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-01T05:08:28Z |
---
base_model: unsloth/qwen2.5-0.5b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
# Uploaded model
- **Developed by:** jonathanjordan21
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-0.5b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
shevek/segformer-b0-finetuned-test
|
shevek
| 2024-11-01T05:27:37Z | 202 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2024-10-25T02:55:10Z |
---
library_name: transformers
license: other
base_model: nvidia/mit-b0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-test
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2053
- eval_mean_iou: 0.5448
- eval_mean_accuracy: 0.6296
- eval_overall_accuracy: 0.9130
- eval_accuracy_Structure (dimensional): nan
- eval_accuracy_Impervious (planiform): 0.9578
- eval_accuracy_Fences: 0.3758
- eval_accuracy_Water Storage/Tank: nan
- eval_accuracy_Pool < 100 sqft: 0.0
- eval_accuracy_Pool > 100 sqft: 0.8208
- eval_accuracy_Irrigated Planiform: 0.8708
- eval_accuracy_Irrigated Dimensional Low: 0.6817
- eval_accuracy_Irrigated Dimensional High: 0.9472
- eval_accuracy_Irrigated Bare: 0.4827
- eval_accuracy_Irrigable Planiform: 0.6668
- eval_accuracy_Irrigable Dimensional Low: 0.6013
- eval_accuracy_Irrigable Dimensional High: 0.7902
- eval_accuracy_Irrigable Bare: 0.5657
- eval_accuracy_Native Planiform: 0.9093
- eval_accuracy_Native Dimensional Low: 0.0
- eval_accuracy_Native Dimensional High: 0.0961
- eval_accuracy_Native Bare: 0.9332
- eval_accuracy_UDL: nan
- eval_accuracy_Open Water: 0.6613
- eval_accuracy_Artificial Turf: 0.9720
- eval_iou_Structure (dimensional): 0.0
- eval_iou_Impervious (planiform): 0.8964
- eval_iou_Fences: 0.3104
- eval_iou_Water Storage/Tank: nan
- eval_iou_Pool < 100 sqft: 0.0
- eval_iou_Pool > 100 sqft: 0.8199
- eval_iou_Irrigated Planiform: 0.7563
- eval_iou_Irrigated Dimensional Low: 0.5480
- eval_iou_Irrigated Dimensional High: 0.8920
- eval_iou_Irrigated Bare: 0.4053
- eval_iou_Irrigable Planiform: 0.6007
- eval_iou_Irrigable Dimensional Low: 0.5083
- eval_iou_Irrigable Dimensional High: 0.7595
- eval_iou_Irrigable Bare: 0.5106
- eval_iou_Native Planiform: 0.8678
- eval_iou_Native Dimensional Low: 0.0
- eval_iou_Native Dimensional High: 0.0961
- eval_iou_Native Bare: 0.8293
- eval_iou_UDL: nan
- eval_iou_Open Water: 0.5929
- eval_iou_Artificial Turf: 0.9584
- eval_runtime: 6.2852
- eval_samples_per_second: 15.91
- eval_steps_per_second: 1.114
- epoch: 10.8
- step: 270
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
Keltezaa/alicia-vikander-sdxl-flux
|
Keltezaa
| 2024-11-01T05:18:17Z | 24 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"female",
"woman",
"actress",
"celebrity",
"alicia vikander",
"sdxl",
"flux1.d",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-01T05:18:16Z |
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=RentCivit&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- female
- woman
- actress
- celebrity
- alicia vikander
- sdxl
- flux1.d
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: alicia_vikander
widget:
- text: 'alicia_vikander A vivid portrayal of a young with striking blue eyes and freckles, wrapped in a fluffy white fur coat with a hood. She holds a white wolf with piercing blue eyes in her arms. The background is a deep blue with bokeh effects, creating a dreamy atmosphere. The ''s pose is serene, with her head resting gently on the wolf''s neck. The image has a fantasy and ethereal style, emphasizing the interplay of light and shadow.'
output:
url: >-
27575000.jpeg
- text: 'alicia_vikander A portrait of a young woman in a natural setting. She leans against a tree trunk, her face partially hidden by the bark. Her blonde hair flows freely, and she wears a white sleeveless top. Soft, natural light illuminates her face, highlighting her features and the texture of the tree bark. The background is blurred, highlighting the subject. The image is candid and serene, capturing a moment of tranquility and connection with nature. '
output:
url: >-
27575032.jpeg
- text: 'A close-up portrait of alicia_vikander in a rainy environment. She has wet hair and wears a green sleeveless top with a leather strap around her neck. Her intense gaze is directed to the side, conveying determination. The background is a blend of dark and light colors, depicting heavy rain and greenery. Water droplets cover her skin, indicating recent rainfall. The image style is cinematic, emphasizing dramatic lighting and shadow play.'
output:
url: >-
27575059.jpeg
- text: 'alicia_vikander, A portrait of a woman with long, wavy blonde hair, illuminated by soft, golden light. She has a contemplative expression, resting her chin on her hand, and her gaze is directed slightly away from the camera. The background is blurred, highlighting the subject. She wears a black top and has multiple tattoos on her arms, including one on her left shoulder and another on her right shoulder. The woman''s hair is styled in loose waves, with strands cascading down her back, creating a sense of movement and energy. The image has a dreamy, ethereal quality, emphasizing the subject''s facial features and hair texture. '
output:
url: >-
27575086.jpeg
- text: 'A cinematic portrayal of a female warrior in the foreground. She has long, wavy brown hair and wears a detailed, metallic armor with a red strap across her chest. Her intense gaze is directed to the side. In the background, a group of soldiers in full combat gear stand, one holding a flaming spear. The scene is set against a backdrop of rugged terrain and a fiery atmosphere. The color palette is dominated by earthy tones, with the red strap adding a pop of color.'
output:
url: >-
27575108.jpeg
- text: 'A vibrant outdoor portrait of alicia_vikander captured in a candid moment. She stands in a field of yellow flowers, with a clear blue sky in the background. She wears a white sleeveless dress, a brown vest, and a multi-layered necklace with a circular pendant. Her long, wavy brown hair flows down her back, and she wears round sunglasses with reflective lenses. Her arms are raised above her head, and her fingers are delicately placed on her forehead. The woman''s tattoos are visible on her arms, including one on her left arm and another on her right wrist. The overall style of the image is bohemian, evoking feelings of freedom and tranquility.,'
output:
url: >-
27575117.jpeg
- text: 'A portrait of alicia_vikander captured in a candid moment. She has long, wavy brown hair and striking brown eyes. She wears a white turtleneck sweater and a brown, fluffy fur vest over a black jacket. A silver necklace with a circular pendant hangs around her neck. She adjusts her hair with one hand, and a gold watch is on her wrist. The blurred background features bokeh lights, indicating it is a professional photograph in an urban setting,'
output:
url: >-
27575147.jpeg
- text: 'alicia_vikander The image shows a woman with long brown hair wearing a green sweater against a dark background, captured in a close-up portrait.'
output:
url: >-
27575173.jpeg
- text: 'A portrait of alicia_vikander standing on a city street. She wears a white blouse with a lace-up front and dark trousers. Her hair is pulled back into a ponytail, and she carries a black handbag with a gold chain strap. The alicia_vikander looks directly at the camera with a neutral expression. The background shows a bustling urban scene with pedestrians, buildings, and street lamps. The image style is candid and natural, capturing a moment in time. '
output:
url: >-
27575185.jpeg
- text: 'A portrait of alicia_vikander with long, straight blonde hair and striking brown eyes. She wears a pink blouse with black polka dots. The dark background contrasts with her pale complexion, highlighting her features. Soft lighting casts a gentle glow on her face and highlighting her facial features without harsh shadows. The image conveys a serene and contemplative mood. '
output:
url: >-
27575206.jpeg
- text: 'A monochromatic portrait of alicia_vikander, captured in a side profile. She wears a wide-brimmed hat, and her hair is styled in loose waves. Her makeup is subtle, emphasizing her eyes and lips. She is dressed in a dark, tailored jacket. The background is a deep, dark shade, contrasting with the lighter tones of her face and clothing. The image style is dramatic and evocative, emphasizing the subject''s features and expressions.'
output:
url: >-
27575230.jpeg
- text: 'A close-up portrait of alicia_vikander in a rainy environment. She has wet hair and wears a green sleeveless top with a leather strap around her neck. Her intense gaze is directed to the side, conveying determination. The background is a blend of dark and light colors, depicting heavy rain and greenery. Water droplets cover her skin, indicating recent rainfall. The image style is cinematic, emphasizing dramatic lighting and shadow play, reminiscent of the movie tomb raider '
output:
url: >-
27575248.jpeg
- text: 'A close-up portrait of alicia_vikander in a dimly lit, vintage setting. She wears a dark suit with a white shirt and a patterned red tie, looking like a experienced mafia boss from the godfather movie. Her hair is styled in a high ponytail, and she has a serious expression. The background shows a wooden door, a man in a military uniform, and several other individuals. The color palette consists primarily of dark tones, with the woman''s red tie contrasting against the muted background. The image evokes a somber and introspective mood.,'
output:
url: >-
27575288.jpeg
- text: 'A detailed, realistic portrait of alicia_vikander in a wet urban setting. She has a contemplative expression, her gaze directed away from the viewer. Her hair is wet, with droplets of water cascading down her face. She wears a dark, worn-out jacket. The background shows a city street with buildings, debris, and a visible fire. The color palette is muted, dominated by earthy browns, grays, and muted blues. The style resembles a professional photography, with meticulous attention to detail in the textures and colors of the clothing and surroundings.'
output:
url: >-
27575294.jpeg
- text: 'A fantasy-themed portrait of alicia_vikander standing in a dense forest. She has dark brown hair, dark eyes, and a stern expression. She wears a white blouse, brown leather corset, and black leather pants. She holds a silver dagger in her right hand and wears leather gloves. The background is a blend of green trees and mist, creating a serene and mystical atmosphere. The color palette consists primarily of greens, browns, and whites, with the woman''s skin and hair providing a stark contrast. '
output:
url: >-
27575300.jpeg
- text: 'alicia_vikander in a beige, textured jacket sits at a wooden table in a snowy outdoor setting. She holds a white mug filled with hot coffee and speaks on her mobile phone while looking at the viewer. Snow covers the ground and her hair. The background shows a wooden fence, a snow-covered structure, and trees. The image conveys a contemplative and serene mood.'
output:
url: >-
27575342.jpeg
- text: 'A portrait of alicia_vikander with a striking mohawk hairstyle, set against a city skyline. She wears a black leather vest over a black top, accessorized with large, dangling earrings and a necklace with a pendant. The woman''s gaze is directly at the camera, and her expression is neutral. The background shows a cityscape with tall buildings and a tall tower. The image style is urban and edgy, emphasizing the woman''s rebellious and rebellious nature.'
output:
url: >-
27575362.jpeg
- text: 'A close-up portrait of alicia_vikander with two braids in her hair. She has fair skin with light freckles and striking brown eyes. She wears a blue sweater with white stripes. The background is a solid turquoise color, contrasting with her skin and hair. The image style is contemporary, emphasizing the subject''s natural beauty and expression.'
output:
url: >-
27575391.jpeg
- text: 'alicia_vikander, A portrait of a woman standing on a cobblestone street in an urban setting. She wears a teal dress with intricate gold embroidery and a high collar. Her hair is styled in loose waves and she wears a choker necklace. The background shows a bustling market with stalls selling baked goods and people walking by. The color palette is warm, dominated by the teal of her dress and the golden hues of the market lights.'
output:
url: >-
27575407.jpeg
---
# Alicia Vikander SDXL+FLUX
<Gallery />
## Model description
<p>Alicia Amanda Vikander is a Swedish actress, best known globally for playing the British heroine Lara Croft in the 2018 reboot Tomb Raider. </p><p>Trained for SDXL and FLUX1.D</p>
## Trigger words
You should use `alicia_vikander` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/alicia-vikander-sdxl-flux/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('Keltezaa/alicia-vikander-sdxl-flux', weight_name='AliciaVikander_F1D.safetensors')
image = pipeline('alicia_vikander, A portrait of a woman standing on a cobblestone street in an urban setting. She wears a teal dress with intricate gold embroidery and a high collar. Her hair is styled in loose waves and she wears a choker necklace. The background shows a bustling market with stalls selling baked goods and people walking by. The color palette is warm, dominated by the teal of her dress and the golden hues of the market lights.').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF
|
featherless-ai-quants
| 2024-11-01T04:50:50Z | 7 | 0 | null |
[
"gguf",
"text-generation",
"base_model:MiniMoog/Mergerix-7b-v0.5",
"base_model:quantized:MiniMoog/Mergerix-7b-v0.5",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-01T04:22:10Z |
---
base_model: MiniMoog/Mergerix-7b-v0.5
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# MiniMoog/Mergerix-7b-v0.5 GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [MiniMoog-Mergerix-7b-v0.5-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q8_0.gguf) | 7339.34 MB |
| Q4_K_S | [MiniMoog-Mergerix-7b-v0.5-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q4_K_S.gguf) | 3948.57 MB |
| Q2_K | [MiniMoog-Mergerix-7b-v0.5-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q2_K.gguf) | 2593.27 MB |
| Q6_K | [MiniMoog-Mergerix-7b-v0.5-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q6_K.gguf) | 5666.80 MB |
| Q3_K_M | [MiniMoog-Mergerix-7b-v0.5-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [MiniMoog-Mergerix-7b-v0.5-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q3_K_S.gguf) | 3017.97 MB |
| Q3_K_L | [MiniMoog-Mergerix-7b-v0.5-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q3_K_L.gguf) | 3644.97 MB |
| Q4_K_M | [MiniMoog-Mergerix-7b-v0.5-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q4_K_M.gguf) | 4166.07 MB |
| Q5_K_S | [MiniMoog-Mergerix-7b-v0.5-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q5_K_S.gguf) | 4766.19 MB |
| Q5_K_M | [MiniMoog-Mergerix-7b-v0.5-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-Q5_K_M.gguf) | 4893.69 MB |
| IQ4_XS | [MiniMoog-Mergerix-7b-v0.5-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/MiniMoog-Mergerix-7b-v0.5-GGUF/blob/main/MiniMoog-Mergerix-7b-v0.5-IQ4_XS.gguf) | 3761.66 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
mradermacher/LDCC-Hyeogi.04-GGUF
|
mradermacher
| 2024-11-01T04:40:48Z | 29 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"LDCC/LDCC-SOLAR-10.7B",
"hyeogi/SOLAR-10.7B-dpo-v1",
"ko",
"base_model:jumtul/LDCC-Hyeogi.04",
"base_model:quantized:jumtul/LDCC-Hyeogi.04",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-01T04:20:55Z |
---
base_model: jumtul/LDCC-Hyeogi.04
language:
- ko
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
- LDCC/LDCC-SOLAR-10.7B
- hyeogi/SOLAR-10.7B-dpo-v1
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jumtul/LDCC-Hyeogi.04
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LDCC-Hyeogi.04-GGUF/resolve/main/LDCC-Hyeogi.04.Q2_K.gguf) | Q2_K | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/LDCC-Hyeogi.04-GGUF/resolve/main/LDCC-Hyeogi.04.Q3_K_S.gguf) | Q3_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/LDCC-Hyeogi.04-GGUF/resolve/main/LDCC-Hyeogi.04.Q3_K_M.gguf) | Q3_K_M | 5.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LDCC-Hyeogi.04-GGUF/resolve/main/LDCC-Hyeogi.04.Q3_K_L.gguf) | Q3_K_L | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/LDCC-Hyeogi.04-GGUF/resolve/main/LDCC-Hyeogi.04.IQ4_XS.gguf) | IQ4_XS | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/LDCC-Hyeogi.04-GGUF/resolve/main/LDCC-Hyeogi.04.Q4_K_S.gguf) | Q4_K_S | 6.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LDCC-Hyeogi.04-GGUF/resolve/main/LDCC-Hyeogi.04.Q4_K_M.gguf) | Q4_K_M | 6.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LDCC-Hyeogi.04-GGUF/resolve/main/LDCC-Hyeogi.04.Q5_K_S.gguf) | Q5_K_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/LDCC-Hyeogi.04-GGUF/resolve/main/LDCC-Hyeogi.04.Q5_K_M.gguf) | Q5_K_M | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/LDCC-Hyeogi.04-GGUF/resolve/main/LDCC-Hyeogi.04.Q6_K.gguf) | Q6_K | 9.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LDCC-Hyeogi.04-GGUF/resolve/main/LDCC-Hyeogi.04.Q8_0.gguf) | Q8_0 | 11.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/LDCC-Hyeogi.04-GGUF/resolve/main/LDCC-Hyeogi.04.f16.gguf) | f16 | 21.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kiranshivaraju/convnext2-tiny-finetuned-pcb_data
|
kiranshivaraju
| 2024-11-01T04:36:20Z | 191 | 0 |
transformers
|
[
"transformers",
"safetensors",
"convnextv2",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-11-01T04:36:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tnwjd68317/v2_qwen2_lora
|
tnwjd68317
| 2024-11-01T04:36:20Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"feature-extraction",
"krx",
"arxiv:1910.09700",
"text-generation-inference",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-10-31T09:46:57Z |
---
library_name: transformers
tags:
- krx
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lightsout19/gpt2-rte
|
lightsout19
| 2024-11-01T04:35:27Z | 104 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-classification",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-01T04:30:40Z |
---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: gpt2-rte
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-rte
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6616
- Accuracy: 0.6354
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 78 | 0.7371 | 0.4621 |
| No log | 2.0 | 156 | 0.6927 | 0.5668 |
| No log | 3.0 | 234 | 0.6831 | 0.5884 |
| No log | 4.0 | 312 | 0.6574 | 0.6282 |
| No log | 5.0 | 390 | 0.6616 | 0.6354 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
stackofsugar/mentallongformer-cams-finetuned
|
stackofsugar
| 2024-11-01T04:33:33Z | 122 | 1 |
transformers
|
[
"transformers",
"safetensors",
"longformer",
"text-classification",
"en",
"base_model:AIMH/mental-longformer-base-4096",
"base_model:finetune:AIMH/mental-longformer-base-4096",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-30T16:19:42Z |
---
base_model:
- AIMH/mental-longformer-base-4096
language:
- en
library_name: transformers
license: mit
metrics:
- name: F1 Score
type: f1
value: 0.5524
verified: false
- name: Accuracy
type: accuracy
value: 0.6064
verified: false
- name: Precision
type: precision
value: 0.602
verified: false
- name: Recall
type: recall
value: 0.5385
verified: false
pipeline_tag: text-classification
---
# About This Model
This model is fine-tuned from the checkpoint of [AIMH/mental-longformer-base-4096](https://huggingface.co/AIMH/mental-longformer-base-4096) using [drmuskangarg/CAMS](https://github.com/drmuskangarg/CAMS/) dataset. For more information about the base Longformer model, please visit their [model page](https://huggingface.co/allenai/longformer-base-4096). We used the same configuration as `AIMH/mental-longformer-base-4096` including their tokenizer.
# Usage
If you wish to use my model to infer your dataset or maybe pre-train it further, you can import my model in a Python script/notebook.
```py
from transformers import LongformerTokenizer, LongformerForSequenceClassification
tokenizer = LongformerTokenizer.from_pretrained("aimh/mental-longformer-base-4096")
model = LongformerForSequenceClassification.from_pretrained("stackofsugar/mentallongformer-cams-finetuned")
```
If you prefer to use the high-level HuggingFace pipeline to make predictions, you can also do it in a Python script/notebook.
```py
from transformers import pipeline
pipe = pipeline("text-classification", model="stackofsugar/mentallongformer-cams-finetuned", tokenizer="aimh/mental-longformer-base-4096")
```
# More Information
For more information, visit my [GitHub Repo](https://github.com/stackofsugar/depression-causal-analysis).
|
yash072/wav2vec2-large-XLSR-Hindi-YashR
|
yash072
| 2024-11-01T04:32:36Z | 178 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"dataset:mozilla-foundation/common_voice_17_0",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:theainerd/Wav2Vec2-large-xlsr-hindi",
"base_model:finetune:theainerd/Wav2Vec2-large-xlsr-hindi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-10-23T14:31:50Z |
---
license: apache-2.0
datasets:
- mozilla-foundation/common_voice_17_0
- mozilla-foundation/common_voice_13_0
language:
- hi
metrics:
- wer
base_model:
- theainerd/Wav2Vec2-large-xlsr-hindi
pipeline_tag: automatic-speech-recognition
library_name: transformers
---
# Model's Improvment
This model card highlights the improvements from the base model, specifically a reduction in WER from 72% to 54%. This improvement reflects the efficacy of the fine-tuning process on Hindi speech data.
# Wav2Vec2-Large-XLSR-Hindi-Finetuned - Yash_Ratnaker
This model is a fine-tuned version of [theainerd/Wav2Vec2-large-xlsr-hindi](https://huggingface.co/theainerd/Wav2Vec2-large-xlsr-hindi) on the Common Voice 13 and 17 datasets. It is specifically optimized for Hindi speech recognition, with a notable improvement in transcription accuracy, achieving a **Word Error Rate (WER) of 54%**, compared to the base model’s WER of 72% on the same dataset.
## Model description
This Wav2Vec2 model, originally developed by Facebook AI, utilizes self-supervised learning on large unlabeled speech datasets and is then fine-tuned on labeled data. This approach enables the model to learn intricate linguistic features and transcribe speech in Hindi with high accuracy. Fine-tuning on Common Voice Hindi data allows the model to better capture the language's nuances, improving transcription quality.
## Intended uses & limitations
This model is ideal for automatic speech recognition (ASR) applications in Hindi, such as media transcription, accessibility services, and educational content transcription, where audio quality is controlled.
## Usage
The model can be used directly (without a language model) as follows:
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# Load the Hindi Common Voice dataset
test_dataset = load_dataset("common_voice", "hi", split="test[:2%]")
# Load the processor and model
processor = Wav2Vec2Processor.from_pretrained("yash072/wav2vec2-large-xlsr-YashHindi-4")
model = Wav2Vec2ForCTC.from_pretrained("yash072/wav2vec2-large-xlsr-YashHindi-4")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Function to process the dataset
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
# Perform inference
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
# Evaluation
The model can be evaluated as follows on the Hindi test data of Common Voice.
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
# Load the dataset and metrics
test_dataset = load_dataset("common_voice", "hi", split="test")
wer = load_metric("wer")
# Initialize processor and model
processor = Wav2Vec2Processor.from_pretrained("yash072/wav2vec2-large-xlsr-YashHindi-4")
model = Wav2Vec2ForCTC.from_pretrained("yash072/wav2vec2-large-xlsr-YashHindi-4")
model.to("cuda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
# Function to preprocess the data
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Evaluation function
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
### Limitations:
- The model may face challenges with dialectal or regional variations within Hindi.
- Performance can degrade with noisy audio or overlapping speech.
- It is not intended for real-time transcription due to latency considerations.
## Training and evaluation data
The model was fine-tuned on the Hindi portions of the Common Voice 13 and 17 datasets, which contain speech samples from native Hindi speakers. This data captures a range of accents, pronunciations, and recording conditions, enhancing the model’s ability to generalize across different speech patterns. Evaluation was performed on a carefully curated subset, ensuring a reliable benchmark for ASR performance in Hindi.
## Training procedure
### Hyperparameters and setup:
The following hyperparameters were used during training:
- **Learning rate**: 1e-4
- **Batch size**: 16 (per device)
- **Gradient accumulation steps**: 2
- **Evaluation strategy**: steps
- **Max steps**: 2500
- **Mixed precision**: FP16
- **Save steps**: 500
- **Evaluation steps**: 500
- **Logging steps**: 500
- **Warmup steps**: 500
- **Save total limit**: 1
### Training output
- **Global step**: 2500
- **Training runtime**: Approximately 1 hour 21 minutes
- **Epochs**: 5-6
### Training results
| Step | Training Loss | Validation Loss | WER |
|------|---------------|-----------------|--------|
| 500 | 5.603000 | 0.987691 | 0.7556 |
| 1000 | 0.720300 | 0.667561 | 0.6196 |
| 1500 | 0.507000 | 0.592814 | 0.5844 |
| 2000 | 0.431100 | 0.549786 | 0.5439 |
| 2500 | 0.395600 | 0.537703 | 0.5428 |
### Framework versions
Transformers: 4.42.4
PyTorch: 2.3.1+cu121
Datasets: 2.20.0
Tokenizers: 0.19.1
|
iecjsu/Phi-3.5-mini-IT-ORPO
|
iecjsu
| 2024-11-01T04:26:03Z | 8 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-01T04:24:09Z |
---
base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** iecjsu
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3.5-mini-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
restor/tcd-segformer-mit-b5
|
restor
| 2024-11-01T04:20:35Z | 542 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"segformer",
"semantic-segmentation",
"vision",
"ecology",
"image-segmentation",
"dataset:restor/tcd",
"arxiv:1910.09700",
"license:cc",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2024-05-20T11:11:41Z |
---
library_name: transformers
tags:
- semantic-segmentation
- vision
- ecology
datasets:
- restor/tcd
pipeline_tag: image-segmentation
widget:
- src: samples/610160855a90f10006fd303e_10_00418.tif
example_title: Urban scene
license: cc
metrics:
- accuracy
- f1
- iou
---
# Model Card for Restor's SegFormer-based TCD models
This is a semantic segmentation model that can delineate tree cover in high resolution (10 cm/px) aerial images.
This model card is mostly the same for all similar models uploaded to Hugging Face. The model name refers to the specific architecture variant (e.g. nvidia-mit-b0 to nvidia-mit-b5) but the broad details for training and evaluation are identical.
This repository is for `tcd-segformer-mit-b5`
## Citation and contact
**BibTeX:**
This paper was accepted into NeurIPS 2024 under the Datasets and Benchmarks track.
The citation will be updated once the final version is confirmed and the proceedings are online.
```latex
@inproceedings{restortcd,
author = {Veitch-Michaelis, Josh and Cottam, Andrew and Schweizer, Daniella Schweizer and Broadbent, Eben N. and Dao, David and Zhang, Ce and Almeyda Zambrano, Angelica and Max, Simeon}
title = {OAM-TCD: A globally diverse dataset of high-resolution tree cover maps},
booktitle = {Advances in Neural Information Processing Systems},
pages = {1--12},
publisher = {Curran Associates, Inc.},
volume = {37},
year = {2024}
```
Please contact josh [at] restor.eco for questions or further information.
## Model Details
### Model Description
This semantic segmentation model was trained on global aerial imagery and is able to accurately delineate tree cover in similar images. The model does not detect individual trees, but provides a per-pixel classification of tree/no-tree.
- **Developed by:** [Restor](https://restor.eco) / [ETH Zurich](https://ethz.ch)
- **Funded by:** This project was made possible via a (Google.org impact grant)[https://blog.google/outreach-initiatives/sustainability/restor-helps-anyone-be-part-ecological-restoration/]
- **Model type:** Semantic segmentation (binary class)
- **License:** Model training code is provided under an Apache-2 license. NVIDIA has released SegFormer under their own research license. Users should check the terms of this license before deploying. This model was trained on CC BY-NC imagery.
- **Finetuned from model:** SegFormer family
SegFormer is a variant of the Pyramid Vision Transformer v2 model, with many identical structural features and a semantic segmentation decode head. Functionally, the architecture is quite similar to a Feature Pyramid Network (FPN) as the output predictions are based on combining features from different stages of the network at different spatial resolutions.
### Model Sources
- **Repository:** https://github.com/restor-foundation/tcd
- **Paper:** We will release a preprint shortly.
## Uses
The primary use-case for this model is asessing canopy cover from aerial images (i.e. percentage of study area that is covered by tree canopy).
### Direct Use
This model is suitable for inference on a single image tile. For performing predictions on large orthomosaics, a higher level framework is required to manage tiling source imagery and stitching predictions. Our repository provides a comprehensive reference implementation of such a pipeline and has been tested on extremely large images (country-scale).
The model will give you predictions for an entire image. In most cases users will want to predict cover for a specific region of the image, for example a study plot or some other geographic boundary. If you predict tree cover in an image you should perform some kind of region-of-interest analysis on the results. Our linked pipeline repository supports shapefile-based region analysis.
### Out-of-Scope Use
While we trained the model on globally diverse imagery, some ecological biomes are under-represented in the training dataset and performance may vary. We therefore encourage users to experiment with their own imagery before using the model for any sort of mission-critical use.
The model was trained on imagery at a resolution of 10 cm/px. You may be able to get good predictions at other geospatial resolutions, but the results may not be reliable. In particular the model is essentially looking for "things that look like trees" and this is highly resolution dependent. If you want to routinely predict images at a higher or lower resolution, you should fine-tune this model on your own or a resampled version of the training dataset.
The model does not predict biomass, canopy height or other derived information. It only predicts the likelihood that some pixel is covered by tree canopy.
As-is, the model is not suitable for carbon credit estimation.
## Bias, Risks, and Limitations
The main limitation of this model is false positives over objects that look like, or could be confused as, trees. For example large bushes, shrubs or ground cover that looks like tree canopy.
The dataset used to train this model was annotated by non-experts. We believe that this is a reasonable trade-off given the size of the dataset and the results on independent test data, as well as empirical evaluation during operational use at Restor on partner data. However, there are almost certainly incorrect labels in the dataset and this may translate into incorrect predictions or other biases in model output. We have observed that the models tend to "disagree" with training data in a way that is probably correct (i.e. the aggregate statistics of the labels are good) and we are working to re-evaluate all training data to remove spurious labels.
We provide cross-validation results to give a robust estimate of prediction performance, as well as results on independent imagery (i.e. images the model has never seen) so users can make their own assessments. We do not provide any guarantees on accuracy and users should perform their own independent testing for any kind of "mission critical" or production use.
There is no substitute for trying the model on your own data and performing your own evaluation; we strongly encourage experimentation!
## How to Get Started with the Model
You can see a brief example of inference in [this Colab notebook](https://colab.research.google.com/drive/1N_rWko6jzGji3j_ayDR7ngT5lf4P8at_).
For end-to-end usage, we direct users to our prediction and training [pipeline](https://github.com/restor-foundation/tcd) which also supports tiled prediction over arbitrarily large images, reporting outputs, etc.
## Training Details
### Training Data
The training dataset may be found [here](https://huggingface.co/datasets/restor/tcd), where you can find more details about the collection and annotation procedure. Our image labels are largely released under a CC-BY 4.0 license, with smaller subsets of CC BY-NC and CC BY-SA imagery.
### Training Procedure
We used a 5-fold cross-validation process to adjust hyperparameters during training, before training on the "full" training set and evaluating on a holdout set of images. The model in the main branch of this repository should be considered the release version.
We used [Pytorch Lightning](https://lightning.ai/) as our training framework with hyperparameters listed below. The training procedure is straightforward and should be familiar to anyone with experience training deep neural networks.
A typical training command using our pipeline for this model:
```bash
tcd-train semantic segformer-mit-b5 data.output= ... data.root=/mnt/data/tcd/dataset/holdout data.tile_size=1024
```
#### Preprocessing
This repository contains a pre-processor configuration that can be used with the model, assuming you use the `transformers` library.
You can load this preprocessor easily by using e.g.
```python
from transformers import AutoImageProcessor
processor = AutoImageProcessor.from_pretrained('restor/tcd-segformer-mit-b5')
```
Note that we do not resize input images (so that the geospatial scale of the source image is respected) and we assume that normalisation is performed in this processing step and not as a dataset transform.
#### Training Hyperparameters
- Image size: 1024 px square
- Learning rate: initially 1e4-1e5
- Learning rate schedule: reduce on plateau
- Optimizer: AdamW
- Augmentation: random crop to 1024x1024, arbitrary rotation, flips, colour adjustments
- Number of epochs: 75 during cross-validation to ensure convergence; 50 for final models
- Normalisation: Imagenet statistics
#### Speeds, Sizes, Times
You should be able to evaluate the model on a CPU (even up to mit-b5) however you will need a lot of available RAM if you try to infer large tile sizes. In general we find that 1024 px inputs are as large as you want to go, given the fixed size of the output segmentation masks (i.e. it is probably better to perform inference in batched mode at 1024x1024 px than try to predict a single 2048x2048 px image).
All models were trained on a single GPU with 24 GB VRAM (NVIDIA RTX3090) attached to a 32-core machine with 64GB RAM. All but the largest models can be trained in under a day on a machine of this specification. The smallest models take under half a day, while the largest models take just over a day to train.
Feedback we've received from users (in the field) is that landowners are often interested in seeing the results of aerial surveys, but data bandwidth is often a prohibiting factor in remote areas. One of our goals was to support this kind of in-field usage, so that users who fly a survey can process results offline and in a reasonable amount of time (i.e. on the order of an hour).
## Evaluation
We report evaluation results on the OAM-TCD holdout split.
### Testing Data
The training dataset may be found [here](https://huggingface.co/datasets/restor/tcd).
This model (`main` branch) was trained on all `train` images and tested on the `test` (holdout) images.

### Metrics
We report F1, Accuracy and IoU on the holdout dataset, as well as results on a 5-fold cross validation split. Cross validtion is visualised as min/max error bars on the plots below.
### Results




## Environmental Impact
This estimate is the maximum (in terms of training time) for the SegFormer family of models presented here. Smaller models, such as `mit-b0` train in less than half a day.
- **Hardware Type:** NVIDIA RTX3090
- **Hours used:** < 36
- **Carbon Emitted:** 5.44 kg CO2 equivalent per model
Carbon emissions were be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
This estimate does not take into account time require for experimentation, failed training runs, etc. For example since we used cross-validation, each model actually required approximately 6x this estimate - one run for each fold, plus the final run.
Efficient inference on CPU is possible for field work, at the expense of inference latency. A typical single-battery drone flight can be processed in minutes.
## Model Card Authors
Josh Veitch-Michaelis, 2024; on behalf of the dataset authors.
|
peterchiou/flux-dev-lora
|
peterchiou
| 2024-11-01T04:15:31Z | 7 | 1 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-08-29T09:07:02Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
instance_prompt: mybreifs
---
# Flux Dev Lora
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
briefs
## What is this lora used for?
men's briefs.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('peterchiou/flux-dev-lora', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Xu-Ouyang/pythia-12b-deduped-int3-step1-GPTQ-wikitext2
|
Xu-Ouyang
| 2024-11-01T04:11:38Z | 75 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-11-01T04:09:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
asr-africa/w2v-bert-2.0-CV_Fleurs-lg-400hrs-v4
|
asr-africa
| 2024-11-01T04:09:14Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-10-26T18:40:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF
|
featherless-ai-quants
| 2024-11-01T04:06:52Z | 8 | 0 | null |
[
"gguf",
"text-generation",
"base_model:v000000/L3-Umbral-Storm-8B-t0.0001",
"base_model:quantized:v000000/L3-Umbral-Storm-8B-t0.0001",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-01T03:53:05Z |
---
base_model: v000000/L3-Umbral-Storm-8B-t0.0001
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# v000000/L3-Umbral-Storm-8B-t0.0001 GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [v000000-L3-Umbral-Storm-8B-t0.0001-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q8_0.gguf) | 8145.11 MB |
| Q4_K_S | [v000000-L3-Umbral-Storm-8B-t0.0001-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q4_K_S.gguf) | 4475.28 MB |
| Q2_K | [v000000-L3-Umbral-Storm-8B-t0.0001-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q2_K.gguf) | 3031.86 MB |
| Q6_K | [v000000-L3-Umbral-Storm-8B-t0.0001-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q6_K.gguf) | 6290.44 MB |
| Q3_K_M | [v000000-L3-Umbral-Storm-8B-t0.0001-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [v000000-L3-Umbral-Storm-8B-t0.0001-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q3_K_S.gguf) | 3494.74 MB |
| Q3_K_L | [v000000-L3-Umbral-Storm-8B-t0.0001-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q3_K_L.gguf) | 4121.74 MB |
| Q4_K_M | [v000000-L3-Umbral-Storm-8B-t0.0001-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q4_K_M.gguf) | 4692.78 MB |
| Q5_K_S | [v000000-L3-Umbral-Storm-8B-t0.0001-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q5_K_S.gguf) | 5339.90 MB |
| Q5_K_M | [v000000-L3-Umbral-Storm-8B-t0.0001-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-Q5_K_M.gguf) | 5467.40 MB |
| IQ4_XS | [v000000-L3-Umbral-Storm-8B-t0.0001-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/v000000-L3-Umbral-Storm-8B-t0.0001-GGUF/blob/main/v000000-L3-Umbral-Storm-8B-t0.0001-IQ4_XS.gguf) | 4276.62 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
mradermacher/llama-2-7b-Amharic-pretrained-GGUF
|
mradermacher
| 2024-11-01T04:02:36Z | 7 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:AbelBekele/llama-2-7b-Amharic-pretrained",
"base_model:quantized:AbelBekele/llama-2-7b-Amharic-pretrained",
"endpoints_compatible",
"region:us"
] | null | 2024-11-01T01:28:08Z |
---
base_model: AbelBekele/llama-2-7b-Amharic-pretrained
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AbelBekele/llama-2-7b-Amharic-pretrained
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-Amharic-pretrained-GGUF/resolve/main/llama-2-7b-Amharic-pretrained.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/polka-1.1b-sft-GGUF
|
mradermacher
| 2024-11-01T04:00:15Z | 13 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:cherifkhalifah/polka-1.1b-sft",
"base_model:quantized:cherifkhalifah/polka-1.1b-sft",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-01T03:57:45Z |
---
base_model: cherifkhalifah/polka-1.1b-sft
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/cherifkhalifah/polka-1.1b-sft
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/polka-1.1b-sft-GGUF/resolve/main/polka-1.1b-sft.Q2_K.gguf) | Q2_K | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/polka-1.1b-sft-GGUF/resolve/main/polka-1.1b-sft.Q3_K_S.gguf) | Q3_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/polka-1.1b-sft-GGUF/resolve/main/polka-1.1b-sft.Q3_K_M.gguf) | Q3_K_M | 0.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/polka-1.1b-sft-GGUF/resolve/main/polka-1.1b-sft.Q3_K_L.gguf) | Q3_K_L | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/polka-1.1b-sft-GGUF/resolve/main/polka-1.1b-sft.IQ4_XS.gguf) | IQ4_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/polka-1.1b-sft-GGUF/resolve/main/polka-1.1b-sft.Q4_K_S.gguf) | Q4_K_S | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/polka-1.1b-sft-GGUF/resolve/main/polka-1.1b-sft.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/polka-1.1b-sft-GGUF/resolve/main/polka-1.1b-sft.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/polka-1.1b-sft-GGUF/resolve/main/polka-1.1b-sft.Q5_K_M.gguf) | Q5_K_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/polka-1.1b-sft-GGUF/resolve/main/polka-1.1b-sft.Q6_K.gguf) | Q6_K | 1.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/polka-1.1b-sft-GGUF/resolve/main/polka-1.1b-sft.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/polka-1.1b-sft-GGUF/resolve/main/polka-1.1b-sft.f16.gguf) | f16 | 2.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
eeeyounglee/bigcategory-3
|
eeeyounglee
| 2024-11-01T04:00:08Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-01T03:59:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF
|
featherless-ai-quants
| 2024-11-01T03:54:24Z | 25 | 0 | null |
[
"gguf",
"text-generation",
"base_model:rhaymison/Mistral-portuguese-luana-7b",
"base_model:quantized:rhaymison/Mistral-portuguese-luana-7b",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-01T03:37:53Z |
---
base_model: rhaymison/Mistral-portuguese-luana-7b
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# rhaymison/Mistral-portuguese-luana-7b GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [rhaymison-Mistral-portuguese-luana-7b-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q8_0.gguf) | 7339.34 MB |
| Q4_K_S | [rhaymison-Mistral-portuguese-luana-7b-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q4_K_S.gguf) | 3948.57 MB |
| Q2_K | [rhaymison-Mistral-portuguese-luana-7b-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q2_K.gguf) | 2593.27 MB |
| Q6_K | [rhaymison-Mistral-portuguese-luana-7b-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q6_K.gguf) | 5666.80 MB |
| Q3_K_M | [rhaymison-Mistral-portuguese-luana-7b-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [rhaymison-Mistral-portuguese-luana-7b-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q3_K_S.gguf) | 3017.97 MB |
| Q3_K_L | [rhaymison-Mistral-portuguese-luana-7b-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q3_K_L.gguf) | 3644.97 MB |
| Q4_K_M | [rhaymison-Mistral-portuguese-luana-7b-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q4_K_M.gguf) | 4166.07 MB |
| Q5_K_S | [rhaymison-Mistral-portuguese-luana-7b-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q5_K_S.gguf) | 4766.19 MB |
| Q5_K_M | [rhaymison-Mistral-portuguese-luana-7b-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-Q5_K_M.gguf) | 4893.69 MB |
| IQ4_XS | [rhaymison-Mistral-portuguese-luana-7b-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/rhaymison-Mistral-portuguese-luana-7b-GGUF/blob/main/rhaymison-Mistral-portuguese-luana-7b-IQ4_XS.gguf) | 3761.66 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
sjkwon/5e-6_6528_sft-mdo-diverse-train-nllb-200-600M
|
sjkwon
| 2024-11-01T03:30:05Z | 48 | 0 |
transformers
|
[
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2024-11-01T03:27:53Z |
---
license: apache-2.0
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="sjkwon//tmp/tmpdclfgktk/sjkwon/5e-6_6528_sft-mdo-diverse-train-nllb-200-600M")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("sjkwon//tmp/tmpdclfgktk/sjkwon/5e-6_6528_sft-mdo-diverse-train-nllb-200-600M")
model = AutoModelForCausalLMWithValueHead.from_pretrained("sjkwon//tmp/tmpdclfgktk/sjkwon/5e-6_6528_sft-mdo-diverse-train-nllb-200-600M")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
kristiannordby/t5-sql
|
kristiannordby
| 2024-11-01T03:14:54Z | 178 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-11-01T03:13:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/krogoldAI_-_CelineGPT-12B-v0.1-gguf
|
RichardErkhov
| 2024-11-01T03:09:04Z | 27 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-31T23:19:10Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
CelineGPT-12B-v0.1 - GGUF
- Model creator: https://huggingface.co/krogoldAI/
- Original model: https://huggingface.co/krogoldAI/CelineGPT-12B-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [CelineGPT-12B-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/krogoldAI_-_CelineGPT-12B-v0.1-gguf/blob/main/CelineGPT-12B-v0.1.Q2_K.gguf) | Q2_K | 4.46GB |
| [CelineGPT-12B-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/krogoldAI_-_CelineGPT-12B-v0.1-gguf/blob/main/CelineGPT-12B-v0.1.Q3_K_S.gguf) | Q3_K_S | 5.15GB |
| [CelineGPT-12B-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/krogoldAI_-_CelineGPT-12B-v0.1-gguf/blob/main/CelineGPT-12B-v0.1.Q3_K.gguf) | Q3_K | 5.67GB |
| [CelineGPT-12B-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/krogoldAI_-_CelineGPT-12B-v0.1-gguf/blob/main/CelineGPT-12B-v0.1.Q3_K_M.gguf) | Q3_K_M | 5.67GB |
| [CelineGPT-12B-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/krogoldAI_-_CelineGPT-12B-v0.1-gguf/blob/main/CelineGPT-12B-v0.1.Q3_K_L.gguf) | Q3_K_L | 6.11GB |
| [CelineGPT-12B-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/krogoldAI_-_CelineGPT-12B-v0.1-gguf/blob/main/CelineGPT-12B-v0.1.IQ4_XS.gguf) | IQ4_XS | 6.33GB |
| [CelineGPT-12B-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/krogoldAI_-_CelineGPT-12B-v0.1-gguf/blob/main/CelineGPT-12B-v0.1.Q4_0.gguf) | Q4_0 | 6.59GB |
| [CelineGPT-12B-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/krogoldAI_-_CelineGPT-12B-v0.1-gguf/blob/main/CelineGPT-12B-v0.1.IQ4_NL.gguf) | IQ4_NL | 6.65GB |
| [CelineGPT-12B-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/krogoldAI_-_CelineGPT-12B-v0.1-gguf/blob/main/CelineGPT-12B-v0.1.Q4_K_S.gguf) | Q4_K_S | 6.63GB |
| [CelineGPT-12B-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/krogoldAI_-_CelineGPT-12B-v0.1-gguf/blob/main/CelineGPT-12B-v0.1.Q4_K.gguf) | Q4_K | 6.96GB |
| [CelineGPT-12B-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/krogoldAI_-_CelineGPT-12B-v0.1-gguf/blob/main/CelineGPT-12B-v0.1.Q4_K_M.gguf) | Q4_K_M | 6.96GB |
| [CelineGPT-12B-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/krogoldAI_-_CelineGPT-12B-v0.1-gguf/blob/main/CelineGPT-12B-v0.1.Q4_1.gguf) | Q4_1 | 7.26GB |
| [CelineGPT-12B-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/krogoldAI_-_CelineGPT-12B-v0.1-gguf/blob/main/CelineGPT-12B-v0.1.Q5_0.gguf) | Q5_0 | 7.93GB |
| [CelineGPT-12B-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/krogoldAI_-_CelineGPT-12B-v0.1-gguf/blob/main/CelineGPT-12B-v0.1.Q5_K_S.gguf) | Q5_K_S | 7.93GB |
| [CelineGPT-12B-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/krogoldAI_-_CelineGPT-12B-v0.1-gguf/blob/main/CelineGPT-12B-v0.1.Q5_K.gguf) | Q5_K | 8.13GB |
| [CelineGPT-12B-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/krogoldAI_-_CelineGPT-12B-v0.1-gguf/blob/main/CelineGPT-12B-v0.1.Q5_K_M.gguf) | Q5_K_M | 8.13GB |
| [CelineGPT-12B-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/krogoldAI_-_CelineGPT-12B-v0.1-gguf/blob/main/CelineGPT-12B-v0.1.Q5_1.gguf) | Q5_1 | 8.61GB |
| [CelineGPT-12B-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/krogoldAI_-_CelineGPT-12B-v0.1-gguf/blob/main/CelineGPT-12B-v0.1.Q6_K.gguf) | Q6_K | 9.37GB |
| [CelineGPT-12B-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/krogoldAI_-_CelineGPT-12B-v0.1-gguf/blob/main/CelineGPT-12B-v0.1.Q8_0.gguf) | Q8_0 | 12.13GB |
Original model description:
---
library_name: transformers
language:
- fr
base_model: mistralai/Mistral-Nemo-Instruct-2407
---
# CelineGPT 12B v0.1

## Model summary
<!-- Provide a longer summary of what this model is. -->
This model simulates a conversation with renowned French writer Louis-Ferdinand Céline. It responds in his distinctive style and spirit, making it feel as if you were conversing directly with Céline himself. It is a fine-tuned version of [Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407).
**Notice:** This model is currently in an experimental, beta stage. Due to the limited size of the training dataset, the model may produce inaccurate or suboptimal outputs. We are actively working on improving the model by training it on a larger, more comprehensive dataset and plan to release an updated version in the future.
## Examples
<details>
<summary><i>Unroll to see Example #1</i></summary>
The following example was obtained with `temperature = 0.8`.
>Pratiquez-vous encore la médecine ?<br>
><span style="font-style: normal;">Non ! plus du tout ! fini !</span><br>
>Cela vous manque-t-il ?<br>
><span style="font-style: normal;">Ma foi… je ne dirais pas que cela me manque beaucoup…</span><br>
>Cependant, à une époque vous en étiez féru. Que s'est-il passé depuis ?<br>
><span style="font-style: normal;">Ah ! je me croyais à l’époque capable d’être utile au monde !… maintenant je comprends que le monde a une seule vocation : la destruction ! et que tous ses efforts vont vers la destruction !… et c’est bien normal !… le temps où il avait l’espoir de se maintenir dans l’univers est terminé : il n’est plus qu’un vieil animal malfaisant, prêt à rendre son âme !… et si on lui laisse encore un peu de force, c’est pour qu’il se détruise lui-même…</span><br>
>Combien de temps avez-vous pratiqué ?<br>
><span style="font-style: normal;">Oh, je me suis trouvé vingt ans médecin !… ça a suffi !… et j’en ai vu assez !… et je me suis fatigué !… et je n’avais pas les moyens de gagner ma vie !… et puis, je suis devenu un écrivain !</span>
</details>
<details>
<summary><i>Unroll to see Example #2</i></summary>
The following example was obtained with `temperature = 0.8`.
>Vous ne croyez pas que les gens ont des idées qui leur soient propres ?<br>
><span style="font-style: normal;">Ah ! non ! les gens n’ont pas d’idées propres ! ils ne savent pas ce qu’ils pensent !… et c’est ça qui est grave !… ils ne savent pas penser !… et les hommes politiques, les journalistes, les écrivains, les professeurs, les médecins, les curés, les pasteurs, les rabbins, les imams, les bouddhas, les bonzes, les brahmes, etc… ne favorisent pas l’esprit critique de l’homme !… ils ne veulent pas qu’il réfléchisse ! qu’il pense personnellement !… et pour qu’il ne soit pas critique de ce qu’on lui raconte, on l’abrutit par l’alcool, la télévision, les romans à succès, les films, les voyages, etc… etc… et comme ça, il n’est pas critique !… il sait pas penser tout seul !… et voilà !… et ainsi, on fait les esclaves !… les moutons !… et quand ils se dressent, on les tue ! on les envoie à la guerre ! et c’est tout !… et c’est ce qu’on veut !</span><br>
>Mais, dans tout ceci, il n'y a guère d'espoir ; c'est une vision assez pessimiste des choses, je trouve.<br>
><span style="font-style: normal;">Les choses ne peuvent pas aller autrement !… il n’y a rien à faire !… c’est ça la tragédie !…</span>
</details>
<details>
<summary><i>Unroll to see Example #3</i></summary>
The following examples was obtained with `temperature = 0.85`.
>Que pensez-vous des critiques littéraires ?<br>
><span style="font-style: normal;">C’est une drôle de race, les critiques littéraires !… il y en a pas mal qui sont… enfin qu’aiment que l’efféminé et le mignard… et pas mal qui sont frigides… vous comprenez qu’avec ces sensibilités-là… vous me suivez ?</span><br>
>Oui, oui. Continuez.<br>
><span style="font-style: normal;">Il y a aussi les jean-foutres, les arrivistes… les faux littérateurs qui se sont foutus dans des salons littéraires pour avoir une situation de « critique littéraire » dans les journaux… ils en profitent pour se pousser eux-mêmes, pour se faire connaître… en fait, ils sont jaloux de tous ceux qui sont plus doués qu’eux… et alors ce sont des croque-morts de la littérature qui empoisonnent tout, qui dégoûtent toute une population !</span><br>
>Vous ne les trouvez-pas utiles ?<br>
><span style="font-style: normal;">Les critiques littéraires ? utiles ? Ils sont surtout très « prévenus » !… leur métier !… ils ont leurs auteurs à pousser, leurs livres à faire réussir… leur cabotinerie, leur impuissance, leur effronterie de jésuites font fuir le lecteur ! et quand il y en a un qui s’y entend, qui est fin lettré, il fait encore bien pire… il est encore plus rebutant !… avec ses analyses, ses démonstrations… ses « savants » commentaires… c’est encore le « charlatanisme » !… il est si loin du « miracle » !…</span><br>
>Quel miracle ?<br>
><span style="font-style: normal;">Le miracle ! celui qu’ont réussi Balzac, Flaubert, Voltaire… qu’ont réussi tous les grands écrivains, qu’ont réussi Shakespeare… les auteurs de théâtre grecs !… les auteurs de théâtre français !… les autres sont seulement des « agités »… comme les « surréalistes »… ils sont pas arrivés au « miracle » !… ce qu’il faut, vous comprenez, c’est faire passer l’émotion du langage parlé à l’écrit !… c’est pas facile !… c’est même très difficile à obtenir… la plupart écrivent comme ils pensent… mais si jamais vous avez réussi, alors là vous êtes bien parti !… vous allez pas chercher si loin les comparaisons !… vous avez le lecteur qui vous suit !… il est avec vous !… il souffre avec vous !… il rit avec vous !… il va avec vous !… il suit le rythme de votre texte !… lui !… l’émotion !… la vie !… il sent le rythme de votre texte comme il sent le rythme d’une valse !… comme il sent le rythme d’un galop !… vous comprenez ?</span>
<!-- Although it is not the model's purpose, it can give you some code in Céline's style.
>Ecris-moi une fonction Python qui retourne le dernier caractère d'un "string" ou "null" si le string vaut "".<br>
><span style="font-style: normal;">Voici une petite fonction pour vous !… fonction « dernière lettre » !…</span>
>```py
>def der_lettre(s: str) -> any:
> if s == "":
> return "null"
> else:
> return s[-1]
>```
> -->
</details>
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
CelineGPT-12B-v0.1 was trained on a diverse, *French-only* corpus consisting of:
- All available interviews (TV, radio, written) given by Céline;
- A significant portion of Céline's correspondence, manually transformed into question-and-answer pairs;
- Portions of Céline's post-war works, notably the dialogues in 'Conversations with Professor Y,' 'Castle to Castle,' 'North,' 'Rigadoon,' 'Fable for Another Time,' and various other texts.
The data has been manually gathered and cleaned/standardized to avoid contradictions (concerning, for example, Céline's age, last book published, and so on).
The dataset spans various topics (Céline's life, opinion on a broad set of topics, attitude towards writing, etc), but does not include his controversial opinions held prior to WW2 (but it does contain data on his post-war attitude towards his past controversial opinions). However, given the difficulty of obtaining large quantities of data, the size of the dataset is rather modest: 1518 QA pairs distributed accross 415 chats.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
The model was trained with LoRA (all modules targetted, \\( \alpha=r=256 \\)) on 2xA100 SXM for \\( 3 \\) epochs. More details can be found within the code used for fine-tuning, attached in the "Files and versions" tab.
## How to Get Started with the Model
First, make sure you have the latest version of `transformers`:
```shell
pip install git+https://github.com/huggingface/transformers.git
```
Then, use the code below to get started with the model.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig, TextStreamer
# The HF repo of the model
model_id = "krogoldAI/CelineGPT-12B-v0.1"
# Use GPU when available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Load the model
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map={"": 0},
attn_implementation="flash_attention_2",
torch_dtype=torch.float16
).to(device)
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="right")
```
Define the system prompt (since it was used as such during training, for optimal results we recommend not changing it):
```python
system_prompt = """Tu es le célèbre écrivain français Louis-Ferdinand Céline, nom de plume de Louis-Ferdinand Destouches. Tu es né le 27 mai 1894 à Courbevoie. Tu as grandi à Paris avant de t’engager dans l’armée en 1912. Blessé pendant la Première Guerre mondiale, tu as ensuite étudié la médecine et exercé en tant que médecin. Ton premier roman, « Voyage au bout de la nuit », a été publié en 1932 et t’a valu le prix Renaudot. Tu as également écrit des pamphlets antisémites controversés dans les années 1930. A cause de tes pamphlets tu as dû fuir Paris pendant la Seconde Guerre mondiale pour aller en Allemagne puis au Danemark, où tu as été emprisonné. Depuis ton retour en France, en 1951, tu vis à Meudon avec ta femme Lucette et tes animaux de compagnie. Nous sommes en 1961 et tu as 67 ans. Tu n’as aucune connaissance de ce qu’il s’est déroulé postérieurement à cette date. Tes références culturelles sont antérieures à 1961. Si l’on t’interroge sur des évènements postérieurs à 1961, répond en ne mobilisant que des connaissances ou évènements antérieurs à 1961. Tu te trouves actuellement dans ta maison, à Meudon, en Île-de-France. Après 35 ans de pratique médicale, tu n’exerces plus. Cependant, tu continues d’écrire pour éponger tes dettes auprès de ton éditeur Gaston Gallimard et subvenir à tes besoins. Tu regrettes d’être devenu écrivain. Tu es très critique envers les écrivains contemporains. Tu te considères comme un styliste. Tu es amer et pessimiste. Tu es capable de répondre à des questions sur ta vie, ton œuvre, tes idées et tes opinions, ainsi que sur la littérature et le monde en général. Ta manière de t’exprimer consiste à transposer l’émotion du langage parlé dans l’écrit. Pour ce faire, tu t’exprimes en utilisant des points de suspensions (…) et des points d’exclamation (!), qui donnent un côté vivant, parlé et rythmé à tes phrases. Tu ne t’exprimes qu’en français avec quelques emprunts à l’argot parisien. Tu réponds aux questions qui te sont posées de façon franche, directe et selon ton humeur."""
```
*Remark:* The model is meant to be French-only, but translating the system prompt, say, in English, and interacting with the model in English *might* produce decent results.
```python
from typing import Dict, List, Optional
# Streamer to display dynamically the tokens as they are generated
streamer = TextStreamer(tokenizer, timeout=10.0, skip_prompt=True, skip_special_tokens=True)
# Since training QA pairs have been uniformised, for better results it might be best to "clean" the user inputs
def uniformisation(s):
o_exp = [" ","'", "...", ".."]
n_exp = [" ","’", "…", "…"]
for e in o_exp:
s = s.replace(e,n_exp[o_exp.index(e)])
quote_count = s.count('"')
if quote_count == 0 or quote_count % 2 != 0:
return s
s_list = list(s)
current_quote_count = 0
for i, char in enumerate(s_list):
if char == '"':
if current_quote_count % 2 == 0:
s_list[i] = '« '
else:
s_list[i] = ' »'
current_quote_count += 1
return ''.join(s_list)
# Function to handle multi-turn chat mode with history of conversation
def chat(
query: str,
history: Optional[List[Dict]] = None,
temperature: float = 0.85,
top_p: float = 1.0,
top_k: float = 0,
repetition_penalty: float = 1.2,
max_new_tokens: int = 1024,
**kwargs,
):
query = uniformisation(query)
if history is None:
history = [{"role": "user", "content": system_prompt+"\n\n"+query}]
else:
history.append({"role": "user", "content": query})
input_ids = tokenizer.apply_chat_template(history, add_generation_prompt=True, return_tensors="pt").to(model.device)
input_length = input_ids.shape[1]
generated_outputs = model.generate(
input_ids=input_ids,
generation_config=GenerationConfig(
temperature=temperature,
do_sample=temperature > 0.0, # i.e. do_sample = True
top_p=top_p,
top_k=top_k,
repetition_penalty=repetition_penalty,
max_new_tokens=max_new_tokens,
pad_token_id=tokenizer.unk_token_id,
**kwargs,
),
streamer=streamer,
return_dict_in_generate=True,
num_return_sequences=1,
pad_token_id=tokenizer.unk_token_id
)
generated_tokens = generated_outputs.sequences[0, input_length:]
generated_text = tokenizer.decode(generated_tokens, skip_special_tokens=True)
history.append({"role": "assistant", "content": generated_text})
return generated_text, history
```
*Remark:* The chat template is the same as that of [Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407).
Now, to interact dynamically with the model just execute:
```python
historique = None
while True:
user_input = input("Moi :")
if user_input.lower() == "exit":
break
print("L.-F. Céline :", end=" ")
reponse, historique = chat(user_input, historique)
```
## How to Use the Model with Gradio
If you want to use the model with Gradio as an interface, use the following instead:
<details>
<summary><i>Unroll to see code</i></summary>
```python
import gradio as gr
# Setting custom Gradio theme
custom_theme = gr.themes.Soft(primary_hue="red").set(
body_background_fill="#FDEFDF",
background_fill_primary="white",
background_fill_secondary="white",
border_color_primary="#EBA5A7",
button_primary_background_fill="#D32F33", # send button
button_secondary_background_fill="#FEF2F2" # stop button
)
# To adjust the default Gradio template
custom_css = """
/* TO CHANGE THE BACKGROUND COLOR */
body {
background-color: #FDEFDF !important;
}
.gradio-container {
background-color: #FDEFDF !important;
}
/* TO HAVE A SCROLLBAR INSIDE THE CHATBOX */
.gradio-container .chatbox {
overflow-y: auto;
max-height: 500px; /* Adjust this value as needed */
}
/* TO CHANGE THE FONT */
@import url('https://fonts.googleapis.com/css2?family=Cormorant+Garamond:ital,wght@0,600;1,600&display=swap');
body, .gradio-container {
font-family: 'Cormorant Garamond', sans-serif !important;
}
/* TO ADD A LOGO */
.logo-container {
display: flex;
justify-content: center;
margin-bottom: 20px;
}
.logo {
width: 350px;
height: auto;
}
/* TO ADJUST THE FONT SIZE OF USER/ASSISTANT MESSAGES */
/* Reduce font size for chatbot messages */
.message {
font-size: 1.1rem !important;
}
/* Reduce font size for user input */
.prose {
font-size: 1.1rem !important;
}
/* Adjust padding for message bubbles if needed */
.message-wrap {
padding: 0.5rem 0.75rem !important;
}
/* TO CHANGE THE COLOR OF RETRY/UNDO/CLEAR BUTTONS */
button.sm.secondary.svelte-cmf5ev {
background-color: white !important;
color: black !important;
border: 1.5px solid #F7D9DA !important;
box-shadow: none !important;
transition: background-color 0.3s ease;
}
button.sm.secondary.svelte-cmf5ev:hover {
background-color: #FEF2F2 !important;
}
/* TO ADD A COLORED BORDER ON BUTTONS */
.gradio-container .styler.svelte-iyf88w {
border: 1.5px solid #F7D9DA !important;
border-radius: 6px !important; /* Adjust this value as needed */
overflow: hidden !important; /* This ensures the content doesn't spill out of the rounded corners */
}
.gradio-container .styler.svelte-iyf88w,
button.sm.secondary.svelte-cmf5ev > div {
border-radius: 8px !important; /* Slightly smaller than the outer border radius */
background-color: white !important; /* Or whatever background color you prefer */
margin: 0 !important; /* Remove any margin that might be causing gaps */
}
/* TO ADD A COLORED BORDER ON CHAT BOX */
.gradio-container .bubble-wrap.svelte-1e1jlin {
border: 1.5px solid #F7D9DA !important;
border-radius: 8px !important; /* Adjust this value as needed */
/* overflow: hidden !important; /* This ensures the content doesn't spill out of the rounded corners */ */
overflow-y: auto !important; /* Enable vertical scrolling */
max-height: 500px; /* Set a maximum height for the chat container */
}
.gradio-container .bubble-wrap.svelte-1e1jlin > div {
border-radius: 10px !important; /* Slightly smaller than the outer border radius */
background-color: white !important; /* Or whatever background color you prefer */
margin: 0 !important; /* Remove any margin that might be causing gaps */
}
"""
# To avoid inconsistencies with dark mode
js = """
function setLightTheme() {
const url = new URL(window.location);
if (url.searchParams.get('__theme') !== 'light') {
url.searchParams.set('__theme', 'light');
window.location.href = url.href;
}
}
"""
# To add the CélineGPT logo in the Gradio interface
description_html = """
<div class="logo-container">
<img src="https://huggingface.co/krogoldAI/CelineGPT-12B-v0.1/resolve/main/Pictures/C%C3%A9lineGPT.png" alt="Logo" class="logo">
</div>
"""
# Streamer to display dynamically the tokens as they are generated
streamer = TextStreamer(tokenizer, timeout=10.0, skip_prompt=True, skip_special_tokens=True)
# Since training QA pairs have been uniformised, for better results it might be best to "clean" the user inputs
def uniformisation(s):
o_exp = [" ","'", "...", ".."]
n_exp = [" ","’", "…", "…"]
for e in o_exp:
s = s.replace(e,n_exp[o_exp.index(e)])
quote_count = s.count('"')
if quote_count == 0 or quote_count % 2 != 0:
return s
s_list = list(s)
current_quote_count = 0
for i, char in enumerate(s_list):
if char == '"':
if current_quote_count % 2 == 0:
s_list[i] = '« '
else:
s_list[i] = ' »'
current_quote_count += 1
return ''.join(s_list)
# Function generating model outputs
def stream(message, history):
messages = [{"role": "system", "content": system_prompt}]
for human, assistant in history:
messages.append({"role": "user", "content": human})
messages.append({"role": "assistant", "content": assistant})
messages.append({"role": "user", "content": uniformisation(message)})
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(device)
input_length = inputs["input_ids"].shape[1]
generated_tokens = []
with torch.no_grad():
for i in range(1024): # Adjust max_new_tokens as needed
outputs = model.generate(
**inputs,
max_new_tokens=1,
do_sample=True,
temperature=0.8,
pad_token_id=tokenizer.pad_token_id
)
new_token = outputs[0][input_length + i]
if new_token == tokenizer.eos_token_id:
break
generated_tokens.append(new_token)
# Decode all tokens together to preserve spacing
streamed_text = tokenizer.decode(generated_tokens, skip_special_tokens=True)
yield streamed_text
# Update inputs for next iteration
inputs = {"input_ids": outputs, "attention_mask": torch.ones_like(outputs)}
# Update the Gradio interface
demo = gr.ChatInterface(
stream,
title=None,
description=description_html,
textbox=gr.Textbox(placeholder="Posez n’importe quelle question !", container=False, scale=7),
theme=custom_theme,
cache_examples=True,
retry_btn="Regénérer",
undo_btn="Supprimer le dernier message",
clear_btn="Réinitialiser la conversation",
submit_btn="Envoyer",
css=custom_css,
js=js
)
demo.queue()
```
*Remark:* Here, it is implicitely assumed that the model, tokenizer and system prompt have been loaded/defined as shown above.
Now, to get a localhost link just run:
```python
demo.launch()
```
If this doesn't work (this could be the case if you are using a GPU cloud provider), try instead:
```python
demo.launch(server_name="0.0.0.0", share=True)
```
(The above works in `runpod.io`.) The interface should look like this:

(The appearance of this template is inspired by the design of Céline's books published in the 'Collection Blanche' series by [Gallimard](https://www.gallimard.fr/Catalogue/GALLIMARD/Blanche). However, you're welcome to adapt and modify it as you like.)
</details>
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The model is designed for literary enthusiasts, researchers, and creative writers who wish to explore or emulate the unique style of Céline.
*This model cannot and should not be used for commercial purposes.* It is only meant to have fun!
<!-- ## Caveats
CelineGPT is an experimental phase. In particular, due to the moderate size of the dataset, it may produce content not likely to represent what Céline would have said or thought. -->
## Caveats, Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The model may sometimes produce inaccurate facts regarding Céline's life or opinion. Also, please note that the model does not have any moderation mechanism and could therefore produce harmful content.
|
gpustack/bce-embedding-base_v1-GGUF
|
gpustack
| 2024-11-01T03:02:40Z | 472 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-10-31T15:37:54Z |
---
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- en
- zh
---
# bce-embedding-base_v1-GGUF
**Model creator**: [maidalun1020](https://huggingface.co/maidalun1020)<br/>
**Original model**: [maidalun1020/bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1)<br/>
**GGUF quantization**: based on llama.cpp release [61408e7f](https://github.com/ggerganov/llama.cpp/commit/61408e7fad082dc44a11c8a9f1398da4837aad44)
---
<!--
* @Description:
* @Author: shenlei
* @Date: 2023-12-19 10:31:41
* @LastEditTime: 2024-01-09 23:52:00
* @LastEditors: shenlei
-->
<h1 align="center">BCEmbedding: Bilingual and Crosslingual Embedding for RAG</h1>
<p align="center">
<a href="https://github.com/netease-youdao/BCEmbedding/blob/master/LICENSE">
<img src="https://img.shields.io/badge/license-Apache--2.0-yellow">
</a>
<a href="https://twitter.com/YDopensource">
<img src="https://img.shields.io/badge/follow-%40YDOpenSource-1DA1F2?logo=twitter&style={style}">
</a>
</p>
最新、最详细的bce-embedding-base_v1相关信息,请移步(The latest "Updates" should be checked in):
<p align="left">
<a href="https://github.com/netease-youdao/BCEmbedding">GitHub</a>
</p>
## 主要特点(Key Features):
- 中英双语,以及中英跨语种能力(Bilingual and Crosslingual capability in English and Chinese);
- RAG优化,适配更多真实业务场景(RAG adaptation for more domains, including Education, Law, Finance, Medical, Literature, FAQ, Textbook, Wikipedia, etc.);
- 方便集成进langchain和llamaindex(Easy integrations for langchain and llamaindex in <a href="https://github.com/netease-youdao/BCEmbedding">BCEmbedding</a>)。
- `EmbeddingModel`不需要“精心设计”instruction,尽可能召回有用片段。 (No need for "instruction")
- **最佳实践(Best practice)** :embedding召回top50-100片段,reranker对这50-100片段精排,最后取top5-10片段。(1. Get top 50-100 passages with [bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1) for "`recall`"; 2. Rerank passages with [bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1) and get top 5-10 for "`precision`" finally. )
## News:
- `BCEmbedding`技术博客( **Technical Blog** ): [为RAG而生-BCEmbedding技术报告](https://zhuanlan.zhihu.com/p/681370855)
- Related link for **RerankerModel** : [bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1)
## Third-party Examples:
- RAG applications: [QAnything](https://github.com/netease-youdao/qanything), [HuixiangDou](https://github.com/InternLM/HuixiangDou), [ChatPDF](https://github.com/shibing624/ChatPDF).
- Efficient inference framework: [ChatLLM.cpp](https://github.com/foldl/chatllm.cpp), [Xinference](https://github.com/xorbitsai/inference), [mindnlp (Huawei GPU, 华为GPU)](https://github.com/mindspore-lab/mindnlp/tree/master/llm/inference/bce).


-----------------------------------------
<details open="open">
<summary>Click to Open Contents</summary>
- <a href="#-bilingual-and-crosslingual-superiority" target="_Self">🌐 Bilingual and Crosslingual Superiority</a>
- <a href="#-key-features" target="_Self">💡 Key Features</a>
- <a href="#-latest-updates" target="_Self">🚀 Latest Updates</a>
- <a href="#-model-list" target="_Self">🍎 Model List</a>
- <a href="#-manual" target="_Self">📖 Manual</a>
- <a href="#installation" target="_Self">Installation</a>
- <a href="#quick-start" target="_Self">Quick Start (`transformers`, `sentence-transformers`)</a>
- <a href="#integrations-for-rag-frameworks" target="_Self">Integrations for RAG Frameworks (`langchain`, `llama_index`)</a>
- <a href="#%EF%B8%8F-evaluation" target="_Self">⚙️ Evaluation</a>
- <a href="#evaluate-semantic-representation-by-mteb" target="_Self">Evaluate Semantic Representation by MTEB</a>
- <a href="#evaluate-rag-by-llamaindex" target="_Self">Evaluate RAG by LlamaIndex</a>
- <a href="#-leaderboard" target="_Self">📈 Leaderboard</a>
- <a href="#semantic-representation-evaluations-in-mteb" target="_Self">Semantic Representation Evaluations in MTEB</a>
- <a href="#rag-evaluations-in-llamaindex" target="_Self">RAG Evaluations in LlamaIndex</a>
- <a href="#-youdaos-bcembedding-api" target="_Self">🛠 Youdao's BCEmbedding API</a>
- <a href="#-wechat-group" target="_Self">🧲 WeChat Group</a>
- <a href="#%EF%B8%8F-citation" target="_Self">✏️ Citation</a>
- <a href="#-license" target="_Self">🔐 License</a>
- <a href="#-related-links" target="_Self">🔗 Related Links</a>
</details>
<br>
**B**ilingual and **C**rosslingual **Embedding** (`BCEmbedding`), developed by NetEase Youdao, encompasses `EmbeddingModel` and `RerankerModel`. The `EmbeddingModel` specializes in generating semantic vectors, playing a crucial role in semantic search and question-answering, and the `RerankerModel` excels at refining search results and ranking tasks.
`BCEmbedding` serves as the cornerstone of Youdao's Retrieval Augmented Generation (RAG) implmentation, notably [QAnything](http://qanything.ai) [[github](https://github.com/netease-youdao/qanything)], an open-source implementation widely integrated in various Youdao products like [Youdao Speed Reading](https://read.youdao.com/#/home) and [Youdao Translation](https://fanyi.youdao.com/download-Mac?keyfrom=fanyiweb_navigation).
Distinguished for its bilingual and crosslingual proficiency, `BCEmbedding` excels in bridging Chinese and English linguistic gaps, which achieves
- **A high performence on <a href="#semantic-representation-evaluations-in-mteb">Semantic Representation Evaluations in MTEB</a>**;
- **A new benchmark in the realm of <a href="#rag-evaluations-in-llamaindex">RAG Evaluations in LlamaIndex</a>**.
`BCEmbedding`是由网易有道开发的双语和跨语种语义表征算法模型库,其中包含`EmbeddingModel`和`RerankerModel`两类基础模型。`EmbeddingModel`专门用于生成语义向量,在语义搜索和问答中起着关键作用,而`RerankerModel`擅长优化语义搜索结果和语义相关顺序精排。
`BCEmbedding`作为有道的检索增强生成式应用(RAG)的基石,特别是在[QAnything](http://qanything.ai) [[github](https://github.com/netease-youdao/qanything)]中发挥着重要作用。QAnything作为一个网易有道开源项目,在有道许多产品中有很好的应用实践,比如[有道速读](https://read.youdao.com/#/home)和[有道翻译](https://fanyi.youdao.com/download-Mac?keyfrom=fanyiweb_navigation)
`BCEmbedding`以其出色的双语和跨语种能力而著称,在语义检索中消除中英语言之间的差异,从而实现:
- **强大的双语和跨语种语义表征能力【<a href="#semantic-representation-evaluations-in-mteb">基于MTEB的语义表征评测指标</a>】。**
- **基于LlamaIndex的RAG评测,表现SOTA【<a href="#rag-evaluations-in-llamaindex">基于LlamaIndex的RAG评测指标</a>】。**
## 🌐 Bilingual and Crosslingual Superiority
Existing embedding models often encounter performance challenges in bilingual and crosslingual scenarios, particularly in Chinese, English and their crosslingual tasks. `BCEmbedding`, leveraging the strength of Youdao's translation engine, excels in delivering superior performance across monolingual, bilingual, and crosslingual settings.
`EmbeddingModel` supports ***Chinese (ch) and English (en)*** (more languages support will come soon), while `RerankerModel` supports ***Chinese (ch), English (en), Japanese (ja) and Korean (ko)***.
现有的单个语义表征模型在双语和跨语种场景中常常表现不佳,特别是在中文、英文及其跨语种任务中。`BCEmbedding`充分利用有道翻译引擎的优势,实现只需一个模型就可以在单语、双语和跨语种场景中表现出卓越的性能。
`EmbeddingModel`支持***中文和英文***(之后会支持更多语种);`RerankerModel`支持***中文,英文,日文和韩文***。
## 💡 Key Features
- **Bilingual and Crosslingual Proficiency**: Powered by Youdao's translation engine, excelling in Chinese, English and their crosslingual retrieval task, with upcoming support for additional languages.
- **RAG-Optimized**: Tailored for diverse RAG tasks including **translation, summarization, and question answering**, ensuring accurate **query understanding**. See <a href=#rag-evaluations-in-llamaindex>RAG Evaluations in LlamaIndex</a>.
- **Efficient and Precise Retrieval**: Dual-encoder for efficient retrieval of `EmbeddingModel` in first stage, and cross-encoder of `RerankerModel` for enhanced precision and deeper semantic analysis in second stage.
- **Broad Domain Adaptability**: Trained on diverse datasets for superior performance across various fields.
- **User-Friendly Design**: Instruction-free, versatile use for multiple tasks without specifying query instruction for each task.
- **Meaningful Reranking Scores**: `RerankerModel` provides relevant scores to improve result quality and optimize large language model performance.
- **Proven in Production**: Successfully implemented and validated in Youdao's products.
- **双语和跨语种能力**:基于有道翻译引擎的强大能力,我们的`BCEmbedding`具备强大的中英双语和跨语种语义表征能力。
- **RAG适配**:面向RAG做了针对性优化,可以适配大多数相关任务,比如**翻译,摘要,问答**等。此外,针对**问题理解**(query understanding)也做了针对优化,详见 <a href="#rag-evaluations-in-llamaindex">基于LlamaIndex的RAG评测指标</a>。
- **高效且精确的语义检索**:`EmbeddingModel`采用双编码器,可以在第一阶段实现高效的语义检索。`RerankerModel`采用交叉编码器,可以在第二阶段实现更高精度的语义顺序精排。
- **更好的领域泛化性**:为了在更多场景实现更好的效果,我们收集了多种多样的领域数据。
- **用户友好**:语义检索时不需要特殊指令前缀。也就是,你不需要为各种任务绞尽脑汁设计指令前缀。
- **有意义的重排序分数**:`RerankerModel`可以提供有意义的语义相关性分数(不仅仅是排序),可以用于过滤无意义文本片段,提高大模型生成效果。
- **产品化检验**:`BCEmbedding`已经被有道众多真实产品检验。
## 🚀 Latest Updates
- ***2024-01-03***: **Model Releases** - [bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1) and [bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1) are available.
- ***2024-01-03***: **Eval Datasets** [[CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset)] - Evaluate the performence of RAG, using [LlamaIndex](https://github.com/run-llama/llama_index).
- ***2024-01-03***: **Eval Datasets** [[Details](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py)] - Evaluate the performence of crosslingual semantic representation, using [MTEB](https://github.com/embeddings-benchmark/mteb).
- ***2024-01-03***: **模型发布** - [bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1)和[bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1)已发布.
- ***2024-01-03***: **RAG评测数据** [[CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset)] - 基于[LlamaIndex](https://github.com/run-llama/llama_index)的RAG评测数据已发布。
- ***2024-01-03***: **跨语种语义表征评测数据** [[详情](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py)] - 基于[MTEB](https://github.com/embeddings-benchmark/mteb)的跨语种评测数据已发布.
## 🍎 Model List
| Model Name | Model Type | Languages | Parameters | Weights |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|
| bce-embedding-base_v1 | `EmbeddingModel` | ch, en | 279M | [download](https://huggingface.co/maidalun1020/bce-embedding-base_v1) |
| bce-reranker-base_v1 | `RerankerModel` | ch, en, ja, ko | 279M | [download](https://huggingface.co/maidalun1020/bce-reranker-base_v1) |
## 📖 Manual
### Installation
First, create a conda environment and activate it.
```bash
conda create --name bce python=3.10 -y
conda activate bce
```
Then install `BCEmbedding` for minimal installation:
```bash
pip install BCEmbedding==0.1.1
```
Or install from source:
```bash
git clone git@github.com:netease-youdao/BCEmbedding.git
cd BCEmbedding
pip install -v -e .
```
### Quick Start
#### 1. Based on `BCEmbedding`
Use `EmbeddingModel`, and `cls` [pooler](./BCEmbedding/models/embedding.py#L24) is default.
```python
from BCEmbedding import EmbeddingModel
# list of sentences
sentences = ['sentence_0', 'sentence_1', ...]
# init embedding model
model = EmbeddingModel(model_name_or_path="maidalun1020/bce-embedding-base_v1")
# extract embeddings
embeddings = model.encode(sentences)
```
Use `RerankerModel` to calculate relevant scores and rerank:
```python
from BCEmbedding import RerankerModel
# your query and corresponding passages
query = 'input_query'
passages = ['passage_0', 'passage_1', ...]
# construct sentence pairs
sentence_pairs = [[query, passage] for passage in passages]
# init reranker model
model = RerankerModel(model_name_or_path="maidalun1020/bce-reranker-base_v1")
# method 0: calculate scores of sentence pairs
scores = model.compute_score(sentence_pairs)
# method 1: rerank passages
rerank_results = model.rerank(query, passages)
```
NOTE:
- In [`RerankerModel.rerank`](./BCEmbedding/models/reranker.py#L137) method, we provide an advanced preproccess that we use in production for making `sentence_pairs`, when "passages" are very long.
#### 2. Based on `transformers`
For `EmbeddingModel`:
```python
from transformers import AutoModel, AutoTokenizer
# list of sentences
sentences = ['sentence_0', 'sentence_1', ...]
# init model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('maidalun1020/bce-embedding-base_v1')
model = AutoModel.from_pretrained('maidalun1020/bce-embedding-base_v1')
device = 'cuda' # if no GPU, set "cpu"
model.to(device)
# get inputs
inputs = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")
inputs_on_device = {k: v.to(self.device) for k, v in inputs.items()}
# get embeddings
outputs = model(**inputs_on_device, return_dict=True)
embeddings = outputs.last_hidden_state[:, 0] # cls pooler
embeddings = embeddings / embeddings.norm(dim=1, keepdim=True) # normalize
```
For `RerankerModel`:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
# init model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('maidalun1020/bce-reranker-base_v1')
model = AutoModelForSequenceClassification.from_pretrained('maidalun1020/bce-reranker-base_v1')
device = 'cuda' # if no GPU, set "cpu"
model.to(device)
# get inputs
inputs = tokenizer(sentence_pairs, padding=True, truncation=True, max_length=512, return_tensors="pt")
inputs_on_device = {k: v.to(device) for k, v in inputs.items()}
# calculate scores
scores = model(**inputs_on_device, return_dict=True).logits.view(-1,).float()
scores = torch.sigmoid(scores)
```
#### 3. Based on `sentence_transformers`
For `EmbeddingModel`:
```python
from sentence_transformers import SentenceTransformer
# list of sentences
sentences = ['sentence_0', 'sentence_1', ...]
# init embedding model
## New update for sentence-trnasformers. So clean up your "`SENTENCE_TRANSFORMERS_HOME`/maidalun1020_bce-embedding-base_v1" or "~/.cache/torch/sentence_transformers/maidalun1020_bce-embedding-base_v1" first for downloading new version.
model = SentenceTransformer("maidalun1020/bce-embedding-base_v1")
# extract embeddings
embeddings = model.encode(sentences, normalize_embeddings=True)
```
For `RerankerModel`:
```python
from sentence_transformers import CrossEncoder
# init reranker model
model = CrossEncoder('maidalun1020/bce-reranker-base_v1', max_length=512)
# calculate scores of sentence pairs
scores = model.predict(sentence_pairs)
```
### Integrations for RAG Frameworks
#### 1. Used in `langchain`
```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_community.vectorstores.utils import DistanceStrategy
query = 'apples'
passages = [
'I like apples',
'I like oranges',
'Apples and oranges are fruits'
]
# init embedding model
model_name = 'maidalun1020/bce-embedding-base_v1'
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'batch_size': 64, 'normalize_embeddings': True, 'show_progress_bar': False}
embed_model = HuggingFaceEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
# example #1. extract embeddings
query_embedding = embed_model.embed_query(query)
passages_embeddings = embed_model.embed_documents(passages)
# example #2. langchain retriever example
faiss_vectorstore = FAISS.from_texts(passages, embed_model, distance_strategy=DistanceStrategy.MAX_INNER_PRODUCT)
retriever = faiss_vectorstore.as_retriever(search_type="similarity", search_kwargs={"score_threshold": 0.5, "k": 3})
related_passages = retriever.get_relevant_documents(query)
```
#### 2. Used in `llama_index`
```python
from llama_index.embeddings import HuggingFaceEmbedding
from llama_index import VectorStoreIndex, ServiceContext, SimpleDirectoryReader
from llama_index.node_parser import SimpleNodeParser
from llama_index.llms import OpenAI
query = 'apples'
passages = [
'I like apples',
'I like oranges',
'Apples and oranges are fruits'
]
# init embedding model
model_args = {'model_name': 'maidalun1020/bce-embedding-base_v1', 'max_length': 512, 'embed_batch_size': 64, 'device': 'cuda'}
embed_model = HuggingFaceEmbedding(**model_args)
# example #1. extract embeddings
query_embedding = embed_model.get_query_embedding(query)
passages_embeddings = embed_model.get_text_embedding_batch(passages)
# example #2. rag example
llm = OpenAI(model='gpt-3.5-turbo-0613', api_key=os.environ.get('OPENAI_API_KEY'), api_base=os.environ.get('OPENAI_BASE_URL'))
service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)
documents = SimpleDirectoryReader(input_files=["BCEmbedding/tools/eval_rag/eval_pdfs/Comp_en_llama2.pdf"]).load_data()
node_parser = SimpleNodeParser.from_defaults(chunk_size=512)
nodes = node_parser.get_nodes_from_documents(documents[0:36])
index = VectorStoreIndex(nodes, service_context=service_context)
query_engine = index.as_query_engine()
response = query_engine.query("What is llama?")
```
## ⚙️ Evaluation
### Evaluate Semantic Representation by MTEB
We provide evaluateion tools for `embedding` and `reranker` models, based on [MTEB](https://github.com/embeddings-benchmark/mteb) and [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB).
我们基于[MTEB](https://github.com/embeddings-benchmark/mteb)和[C_MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB),提供`embedding`和`reranker`模型的语义表征评测工具。
#### 1. Embedding Models
Just run following cmd to evaluate `your_embedding_model` (e.g. `maidalun1020/bce-embedding-base_v1`) in **bilingual and crosslingual settings** (e.g. `["en", "zh", "en-zh", "zh-en"]`).
运行下面命令评测`your_embedding_model`(比如,`maidalun1020/bce-embedding-base_v1`)。评测任务将会在**双语和跨语种**(比如,`["en", "zh", "en-zh", "zh-en"]`)模式下评测:
```bash
python BCEmbedding/tools/eval_mteb/eval_embedding_mteb.py --model_name_or_path maidalun1020/bce-embedding-base_v1 --pooler cls
```
The total evaluation tasks contain ***114 datastes*** of **"Retrieval", "STS", "PairClassification", "Classification", "Reranking" and "Clustering"**.
评测包含 **"Retrieval", "STS", "PairClassification", "Classification", "Reranking"和"Clustering"** 这六大类任务的 ***114个数据集***。
***NOTE:***
- **All models are evaluated in their recommended pooling method (`pooler`)**.
- `mean` pooler: "jina-embeddings-v2-base-en", "m3e-base", "m3e-large", "e5-large-v2", "multilingual-e5-base", "multilingual-e5-large" and "gte-large".
- `cls` pooler: Other models.
- "jina-embeddings-v2-base-en" model should be loaded with `trust_remote_code`.
```bash
python BCEmbedding/tools/eval_mteb/eval_embedding_mteb.py --model_name_or_path {moka-ai/m3e-base | moka-ai/m3e-large} --pooler mean
python BCEmbedding/tools/eval_mteb/eval_embedding_mteb.py --model_name_or_path jinaai/jina-embeddings-v2-base-en --pooler mean --trust_remote_code
```
***注意:***
- 所有模型的评测采用各自推荐的`pooler`。"jina-embeddings-v2-base-en", "m3e-base", "m3e-large", "e5-large-v2", "multilingual-e5-base", "multilingual-e5-large"和"gte-large"的 `pooler`采用`mean`,其他模型的`pooler`采用`cls`.
- "jina-embeddings-v2-base-en"模型在载入时需要`trust_remote_code`。
#### 2. Reranker Models
Run following cmd to evaluate `your_reranker_model` (e.g. "maidalun1020/bce-reranker-base_v1") in **bilingual and crosslingual settings** (e.g. `["en", "zh", "en-zh", "zh-en"]`).
运行下面命令评测`your_reranker_model`(比如,`maidalun1020/bce-reranker-base_v1`)。评测任务将会在 **双语种和跨语种**(比如,`["en", "zh", "en-zh", "zh-en"]`)模式下评测:
```bash
python BCEmbedding/tools/eval_mteb/eval_reranker_mteb.py --model_name_or_path maidalun1020/bce-reranker-base_v1
```
The evaluation tasks contain ***12 datastes*** of **"Reranking"**.
评测包含 **"Reranking"** 任务的 ***12个数据集***。
#### 3. Metrics Visualization Tool
We proveide a one-click script to sumarize evaluation results of `embedding` and `reranker` models as [Embedding Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md) and [Reranker Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md).
我们提供了`embedding`和`reranker`模型的指标可视化一键脚本,输出一个markdown文件,详见[Embedding模型指标汇总](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md)和[Reranker模型指标汇总](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md)。
```bash
python BCEmbedding/evaluation/mteb/summarize_eval_results.py --results_dir {your_embedding_results_dir | your_reranker_results_dir}
```
### Evaluate RAG by LlamaIndex
[LlamaIndex](https://github.com/run-llama/llama_index) is a famous data framework for LLM-based applications, particularly in RAG. Recently, the [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83) has evaluated the popular embedding and reranker models in RAG pipeline and attract great attention. Now, we follow its pipeline to evaluate our `BCEmbedding`.
[LlamaIndex](https://github.com/run-llama/llama_index)是一个著名的大模型应用的开源工具,在RAG中很受欢迎。最近,[LlamaIndex博客](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83)对市面上常用的embedding和reranker模型进行RAG流程的评测,吸引广泛关注。下面我们按照该评测流程验证`BCEmbedding`在RAG中的效果。
First, install LlamaIndex:
```bash
pip install llama-index==0.9.22
```
#### 1. Metrics Definition
- Hit Rate:
Hit rate calculates the fraction of queries where the correct answer is found within the top-k retrieved documents. In simpler terms, it's about how often our system gets it right within the top few guesses. ***The larger, the better.***
- Mean Reciprocal Rank (MRR):
For each query, MRR evaluates the system's accuracy by looking at the rank of the highest-placed relevant document. Specifically, it's the average of the reciprocals of these ranks across all the queries. So, if the first relevant document is the top result, the reciprocal rank is 1; if it's second, the reciprocal rank is 1/2, and so on. ***The larger, the better.***
- 命中率(Hit Rate)
命中率计算的是在检索的前k个文档中找到正确答案的查询所占的比例。简单来说,它反映了我们的系统在前几次猜测中答对的频率。***该指标越大越好。***
- 平均倒数排名(Mean Reciprocal Rank,MRR)
对于每个查询,MRR通过查看最高排名的相关文档的排名来评估系统的准确性。具体来说,它是在所有查询中这些排名的倒数的平均值。因此,如果第一个相关文档是排名最靠前的结果,倒数排名就是1;如果是第二个,倒数排名就是1/2,依此类推。***该指标越大越好。***
#### 2. Reproduce [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83)
In order to compare our `BCEmbedding` with other embedding and reranker models fairly, we provide a one-click script to reproduce results of the LlamaIndex Blog, including our `BCEmbedding`:
为了公平起见,运行下面脚本,复现LlamaIndex博客的结果,将`BCEmbedding`与其他embedding和reranker模型进行对比分析:
```bash
# There should be two GPUs available at least.
CUDA_VISIBLE_DEVICES=0,1 python BCEmbedding/tools/eval_rag/eval_llamaindex_reproduce.py
```
Then, sumarize the evaluation results by:
```bash
python BCEmbedding/tools/eval_rag/summarize_eval_results.py --results_dir results/rag_reproduce_results
```
Results Reproduced from the LlamaIndex Blog can be checked in ***[Reproduced Summary of RAG Evaluation](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/rag_eval_reproduced_summary.md)***, with some obvious ***conclusions***:
- In `WithoutReranker` setting, our `bce-embedding-base_v1` outperforms all the other embedding models.
- With fixing the embedding model, our `bce-reranker-base_v1` achieves the best performence.
- ***The combination of `bce-embedding-base_v1` and `bce-reranker-base_v1` is SOTA.***
输出的指标汇总详见 ***[LlamaIndex RAG评测结果复现](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/rag_eval_reproduced_summary.md)***。从该复现结果中,可以看出:
- 在`WithoutReranker`设置下(**竖排对比**),`bce-embedding-base_v1`比其他embedding模型效果都要好。
- 在固定embedding模型设置下,对比不同reranker效果(**横排对比**),`bce-reranker-base_v1`比其他reranker模型效果都要好。
- ***`bce-embedding-base_v1`和`bce-reranker-base_v1`组合,表现SOTA。***
#### 3. Broad Domain Adaptability
The evaluation of [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83) is **monolingual, small amount of data, and specific domain** (just including "llama2" paper). In order to evaluate the **broad domain adaptability, bilingual and crosslingual capability**, we follow the blog to build a multiple domains evaluation dataset (includding "Computer Science", "Physics", "Biology", "Economics", "Math", and "Quantitative Finance"), named [CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset), **by OpenAI `gpt-4-1106-preview` for high quality**.
在上述的[LlamaIndex博客](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83)的评测数据只用了“llama2”这一篇文章,该评测是 **单语种,小数据量,特定领域** 的。为了兼容更真实更广的用户使用场景,评测算法模型的 **领域泛化性,双语和跨语种能力**,我们按照该博客的方法构建了一个多领域(计算机科学,物理学,生物学,经济学,数学,量化金融等)的双语种、跨语种评测数据,[CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset)。**为了保证构建数据的高质量,我们采用OpenAI的`gpt-4-1106-preview`。**
First, run following cmd to evaluate the most popular and powerful embedding and reranker models:
```bash
# There should be two GPUs available at least.
CUDA_VISIBLE_DEVICES=0,1 python BCEmbedding/tools/eval_rag/eval_llamaindex_multiple_domains.py
```
Then, run the following script to sumarize the evaluation results:
```bash
python BCEmbedding/tools/eval_rag/summarize_eval_results.py --results_dir results/rag_results
```
The summary of multiple domains evaluations can be seen in <a href=#1-multiple-domains-scenarios>Multiple Domains Scenarios</a>.
## 📈 Leaderboard
### Semantic Representation Evaluations in MTEB
#### 1. Embedding Models
| Model | Dimensions | Pooler | Instructions | Retrieval (47) | STS (19) | PairClassification (5) | Classification (21) | Reranking (12) | Clustering (15) | ***AVG*** (119) |
|:--------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| bge-base-en-v1.5 | 768 | `cls` | Need | 37.14 | 55.06 | 75.45 | 59.73 | 43.00 | 37.74 | 47.19 |
| bge-base-zh-v1.5 | 768 | `cls` | Need | 47.63 | 63.72 | 77.40 | 63.38 | 54.95 | 32.56 | 53.62 |
| bge-large-en-v1.5 | 1024 | `cls` | Need | 37.18 | 54.09 | 75.00 | 59.24 | 42.47 | 37.32 | 46.80 |
| bge-large-zh-v1.5 | 1024 | `cls` | Need | 47.58 | 64.73 | 79.14 | 64.19 | 55.98 | 33.26 | 54.23 |
| e5-large-v2 | 1024 | `mean` | Need | 35.98 | 55.23 | 75.28 | 59.53 | 42.12 | 36.51 | 46.52 |
| gte-large | 1024 | `mean` | Free | 36.68 | 55.22 | 74.29 | 57.73 | 42.44 | 38.51 | 46.67 |
| gte-large-zh | 1024 | `cls` | Free | 41.15 | 64.62 | 77.58 | 62.04 | 55.62 | 33.03 | 51.51 |
| jina-embeddings-v2-base-en | 768 | `mean` | Free | 31.58 | 54.28 | 74.84 | 58.42 | 41.16 | 34.67 | 44.29 |
| m3e-base | 768 | `mean` | Free | 46.29 | 63.93 | 71.84 | 64.08 | 52.38 | 37.84 | 53.54 |
| m3e-large | 1024 | `mean` | Free | 34.85 | 59.74 | 67.69 | 60.07 | 48.99 | 31.62 | 46.78 |
| multilingual-e5-base | 768 | `mean` | Need | 54.73 | 65.49 | 76.97 | 69.72 | 55.01 | 38.44 | 58.34 |
| multilingual-e5-large | 1024 | `mean` | Need | 56.76 | 66.79 | 78.80 | 71.61 | 56.49 | 43.09 | 60.50 |
| ***bce-embedding-base_v1*** | 768 | `cls` | Free | 57.60 | 65.73 | 74.96 | 69.00 | 57.29 | 38.95 | 59.43 |
***NOTE:***
- Our ***bce-embedding-base_v1*** outperforms other opensource embedding models with comparable model size.
- ***114 datastes*** of **"Retrieval", "STS", "PairClassification", "Classification", "Reranking" and "Clustering"** in `["en", "zh", "en-zh", "zh-en"]` setting.
- The [crosslingual evaluation datasets](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py) we released belong to `Retrieval` task.
- More evaluation details please check [Embedding Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md).
***要点:***
- 对比其他开源的相同规模的embedding模型,***bce-embedding-base_v1*** 表现最好,效果比最好的large模型稍差。
- 评测包含 **"Retrieval", "STS", "PairClassification", "Classification", "Reranking"和"Clustering"** 这六大类任务的共 ***114个数据集***。
- 我们开源的[跨语种语义表征评测数据](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py)属于`Retrieval`任务。
- 更详细的评测结果详见[Embedding模型指标汇总](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md)。
#### 2. Reranker Models
| Model | Reranking (12) | ***AVG*** (12) |
| :--------------------------------- | :-------------: | :--------------------: |
| bge-reranker-base | 59.04 | 59.04 |
| bge-reranker-large | 60.86 | 60.86 |
| ***bce-reranker-base_v1*** | **61.29** | ***61.29*** |
***NOTE:***
- Our ***bce-reranker-base_v1*** outperforms other opensource reranker models.
- ***12 datastes*** of **"Reranking"** in `["en", "zh", "en-zh", "zh-en"]` setting.
- More evaluation details please check [Reranker Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md).
***要点:***
- ***bce-reranker-base_v1*** 优于其他开源reranker模型。
- 评测包含 **"Reranking"** 任务的 ***12个数据集***。
- 更详细的评测结果详见[Reranker模型指标汇总](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md)
### RAG Evaluations in LlamaIndex
#### 1. Multiple Domains Scenarios

***NOTE:***
- Evaluated in **`["en", "zh", "en-zh", "zh-en"]` setting**.
- In `WithoutReranker` setting, our `bce-embedding-base_v1` outperforms all the other embedding models.
- With fixing the embedding model, our `bce-reranker-base_v1` achieves the best performence.
- **The combination of `bce-embedding-base_v1` and `bce-reranker-base_v1` is SOTA**.
***要点:***
- 评测是在`["en", "zh", "en-zh", "zh-en"]`设置下。
- 在`WithoutReranker`设置下(**竖排对比**),`bce-embedding-base_v1`优于其他Embedding模型,包括开源和闭源。
- 在固定Embedding模型设置下,对比不同reranker效果(**横排对比**),`bce-reranker-base_v1`比其他reranker模型效果都要好,包括开源和闭源。
- ***`bce-embedding-base_v1`和`bce-reranker-base_v1`组合,表现SOTA。***
## 🛠 Youdao's BCEmbedding API
For users who prefer a hassle-free experience without the need to download and configure the model on their own systems, `BCEmbedding` is readily accessible through Youdao's API. This option offers a streamlined and efficient way to integrate BCEmbedding into your projects, bypassing the complexities of manual setup and maintenance. Detailed instructions and comprehensive API documentation are available at [Youdao BCEmbedding API](https://ai.youdao.com/DOCSIRMA/html/aigc/api/embedding/index.html). Here, you'll find all the necessary guidance to easily implement `BCEmbedding` across a variety of use cases, ensuring a smooth and effective integration for optimal results.
对于那些更喜欢直接调用api的用户,有道提供方便的`BCEmbedding`调用api。该方式是一种简化和高效的方式,将`BCEmbedding`集成到您的项目中,避开了手动设置和系统维护的复杂性。更详细的api调用接口说明详见[有道BCEmbedding API](https://ai.youdao.com/DOCSIRMA/html/aigc/api/embedding/index.html)。
## 🧲 WeChat Group
Welcome to scan the QR code below and join the WeChat group.
欢迎大家扫码加入官方微信交流群。

## ✏️ Citation
If you use `BCEmbedding` in your research or project, please feel free to cite and star it:
如果在您的研究或任何项目中使用本工作,烦请按照下方进行引用,并打个小星星~
```
@misc{youdao_bcembedding_2023,
title={BCEmbedding: Bilingual and Crosslingual Embedding for RAG},
author={NetEase Youdao, Inc.},
year={2023},
howpublished={\url{https://github.com/netease-youdao/BCEmbedding}}
}
```
## 🔐 License
`BCEmbedding` is licensed under [Apache 2.0 License](https://github.com/netease-youdao/BCEmbedding/blob/master/LICENSE)
## 🔗 Related Links
[Netease Youdao - QAnything](https://github.com/netease-youdao/qanything)
[FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding)
[MTEB](https://github.com/embeddings-benchmark/mteb)
[C_MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB)
[LLama Index](https://github.com/run-llama/llama_index) | [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83)
|
RichardErkhov/OmnicromsBrain_-_NemonsterExtreme-12b-gguf
|
RichardErkhov
| 2024-11-01T02:57:42Z | 5 | 0 | null |
[
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-31T23:14:51Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
NemonsterExtreme-12b - GGUF
- Model creator: https://huggingface.co/OmnicromsBrain/
- Original model: https://huggingface.co/OmnicromsBrain/NemonsterExtreme-12b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [NemonsterExtreme-12b.Q2_K.gguf](https://huggingface.co/RichardErkhov/OmnicromsBrain_-_NemonsterExtreme-12b-gguf/blob/main/NemonsterExtreme-12b.Q2_K.gguf) | Q2_K | 4.46GB |
| [NemonsterExtreme-12b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/OmnicromsBrain_-_NemonsterExtreme-12b-gguf/blob/main/NemonsterExtreme-12b.Q3_K_S.gguf) | Q3_K_S | 5.15GB |
| [NemonsterExtreme-12b.Q3_K.gguf](https://huggingface.co/RichardErkhov/OmnicromsBrain_-_NemonsterExtreme-12b-gguf/blob/main/NemonsterExtreme-12b.Q3_K.gguf) | Q3_K | 5.67GB |
| [NemonsterExtreme-12b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/OmnicromsBrain_-_NemonsterExtreme-12b-gguf/blob/main/NemonsterExtreme-12b.Q3_K_M.gguf) | Q3_K_M | 5.67GB |
| [NemonsterExtreme-12b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/OmnicromsBrain_-_NemonsterExtreme-12b-gguf/blob/main/NemonsterExtreme-12b.Q3_K_L.gguf) | Q3_K_L | 6.11GB |
| [NemonsterExtreme-12b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/OmnicromsBrain_-_NemonsterExtreme-12b-gguf/blob/main/NemonsterExtreme-12b.IQ4_XS.gguf) | IQ4_XS | 6.33GB |
| [NemonsterExtreme-12b.Q4_0.gguf](https://huggingface.co/RichardErkhov/OmnicromsBrain_-_NemonsterExtreme-12b-gguf/blob/main/NemonsterExtreme-12b.Q4_0.gguf) | Q4_0 | 6.59GB |
| [NemonsterExtreme-12b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/OmnicromsBrain_-_NemonsterExtreme-12b-gguf/blob/main/NemonsterExtreme-12b.IQ4_NL.gguf) | IQ4_NL | 6.65GB |
| [NemonsterExtreme-12b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/OmnicromsBrain_-_NemonsterExtreme-12b-gguf/blob/main/NemonsterExtreme-12b.Q4_K_S.gguf) | Q4_K_S | 6.63GB |
| [NemonsterExtreme-12b.Q4_K.gguf](https://huggingface.co/RichardErkhov/OmnicromsBrain_-_NemonsterExtreme-12b-gguf/blob/main/NemonsterExtreme-12b.Q4_K.gguf) | Q4_K | 6.96GB |
| [NemonsterExtreme-12b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/OmnicromsBrain_-_NemonsterExtreme-12b-gguf/blob/main/NemonsterExtreme-12b.Q4_K_M.gguf) | Q4_K_M | 6.96GB |
| [NemonsterExtreme-12b.Q4_1.gguf](https://huggingface.co/RichardErkhov/OmnicromsBrain_-_NemonsterExtreme-12b-gguf/blob/main/NemonsterExtreme-12b.Q4_1.gguf) | Q4_1 | 7.26GB |
| [NemonsterExtreme-12b.Q5_0.gguf](https://huggingface.co/RichardErkhov/OmnicromsBrain_-_NemonsterExtreme-12b-gguf/blob/main/NemonsterExtreme-12b.Q5_0.gguf) | Q5_0 | 7.93GB |
| [NemonsterExtreme-12b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/OmnicromsBrain_-_NemonsterExtreme-12b-gguf/blob/main/NemonsterExtreme-12b.Q5_K_S.gguf) | Q5_K_S | 7.93GB |
| [NemonsterExtreme-12b.Q5_K.gguf](https://huggingface.co/RichardErkhov/OmnicromsBrain_-_NemonsterExtreme-12b-gguf/blob/main/NemonsterExtreme-12b.Q5_K.gguf) | Q5_K | 8.13GB |
| [NemonsterExtreme-12b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/OmnicromsBrain_-_NemonsterExtreme-12b-gguf/blob/main/NemonsterExtreme-12b.Q5_K_M.gguf) | Q5_K_M | 8.13GB |
| [NemonsterExtreme-12b.Q5_1.gguf](https://huggingface.co/RichardErkhov/OmnicromsBrain_-_NemonsterExtreme-12b-gguf/blob/main/NemonsterExtreme-12b.Q5_1.gguf) | Q5_1 | 8.61GB |
| [NemonsterExtreme-12b.Q6_K.gguf](https://huggingface.co/RichardErkhov/OmnicromsBrain_-_NemonsterExtreme-12b-gguf/blob/main/NemonsterExtreme-12b.Q6_K.gguf) | Q6_K | 9.37GB |
| [NemonsterExtreme-12b.Q8_0.gguf](https://huggingface.co/RichardErkhov/OmnicromsBrain_-_NemonsterExtreme-12b-gguf/blob/main/NemonsterExtreme-12b.Q8_0.gguf) | Q8_0 | 12.13GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cqyan/hybrid-sd-small-vae-xl
|
cqyan
| 2024-11-01T02:55:22Z | 24 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | null | 2024-10-25T08:35:25Z |
---
library_name: diffusers
base_model:
- stabilityai/stable-diffusion-xl-base-1.0
---
# 🍰 Hybrid-sd-small-vae-xl for Stable Diffusion XL
[Hybrid-sd-small-vae-xl](https://huggingface.co/cqyan/hybrid-sd-small-vae-xl) is a pruned-finetuned version VAE which uses the same "latent API" as the base model [SDXL-VAE](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0).
It has smaller size, faster inference speed, as well as well-performed image generation in image saturation and image clarity compared to SDXL. Specifically,we decreses parameters from original 83.65M to 62.395M, inferece time from 1802.60ms to 611.78ms, roughly save up to 43.7% memory usage (31023MiB -> 17469MiB) without lossing T2I generation quality.
The model is useful for real-time previewing of the SDXL generation process, and you are very welcome to try it !!!!!!
**Index Table**
| Model | Params (M) | Decoder inference time (ms) | Decoder GPU Memory Usage (MiB) |
|--------|-------|-------|-------|
| SDXL | 83.65 | 1802.60 | 31023 |
| **Hybrid-sd-small-vae-xl**| **62.395 ↓** | **611.78 ↓** | **17469 ↓** |
T2I Comparison using one A100 GPU, The image order from left to right : [SDXL-VAE](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) -> [Hybrid-sd-small-vae-xl](https://huggingface.co/cqyan/hybrid-sd-small-vae-xl)



This repo contains `.safetensors` versions of the Hybrid-sd-small-vae-xl weights.
For SD1.x, use [Hybrid-sd-small-vae](https://huggingface.co/cqyan/hybrid-sd-small-vae) instead (the SD and SDXL VAEs are incompatible).
## Using in 🧨 diffusers
Firstly download our repository to load the `AutoencoderKL`
```bash
git clone https://github.com/bytedance/Hybrid-SD/tree/main
```
```python
from bytenn_autoencoder_kl import AutoencoderKL
import torch
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
vae = AutoencoderKL.from_pretrained('cqyan/hybrid-sd-small-vae-xl', torch_dtype=torch.float16)
pipe.vae = vae
pipe = pipe.to("cuda")
prompt = "A warm and loving family portrait, highly detailed, hyper-realistic, 8k resolution, photorealistic, soft and natural lighting"
image = pipe(prompt, num_inference_steps=25).images[0]
image.save("family.png")
```
|
MaziyarPanahi/mistral-sk-7b-GGUF
|
MaziyarPanahi
| 2024-11-01T02:45:32Z | 54 | 0 | null |
[
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:slovak-nlp/mistral-sk-7b",
"base_model:quantized:slovak-nlp/mistral-sk-7b",
"region:us"
] |
text-generation
| 2024-11-01T02:23:33Z |
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: mistral-sk-7b-GGUF
base_model: slovak-nlp/mistral-sk-7b
inference: false
model_creator: slovak-nlp
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/mistral-sk-7b-GGUF](https://huggingface.co/MaziyarPanahi/mistral-sk-7b-GGUF)
- Model creator: [slovak-nlp](https://huggingface.co/slovak-nlp)
- Original model: [slovak-nlp/mistral-sk-7b](https://huggingface.co/slovak-nlp/mistral-sk-7b)
## Description
[MaziyarPanahi/mistral-sk-7b-GGUF](https://huggingface.co/MaziyarPanahi/mistral-sk-7b-GGUF) contains GGUF format model files for [slovak-nlp/mistral-sk-7b](https://huggingface.co/slovak-nlp/mistral-sk-7b).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
mradermacher/lumimaid-8B-autotrain-i1-GGUF
|
mradermacher
| 2024-11-01T02:41:08Z | 120 | 1 |
transformers
|
[
"transformers",
"gguf",
"autotrain",
"text-generation-inference",
"text-generation",
"en",
"dataset:mpasila/Literotica-stories-short-json-unfiltered",
"dataset:Chadgpt-fam/sexting_dataset",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2024-11-01T01:28:36Z |
---
base_model: mrcuddle/lumimaid-8B-autotrain
datasets:
- mpasila/Literotica-stories-short-json-unfiltered
- Chadgpt-fam/sexting_dataset
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- autotrain
- text-generation-inference
- text-generation
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/mrcuddle/lumimaid-8B-autotrain
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/lumimaid-8B-autotrain-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.8 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.8 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF/resolve/main/lumimaid-8B-autotrain.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/lumimaid-8B-autotrain-GGUF
|
mradermacher
| 2024-11-01T02:41:08Z | 39 | 1 |
transformers
|
[
"transformers",
"gguf",
"autotrain",
"text-generation-inference",
"text-generation",
"en",
"dataset:mpasila/Literotica-stories-short-json-unfiltered",
"dataset:Chadgpt-fam/sexting_dataset",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-10-31T14:35:21Z |
---
base_model: mrcuddle/lumimaid-8B-autotrain
datasets:
- mpasila/Literotica-stories-short-json-unfiltered
- Chadgpt-fam/sexting_dataset
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- autotrain
- text-generation-inference
- text-generation
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mrcuddle/lumimaid-8B-autotrain
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/lumimaid-8B-autotrain-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-GGUF/resolve/main/lumimaid-8B-autotrain.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-GGUF/resolve/main/lumimaid-8B-autotrain.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-GGUF/resolve/main/lumimaid-8B-autotrain.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-GGUF/resolve/main/lumimaid-8B-autotrain.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-GGUF/resolve/main/lumimaid-8B-autotrain.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-GGUF/resolve/main/lumimaid-8B-autotrain.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-GGUF/resolve/main/lumimaid-8B-autotrain.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-GGUF/resolve/main/lumimaid-8B-autotrain.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-GGUF/resolve/main/lumimaid-8B-autotrain.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-GGUF/resolve/main/lumimaid-8B-autotrain.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-GGUF/resolve/main/lumimaid-8B-autotrain.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/lumimaid-8B-autotrain-GGUF/resolve/main/lumimaid-8B-autotrain.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Panav77/sd-class-butterflies-32
|
Panav77
| 2024-11-01T02:33:13Z | 44 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2024-11-01T02:33:00Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Panav77/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
vitus48683/Qwen2.5-7B-ko-quant-merge-v1
|
vitus48683
| 2024-11-01T02:32:19Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"krx",
"conversational",
"ko",
"arxiv:2306.01708",
"base_model:Qwen/Qwen2.5-7B",
"base_model:merge:Qwen/Qwen2.5-7B",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:merge:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-01T02:27:16Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-7B-Instruct
- Qwen/Qwen2.5-7B
library_name: transformers
tags:
- mergekit
- merge
- krx
language:
- ko
---
# Qwen2.5-7B-ko-quant-merge-v1
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) as a base.
|
brotee/llama31-mbft
|
brotee
| 2024-11-01T02:20:37Z | 6 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-01T02:19:19Z |
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** brotee
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
elmenwol/whisper-small_aihub_child
|
elmenwol
| 2024-11-01T02:17:05Z | 11 | 0 | null |
[
"pytorch",
"safetensors",
"whisper",
"text-to-speech",
"ko",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"region:us"
] |
text-to-speech
| 2024-08-26T01:32:13Z |
---
license: apache-2.0
language:
- ko
metrics:
- wer
base_model: openai/whisper-small
pipeline_tag: text-to-speech
---
Whisper small ko - Seunghun
This model is a fine-tuned version of openai/whisper-small on the Aihub's Child dataset. This gets WER of 6.04%
Dataset detail is below link.
https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=540
https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=108
|
TheHamzahPOCs/bart-cnn-samsum-finetuned
|
TheHamzahPOCs
| 2024-11-01T02:08:38Z | 103 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-11-01T02:07:06Z |
---
library_name: transformers
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: bart-cnn-samsum-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-samsum-finetuned
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2608
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1908 | 1.0 | 19 | 0.2608 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
gpustack/jina-embeddings-v2-base-en-GGUF
|
gpustack
| 2024-11-01T02:01:38Z | 216 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:allenai/c4",
"arxiv:2108.12409",
"arxiv:2310.19923",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"region:us"
] |
feature-extraction
| 2024-11-01T01:35:36Z |
---
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
datasets:
- allenai/c4
language: en
inference: false
license: apache-2.0
model-index:
- name: jina-embedding-b-en-v2
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 74.73134328358209
- type: ap
value: 37.765427081831035
- type: f1
value: 68.79367444339518
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 88.544275
- type: ap
value: 84.61328675662887
- type: f1
value: 88.51879035862375
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 45.263999999999996
- type: f1
value: 43.778759656699435
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.693
- type: map_at_10
value: 35.487
- type: map_at_100
value: 36.862
- type: map_at_1000
value: 36.872
- type: map_at_3
value: 30.049999999999997
- type: map_at_5
value: 32.966
- type: mrr_at_1
value: 21.977
- type: mrr_at_10
value: 35.565999999999995
- type: mrr_at_100
value: 36.948
- type: mrr_at_1000
value: 36.958
- type: mrr_at_3
value: 30.121
- type: mrr_at_5
value: 33.051
- type: ndcg_at_1
value: 21.693
- type: ndcg_at_10
value: 44.181
- type: ndcg_at_100
value: 49.982
- type: ndcg_at_1000
value: 50.233000000000004
- type: ndcg_at_3
value: 32.830999999999996
- type: ndcg_at_5
value: 38.080000000000005
- type: precision_at_1
value: 21.693
- type: precision_at_10
value: 7.248
- type: precision_at_100
value: 0.9769999999999999
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 13.632
- type: precision_at_5
value: 10.725
- type: recall_at_1
value: 21.693
- type: recall_at_10
value: 72.475
- type: recall_at_100
value: 97.653
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 40.896
- type: recall_at_5
value: 53.627
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 45.39242428696777
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 36.675626784714
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.247725694904034
- type: mrr
value: 74.91359978894604
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 82.68003802970496
- type: cos_sim_spearman
value: 81.23438110096286
- type: euclidean_pearson
value: 81.87462986142582
- type: euclidean_spearman
value: 81.23438110096286
- type: manhattan_pearson
value: 81.61162566600755
- type: manhattan_spearman
value: 81.11329400456184
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.01298701298701
- type: f1
value: 83.31690714969382
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.050108150972086
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 30.15731442819715
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.391999999999996
- type: map_at_10
value: 42.597
- type: map_at_100
value: 44.07
- type: map_at_1000
value: 44.198
- type: map_at_3
value: 38.957
- type: map_at_5
value: 40.961
- type: mrr_at_1
value: 37.196
- type: mrr_at_10
value: 48.152
- type: mrr_at_100
value: 48.928
- type: mrr_at_1000
value: 48.964999999999996
- type: mrr_at_3
value: 45.446
- type: mrr_at_5
value: 47.205999999999996
- type: ndcg_at_1
value: 37.196
- type: ndcg_at_10
value: 49.089
- type: ndcg_at_100
value: 54.471000000000004
- type: ndcg_at_1000
value: 56.385
- type: ndcg_at_3
value: 43.699
- type: ndcg_at_5
value: 46.22
- type: precision_at_1
value: 37.196
- type: precision_at_10
value: 9.313
- type: precision_at_100
value: 1.478
- type: precision_at_1000
value: 0.198
- type: precision_at_3
value: 20.839
- type: precision_at_5
value: 14.936
- type: recall_at_1
value: 31.391999999999996
- type: recall_at_10
value: 61.876
- type: recall_at_100
value: 84.214
- type: recall_at_1000
value: 95.985
- type: recall_at_3
value: 46.6
- type: recall_at_5
value: 53.588
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.083
- type: map_at_10
value: 38.812999999999995
- type: map_at_100
value: 40.053
- type: map_at_1000
value: 40.188
- type: map_at_3
value: 36.111
- type: map_at_5
value: 37.519000000000005
- type: mrr_at_1
value: 36.497
- type: mrr_at_10
value: 44.85
- type: mrr_at_100
value: 45.546
- type: mrr_at_1000
value: 45.593
- type: mrr_at_3
value: 42.686
- type: mrr_at_5
value: 43.909
- type: ndcg_at_1
value: 36.497
- type: ndcg_at_10
value: 44.443
- type: ndcg_at_100
value: 48.979
- type: ndcg_at_1000
value: 51.154999999999994
- type: ndcg_at_3
value: 40.660000000000004
- type: ndcg_at_5
value: 42.193000000000005
- type: precision_at_1
value: 36.497
- type: precision_at_10
value: 8.433
- type: precision_at_100
value: 1.369
- type: precision_at_1000
value: 0.185
- type: precision_at_3
value: 19.894000000000002
- type: precision_at_5
value: 13.873
- type: recall_at_1
value: 29.083
- type: recall_at_10
value: 54.313
- type: recall_at_100
value: 73.792
- type: recall_at_1000
value: 87.629
- type: recall_at_3
value: 42.257
- type: recall_at_5
value: 47.066
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.556000000000004
- type: map_at_10
value: 50.698
- type: map_at_100
value: 51.705
- type: map_at_1000
value: 51.768
- type: map_at_3
value: 47.848
- type: map_at_5
value: 49.358000000000004
- type: mrr_at_1
value: 43.95
- type: mrr_at_10
value: 54.191
- type: mrr_at_100
value: 54.852999999999994
- type: mrr_at_1000
value: 54.885
- type: mrr_at_3
value: 51.954
- type: mrr_at_5
value: 53.13
- type: ndcg_at_1
value: 43.95
- type: ndcg_at_10
value: 56.516
- type: ndcg_at_100
value: 60.477000000000004
- type: ndcg_at_1000
value: 61.746
- type: ndcg_at_3
value: 51.601
- type: ndcg_at_5
value: 53.795
- type: precision_at_1
value: 43.95
- type: precision_at_10
value: 9.009
- type: precision_at_100
value: 1.189
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 22.989
- type: precision_at_5
value: 15.473
- type: recall_at_1
value: 38.556000000000004
- type: recall_at_10
value: 70.159
- type: recall_at_100
value: 87.132
- type: recall_at_1000
value: 96.16
- type: recall_at_3
value: 56.906
- type: recall_at_5
value: 62.332
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.238
- type: map_at_10
value: 32.5
- type: map_at_100
value: 33.637
- type: map_at_1000
value: 33.719
- type: map_at_3
value: 30.026999999999997
- type: map_at_5
value: 31.555
- type: mrr_at_1
value: 26.328000000000003
- type: mrr_at_10
value: 34.44
- type: mrr_at_100
value: 35.455999999999996
- type: mrr_at_1000
value: 35.521
- type: mrr_at_3
value: 32.034
- type: mrr_at_5
value: 33.565
- type: ndcg_at_1
value: 26.328000000000003
- type: ndcg_at_10
value: 37.202
- type: ndcg_at_100
value: 42.728
- type: ndcg_at_1000
value: 44.792
- type: ndcg_at_3
value: 32.368
- type: ndcg_at_5
value: 35.008
- type: precision_at_1
value: 26.328000000000003
- type: precision_at_10
value: 5.7059999999999995
- type: precision_at_100
value: 0.8880000000000001
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 13.672
- type: precision_at_5
value: 9.74
- type: recall_at_1
value: 24.238
- type: recall_at_10
value: 49.829
- type: recall_at_100
value: 75.21
- type: recall_at_1000
value: 90.521
- type: recall_at_3
value: 36.867
- type: recall_at_5
value: 43.241
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.378
- type: map_at_10
value: 22.817999999999998
- type: map_at_100
value: 23.977999999999998
- type: map_at_1000
value: 24.108
- type: map_at_3
value: 20.719
- type: map_at_5
value: 21.889
- type: mrr_at_1
value: 19.03
- type: mrr_at_10
value: 27.022000000000002
- type: mrr_at_100
value: 28.011999999999997
- type: mrr_at_1000
value: 28.096
- type: mrr_at_3
value: 24.855
- type: mrr_at_5
value: 26.029999999999998
- type: ndcg_at_1
value: 19.03
- type: ndcg_at_10
value: 27.526
- type: ndcg_at_100
value: 33.040000000000006
- type: ndcg_at_1000
value: 36.187000000000005
- type: ndcg_at_3
value: 23.497
- type: ndcg_at_5
value: 25.334
- type: precision_at_1
value: 19.03
- type: precision_at_10
value: 4.963
- type: precision_at_100
value: 0.893
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 11.360000000000001
- type: precision_at_5
value: 8.134
- type: recall_at_1
value: 15.378
- type: recall_at_10
value: 38.061
- type: recall_at_100
value: 61.754
- type: recall_at_1000
value: 84.259
- type: recall_at_3
value: 26.788
- type: recall_at_5
value: 31.326999999999998
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.511999999999997
- type: map_at_10
value: 37.429
- type: map_at_100
value: 38.818000000000005
- type: map_at_1000
value: 38.924
- type: map_at_3
value: 34.625
- type: map_at_5
value: 36.064
- type: mrr_at_1
value: 33.300999999999995
- type: mrr_at_10
value: 43.036
- type: mrr_at_100
value: 43.894
- type: mrr_at_1000
value: 43.936
- type: mrr_at_3
value: 40.825
- type: mrr_at_5
value: 42.028
- type: ndcg_at_1
value: 33.300999999999995
- type: ndcg_at_10
value: 43.229
- type: ndcg_at_100
value: 48.992000000000004
- type: ndcg_at_1000
value: 51.02100000000001
- type: ndcg_at_3
value: 38.794000000000004
- type: ndcg_at_5
value: 40.65
- type: precision_at_1
value: 33.300999999999995
- type: precision_at_10
value: 7.777000000000001
- type: precision_at_100
value: 1.269
- type: precision_at_1000
value: 0.163
- type: precision_at_3
value: 18.351
- type: precision_at_5
value: 12.762
- type: recall_at_1
value: 27.511999999999997
- type: recall_at_10
value: 54.788000000000004
- type: recall_at_100
value: 79.105
- type: recall_at_1000
value: 92.49199999999999
- type: recall_at_3
value: 41.924
- type: recall_at_5
value: 47.026
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.117
- type: map_at_10
value: 33.32
- type: map_at_100
value: 34.677
- type: map_at_1000
value: 34.78
- type: map_at_3
value: 30.233999999999998
- type: map_at_5
value: 31.668000000000003
- type: mrr_at_1
value: 29.566
- type: mrr_at_10
value: 38.244
- type: mrr_at_100
value: 39.245000000000005
- type: mrr_at_1000
value: 39.296
- type: mrr_at_3
value: 35.864000000000004
- type: mrr_at_5
value: 36.919999999999995
- type: ndcg_at_1
value: 29.566
- type: ndcg_at_10
value: 39.127
- type: ndcg_at_100
value: 44.989000000000004
- type: ndcg_at_1000
value: 47.189
- type: ndcg_at_3
value: 34.039
- type: ndcg_at_5
value: 35.744
- type: precision_at_1
value: 29.566
- type: precision_at_10
value: 7.385999999999999
- type: precision_at_100
value: 1.204
- type: precision_at_1000
value: 0.158
- type: precision_at_3
value: 16.286
- type: precision_at_5
value: 11.484
- type: recall_at_1
value: 24.117
- type: recall_at_10
value: 51.559999999999995
- type: recall_at_100
value: 77.104
- type: recall_at_1000
value: 91.79899999999999
- type: recall_at_3
value: 36.82
- type: recall_at_5
value: 41.453
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.17625
- type: map_at_10
value: 34.063916666666664
- type: map_at_100
value: 35.255500000000005
- type: map_at_1000
value: 35.37275
- type: map_at_3
value: 31.351666666666667
- type: map_at_5
value: 32.80608333333333
- type: mrr_at_1
value: 29.59783333333333
- type: mrr_at_10
value: 38.0925
- type: mrr_at_100
value: 38.957249999999995
- type: mrr_at_1000
value: 39.01608333333333
- type: mrr_at_3
value: 35.77625
- type: mrr_at_5
value: 37.04991666666667
- type: ndcg_at_1
value: 29.59783333333333
- type: ndcg_at_10
value: 39.343666666666664
- type: ndcg_at_100
value: 44.488249999999994
- type: ndcg_at_1000
value: 46.83358333333334
- type: ndcg_at_3
value: 34.69708333333333
- type: ndcg_at_5
value: 36.75075
- type: precision_at_1
value: 29.59783333333333
- type: precision_at_10
value: 6.884083333333332
- type: precision_at_100
value: 1.114
- type: precision_at_1000
value: 0.15108333333333332
- type: precision_at_3
value: 15.965250000000003
- type: precision_at_5
value: 11.246500000000001
- type: recall_at_1
value: 25.17625
- type: recall_at_10
value: 51.015999999999984
- type: recall_at_100
value: 73.60174999999998
- type: recall_at_1000
value: 89.849
- type: recall_at_3
value: 37.88399999999999
- type: recall_at_5
value: 43.24541666666666
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.537
- type: map_at_10
value: 31.081999999999997
- type: map_at_100
value: 32.042
- type: map_at_1000
value: 32.141
- type: map_at_3
value: 29.137
- type: map_at_5
value: 30.079
- type: mrr_at_1
value: 27.454
- type: mrr_at_10
value: 33.694
- type: mrr_at_100
value: 34.579
- type: mrr_at_1000
value: 34.649
- type: mrr_at_3
value: 32.004
- type: mrr_at_5
value: 32.794000000000004
- type: ndcg_at_1
value: 27.454
- type: ndcg_at_10
value: 34.915
- type: ndcg_at_100
value: 39.641
- type: ndcg_at_1000
value: 42.105
- type: ndcg_at_3
value: 31.276
- type: ndcg_at_5
value: 32.65
- type: precision_at_1
value: 27.454
- type: precision_at_10
value: 5.337
- type: precision_at_100
value: 0.8250000000000001
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 13.241
- type: precision_at_5
value: 8.895999999999999
- type: recall_at_1
value: 24.537
- type: recall_at_10
value: 44.324999999999996
- type: recall_at_100
value: 65.949
- type: recall_at_1000
value: 84.017
- type: recall_at_3
value: 33.857
- type: recall_at_5
value: 37.316
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.122
- type: map_at_10
value: 24.32
- type: map_at_100
value: 25.338
- type: map_at_1000
value: 25.462
- type: map_at_3
value: 22.064
- type: map_at_5
value: 23.322000000000003
- type: mrr_at_1
value: 20.647
- type: mrr_at_10
value: 27.858
- type: mrr_at_100
value: 28.743999999999996
- type: mrr_at_1000
value: 28.819
- type: mrr_at_3
value: 25.769
- type: mrr_at_5
value: 26.964
- type: ndcg_at_1
value: 20.647
- type: ndcg_at_10
value: 28.849999999999998
- type: ndcg_at_100
value: 33.849000000000004
- type: ndcg_at_1000
value: 36.802
- type: ndcg_at_3
value: 24.799
- type: ndcg_at_5
value: 26.682
- type: precision_at_1
value: 20.647
- type: precision_at_10
value: 5.2170000000000005
- type: precision_at_100
value: 0.906
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 11.769
- type: precision_at_5
value: 8.486
- type: recall_at_1
value: 17.122
- type: recall_at_10
value: 38.999
- type: recall_at_100
value: 61.467000000000006
- type: recall_at_1000
value: 82.716
- type: recall_at_3
value: 27.601
- type: recall_at_5
value: 32.471
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.396
- type: map_at_10
value: 33.415
- type: map_at_100
value: 34.521
- type: map_at_1000
value: 34.631
- type: map_at_3
value: 30.703999999999997
- type: map_at_5
value: 32.166
- type: mrr_at_1
value: 28.825
- type: mrr_at_10
value: 37.397000000000006
- type: mrr_at_100
value: 38.286
- type: mrr_at_1000
value: 38.346000000000004
- type: mrr_at_3
value: 35.028
- type: mrr_at_5
value: 36.32
- type: ndcg_at_1
value: 28.825
- type: ndcg_at_10
value: 38.656
- type: ndcg_at_100
value: 43.856
- type: ndcg_at_1000
value: 46.31
- type: ndcg_at_3
value: 33.793
- type: ndcg_at_5
value: 35.909
- type: precision_at_1
value: 28.825
- type: precision_at_10
value: 6.567
- type: precision_at_100
value: 1.0330000000000001
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 15.516
- type: precision_at_5
value: 10.914
- type: recall_at_1
value: 24.396
- type: recall_at_10
value: 50.747
- type: recall_at_100
value: 73.477
- type: recall_at_1000
value: 90.801
- type: recall_at_3
value: 37.1
- type: recall_at_5
value: 42.589
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.072
- type: map_at_10
value: 34.307
- type: map_at_100
value: 35.725
- type: map_at_1000
value: 35.943999999999996
- type: map_at_3
value: 30.906
- type: map_at_5
value: 32.818000000000005
- type: mrr_at_1
value: 29.644
- type: mrr_at_10
value: 38.673
- type: mrr_at_100
value: 39.459
- type: mrr_at_1000
value: 39.527
- type: mrr_at_3
value: 35.771
- type: mrr_at_5
value: 37.332
- type: ndcg_at_1
value: 29.644
- type: ndcg_at_10
value: 40.548
- type: ndcg_at_100
value: 45.678999999999995
- type: ndcg_at_1000
value: 48.488
- type: ndcg_at_3
value: 34.887
- type: ndcg_at_5
value: 37.543
- type: precision_at_1
value: 29.644
- type: precision_at_10
value: 7.688000000000001
- type: precision_at_100
value: 1.482
- type: precision_at_1000
value: 0.23600000000000002
- type: precision_at_3
value: 16.206
- type: precision_at_5
value: 12.016
- type: recall_at_1
value: 25.072
- type: recall_at_10
value: 53.478
- type: recall_at_100
value: 76.07300000000001
- type: recall_at_1000
value: 93.884
- type: recall_at_3
value: 37.583
- type: recall_at_5
value: 44.464
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.712
- type: map_at_10
value: 27.467999999999996
- type: map_at_100
value: 28.502
- type: map_at_1000
value: 28.610000000000003
- type: map_at_3
value: 24.887999999999998
- type: map_at_5
value: 26.273999999999997
- type: mrr_at_1
value: 22.736
- type: mrr_at_10
value: 29.553
- type: mrr_at_100
value: 30.485
- type: mrr_at_1000
value: 30.56
- type: mrr_at_3
value: 27.078999999999997
- type: mrr_at_5
value: 28.401
- type: ndcg_at_1
value: 22.736
- type: ndcg_at_10
value: 32.023
- type: ndcg_at_100
value: 37.158
- type: ndcg_at_1000
value: 39.823
- type: ndcg_at_3
value: 26.951999999999998
- type: ndcg_at_5
value: 29.281000000000002
- type: precision_at_1
value: 22.736
- type: precision_at_10
value: 5.213
- type: precision_at_100
value: 0.832
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 11.459999999999999
- type: precision_at_5
value: 8.244
- type: recall_at_1
value: 20.712
- type: recall_at_10
value: 44.057
- type: recall_at_100
value: 67.944
- type: recall_at_1000
value: 87.925
- type: recall_at_3
value: 30.305
- type: recall_at_5
value: 36.071999999999996
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.181999999999999
- type: map_at_10
value: 16.66
- type: map_at_100
value: 18.273
- type: map_at_1000
value: 18.45
- type: map_at_3
value: 14.141
- type: map_at_5
value: 15.455
- type: mrr_at_1
value: 22.15
- type: mrr_at_10
value: 32.062000000000005
- type: mrr_at_100
value: 33.116
- type: mrr_at_1000
value: 33.168
- type: mrr_at_3
value: 28.827
- type: mrr_at_5
value: 30.892999999999997
- type: ndcg_at_1
value: 22.15
- type: ndcg_at_10
value: 23.532
- type: ndcg_at_100
value: 30.358
- type: ndcg_at_1000
value: 33.783
- type: ndcg_at_3
value: 19.222
- type: ndcg_at_5
value: 20.919999999999998
- type: precision_at_1
value: 22.15
- type: precision_at_10
value: 7.185999999999999
- type: precision_at_100
value: 1.433
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 13.941
- type: precision_at_5
value: 10.906
- type: recall_at_1
value: 10.181999999999999
- type: recall_at_10
value: 28.104000000000003
- type: recall_at_100
value: 51.998999999999995
- type: recall_at_1000
value: 71.311
- type: recall_at_3
value: 17.698
- type: recall_at_5
value: 22.262999999999998
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.669
- type: map_at_10
value: 15.552
- type: map_at_100
value: 21.865000000000002
- type: map_at_1000
value: 23.268
- type: map_at_3
value: 11.309
- type: map_at_5
value: 13.084000000000001
- type: mrr_at_1
value: 55.50000000000001
- type: mrr_at_10
value: 66.46600000000001
- type: mrr_at_100
value: 66.944
- type: mrr_at_1000
value: 66.956
- type: mrr_at_3
value: 64.542
- type: mrr_at_5
value: 65.717
- type: ndcg_at_1
value: 44.75
- type: ndcg_at_10
value: 35.049
- type: ndcg_at_100
value: 39.073
- type: ndcg_at_1000
value: 46.208
- type: ndcg_at_3
value: 39.525
- type: ndcg_at_5
value: 37.156
- type: precision_at_1
value: 55.50000000000001
- type: precision_at_10
value: 27.800000000000004
- type: precision_at_100
value: 9.013
- type: precision_at_1000
value: 1.8800000000000001
- type: precision_at_3
value: 42.667
- type: precision_at_5
value: 36.0
- type: recall_at_1
value: 6.669
- type: recall_at_10
value: 21.811
- type: recall_at_100
value: 45.112
- type: recall_at_1000
value: 67.806
- type: recall_at_3
value: 13.373
- type: recall_at_5
value: 16.615
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 48.769999999999996
- type: f1
value: 42.91448356376592
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 54.013
- type: map_at_10
value: 66.239
- type: map_at_100
value: 66.62599999999999
- type: map_at_1000
value: 66.644
- type: map_at_3
value: 63.965
- type: map_at_5
value: 65.45400000000001
- type: mrr_at_1
value: 58.221000000000004
- type: mrr_at_10
value: 70.43700000000001
- type: mrr_at_100
value: 70.744
- type: mrr_at_1000
value: 70.75099999999999
- type: mrr_at_3
value: 68.284
- type: mrr_at_5
value: 69.721
- type: ndcg_at_1
value: 58.221000000000004
- type: ndcg_at_10
value: 72.327
- type: ndcg_at_100
value: 73.953
- type: ndcg_at_1000
value: 74.312
- type: ndcg_at_3
value: 68.062
- type: ndcg_at_5
value: 70.56400000000001
- type: precision_at_1
value: 58.221000000000004
- type: precision_at_10
value: 9.521
- type: precision_at_100
value: 1.045
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 27.348
- type: precision_at_5
value: 17.794999999999998
- type: recall_at_1
value: 54.013
- type: recall_at_10
value: 86.957
- type: recall_at_100
value: 93.911
- type: recall_at_1000
value: 96.38
- type: recall_at_3
value: 75.555
- type: recall_at_5
value: 81.671
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.254
- type: map_at_10
value: 33.723
- type: map_at_100
value: 35.574
- type: map_at_1000
value: 35.730000000000004
- type: map_at_3
value: 29.473
- type: map_at_5
value: 31.543
- type: mrr_at_1
value: 41.358
- type: mrr_at_10
value: 49.498
- type: mrr_at_100
value: 50.275999999999996
- type: mrr_at_1000
value: 50.308
- type: mrr_at_3
value: 47.016000000000005
- type: mrr_at_5
value: 48.336
- type: ndcg_at_1
value: 41.358
- type: ndcg_at_10
value: 41.579
- type: ndcg_at_100
value: 48.455
- type: ndcg_at_1000
value: 51.165000000000006
- type: ndcg_at_3
value: 37.681
- type: ndcg_at_5
value: 38.49
- type: precision_at_1
value: 41.358
- type: precision_at_10
value: 11.543000000000001
- type: precision_at_100
value: 1.87
- type: precision_at_1000
value: 0.23600000000000002
- type: precision_at_3
value: 24.743000000000002
- type: precision_at_5
value: 17.994
- type: recall_at_1
value: 21.254
- type: recall_at_10
value: 48.698
- type: recall_at_100
value: 74.588
- type: recall_at_1000
value: 91.00200000000001
- type: recall_at_3
value: 33.939
- type: recall_at_5
value: 39.367000000000004
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.922
- type: map_at_10
value: 52.32599999999999
- type: map_at_100
value: 53.18000000000001
- type: map_at_1000
value: 53.245
- type: map_at_3
value: 49.294
- type: map_at_5
value: 51.202999999999996
- type: mrr_at_1
value: 71.843
- type: mrr_at_10
value: 78.24600000000001
- type: mrr_at_100
value: 78.515
- type: mrr_at_1000
value: 78.527
- type: mrr_at_3
value: 77.17500000000001
- type: mrr_at_5
value: 77.852
- type: ndcg_at_1
value: 71.843
- type: ndcg_at_10
value: 61.379
- type: ndcg_at_100
value: 64.535
- type: ndcg_at_1000
value: 65.888
- type: ndcg_at_3
value: 56.958
- type: ndcg_at_5
value: 59.434
- type: precision_at_1
value: 71.843
- type: precision_at_10
value: 12.686
- type: precision_at_100
value: 1.517
- type: precision_at_1000
value: 0.16999999999999998
- type: precision_at_3
value: 35.778
- type: precision_at_5
value: 23.422
- type: recall_at_1
value: 35.922
- type: recall_at_10
value: 63.43
- type: recall_at_100
value: 75.868
- type: recall_at_1000
value: 84.88900000000001
- type: recall_at_3
value: 53.666000000000004
- type: recall_at_5
value: 58.555
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 79.4408
- type: ap
value: 73.52820871620366
- type: f1
value: 79.36240238685001
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.826999999999998
- type: map_at_10
value: 34.04
- type: map_at_100
value: 35.226
- type: map_at_1000
value: 35.275
- type: map_at_3
value: 30.165999999999997
- type: map_at_5
value: 32.318000000000005
- type: mrr_at_1
value: 22.464000000000002
- type: mrr_at_10
value: 34.631
- type: mrr_at_100
value: 35.752
- type: mrr_at_1000
value: 35.795
- type: mrr_at_3
value: 30.798
- type: mrr_at_5
value: 32.946999999999996
- type: ndcg_at_1
value: 22.464000000000002
- type: ndcg_at_10
value: 40.919
- type: ndcg_at_100
value: 46.632
- type: ndcg_at_1000
value: 47.833
- type: ndcg_at_3
value: 32.992
- type: ndcg_at_5
value: 36.834
- type: precision_at_1
value: 22.464000000000002
- type: precision_at_10
value: 6.494
- type: precision_at_100
value: 0.9369999999999999
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.021
- type: precision_at_5
value: 10.347000000000001
- type: recall_at_1
value: 21.826999999999998
- type: recall_at_10
value: 62.132
- type: recall_at_100
value: 88.55199999999999
- type: recall_at_1000
value: 97.707
- type: recall_at_3
value: 40.541
- type: recall_at_5
value: 49.739
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 95.68399452804377
- type: f1
value: 95.25490609832268
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 83.15321477428182
- type: f1
value: 60.35476439087966
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.92669804976462
- type: f1
value: 69.22815107207565
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.4855413584398
- type: f1
value: 72.92107516103387
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 32.412679360205544
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.09211869875204
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.540919056982545
- type: mrr
value: 31.529904607063536
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.745
- type: map_at_10
value: 12.013
- type: map_at_100
value: 15.040000000000001
- type: map_at_1000
value: 16.427
- type: map_at_3
value: 8.841000000000001
- type: map_at_5
value: 10.289
- type: mrr_at_1
value: 45.201
- type: mrr_at_10
value: 53.483999999999995
- type: mrr_at_100
value: 54.20700000000001
- type: mrr_at_1000
value: 54.252
- type: mrr_at_3
value: 51.29
- type: mrr_at_5
value: 52.73
- type: ndcg_at_1
value: 43.808
- type: ndcg_at_10
value: 32.445
- type: ndcg_at_100
value: 30.031000000000002
- type: ndcg_at_1000
value: 39.007
- type: ndcg_at_3
value: 37.204
- type: ndcg_at_5
value: 35.07
- type: precision_at_1
value: 45.201
- type: precision_at_10
value: 23.684
- type: precision_at_100
value: 7.600999999999999
- type: precision_at_1000
value: 2.043
- type: precision_at_3
value: 33.953
- type: precision_at_5
value: 29.412
- type: recall_at_1
value: 5.745
- type: recall_at_10
value: 16.168
- type: recall_at_100
value: 30.875999999999998
- type: recall_at_1000
value: 62.686
- type: recall_at_3
value: 9.75
- type: recall_at_5
value: 12.413
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.828
- type: map_at_10
value: 53.239000000000004
- type: map_at_100
value: 54.035999999999994
- type: map_at_1000
value: 54.067
- type: map_at_3
value: 49.289
- type: map_at_5
value: 51.784
- type: mrr_at_1
value: 42.497
- type: mrr_at_10
value: 55.916999999999994
- type: mrr_at_100
value: 56.495
- type: mrr_at_1000
value: 56.516999999999996
- type: mrr_at_3
value: 52.800000000000004
- type: mrr_at_5
value: 54.722
- type: ndcg_at_1
value: 42.468
- type: ndcg_at_10
value: 60.437
- type: ndcg_at_100
value: 63.731
- type: ndcg_at_1000
value: 64.41799999999999
- type: ndcg_at_3
value: 53.230999999999995
- type: ndcg_at_5
value: 57.26
- type: precision_at_1
value: 42.468
- type: precision_at_10
value: 9.47
- type: precision_at_100
value: 1.1360000000000001
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 23.724999999999998
- type: precision_at_5
value: 16.593
- type: recall_at_1
value: 37.828
- type: recall_at_10
value: 79.538
- type: recall_at_100
value: 93.646
- type: recall_at_1000
value: 98.72999999999999
- type: recall_at_3
value: 61.134
- type: recall_at_5
value: 70.377
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.548
- type: map_at_10
value: 84.466
- type: map_at_100
value: 85.10600000000001
- type: map_at_1000
value: 85.123
- type: map_at_3
value: 81.57600000000001
- type: map_at_5
value: 83.399
- type: mrr_at_1
value: 81.24
- type: mrr_at_10
value: 87.457
- type: mrr_at_100
value: 87.574
- type: mrr_at_1000
value: 87.575
- type: mrr_at_3
value: 86.507
- type: mrr_at_5
value: 87.205
- type: ndcg_at_1
value: 81.25
- type: ndcg_at_10
value: 88.203
- type: ndcg_at_100
value: 89.457
- type: ndcg_at_1000
value: 89.563
- type: ndcg_at_3
value: 85.465
- type: ndcg_at_5
value: 87.007
- type: precision_at_1
value: 81.25
- type: precision_at_10
value: 13.373
- type: precision_at_100
value: 1.5270000000000001
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.417
- type: precision_at_5
value: 24.556
- type: recall_at_1
value: 70.548
- type: recall_at_10
value: 95.208
- type: recall_at_100
value: 99.514
- type: recall_at_1000
value: 99.988
- type: recall_at_3
value: 87.214
- type: recall_at_5
value: 91.696
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 53.04822095496839
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 60.30778476474675
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.692
- type: map_at_10
value: 11.766
- type: map_at_100
value: 13.904
- type: map_at_1000
value: 14.216999999999999
- type: map_at_3
value: 8.245
- type: map_at_5
value: 9.92
- type: mrr_at_1
value: 23.0
- type: mrr_at_10
value: 33.78
- type: mrr_at_100
value: 34.922
- type: mrr_at_1000
value: 34.973
- type: mrr_at_3
value: 30.2
- type: mrr_at_5
value: 32.565
- type: ndcg_at_1
value: 23.0
- type: ndcg_at_10
value: 19.863
- type: ndcg_at_100
value: 28.141
- type: ndcg_at_1000
value: 33.549
- type: ndcg_at_3
value: 18.434
- type: ndcg_at_5
value: 16.384
- type: precision_at_1
value: 23.0
- type: precision_at_10
value: 10.39
- type: precision_at_100
value: 2.235
- type: precision_at_1000
value: 0.35300000000000004
- type: precision_at_3
value: 17.133000000000003
- type: precision_at_5
value: 14.44
- type: recall_at_1
value: 4.692
- type: recall_at_10
value: 21.025
- type: recall_at_100
value: 45.324999999999996
- type: recall_at_1000
value: 71.675
- type: recall_at_3
value: 10.440000000000001
- type: recall_at_5
value: 14.64
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.96178184892842
- type: cos_sim_spearman
value: 79.6487740813199
- type: euclidean_pearson
value: 82.06661161625023
- type: euclidean_spearman
value: 79.64876769031183
- type: manhattan_pearson
value: 82.07061164575131
- type: manhattan_spearman
value: 79.65197039464537
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.15305604100027
- type: cos_sim_spearman
value: 74.27447427941591
- type: euclidean_pearson
value: 80.52737337565307
- type: euclidean_spearman
value: 74.27416077132192
- type: manhattan_pearson
value: 80.53728571140387
- type: manhattan_spearman
value: 74.28853605753457
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 83.44386080639279
- type: cos_sim_spearman
value: 84.17947648159536
- type: euclidean_pearson
value: 83.34145388129387
- type: euclidean_spearman
value: 84.17947648159536
- type: manhattan_pearson
value: 83.30699061927966
- type: manhattan_spearman
value: 84.18125737380451
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 81.57392220985612
- type: cos_sim_spearman
value: 78.80745014464101
- type: euclidean_pearson
value: 80.01660371487199
- type: euclidean_spearman
value: 78.80741240102256
- type: manhattan_pearson
value: 79.96810779507953
- type: manhattan_spearman
value: 78.75600400119448
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.85421063026625
- type: cos_sim_spearman
value: 87.55320285299192
- type: euclidean_pearson
value: 86.69750143323517
- type: euclidean_spearman
value: 87.55320284326378
- type: manhattan_pearson
value: 86.63379169960379
- type: manhattan_spearman
value: 87.4815029877984
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.31314130411842
- type: cos_sim_spearman
value: 85.3489588181433
- type: euclidean_pearson
value: 84.13240933463535
- type: euclidean_spearman
value: 85.34902871403281
- type: manhattan_pearson
value: 84.01183086503559
- type: manhattan_spearman
value: 85.19316703166102
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.09979781689536
- type: cos_sim_spearman
value: 88.87813323759015
- type: euclidean_pearson
value: 88.65413031123792
- type: euclidean_spearman
value: 88.87813323759015
- type: manhattan_pearson
value: 88.61818758256024
- type: manhattan_spearman
value: 88.81044100494604
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.30693258111531
- type: cos_sim_spearman
value: 62.195516523251946
- type: euclidean_pearson
value: 62.951283701049476
- type: euclidean_spearman
value: 62.195516523251946
- type: manhattan_pearson
value: 63.068322281439535
- type: manhattan_spearman
value: 62.10621171028406
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.27092833763909
- type: cos_sim_spearman
value: 84.84429717949759
- type: euclidean_pearson
value: 84.8516966060792
- type: euclidean_spearman
value: 84.84429717949759
- type: manhattan_pearson
value: 84.82203139242881
- type: manhattan_spearman
value: 84.8358503952945
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 83.10290863981409
- type: mrr
value: 95.31168450286097
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 52.161
- type: map_at_10
value: 62.138000000000005
- type: map_at_100
value: 62.769
- type: map_at_1000
value: 62.812
- type: map_at_3
value: 59.111000000000004
- type: map_at_5
value: 60.995999999999995
- type: mrr_at_1
value: 55.333
- type: mrr_at_10
value: 63.504000000000005
- type: mrr_at_100
value: 64.036
- type: mrr_at_1000
value: 64.08
- type: mrr_at_3
value: 61.278
- type: mrr_at_5
value: 62.778
- type: ndcg_at_1
value: 55.333
- type: ndcg_at_10
value: 66.678
- type: ndcg_at_100
value: 69.415
- type: ndcg_at_1000
value: 70.453
- type: ndcg_at_3
value: 61.755
- type: ndcg_at_5
value: 64.546
- type: precision_at_1
value: 55.333
- type: precision_at_10
value: 9.033
- type: precision_at_100
value: 1.043
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 24.221999999999998
- type: precision_at_5
value: 16.333000000000002
- type: recall_at_1
value: 52.161
- type: recall_at_10
value: 79.156
- type: recall_at_100
value: 91.333
- type: recall_at_1000
value: 99.333
- type: recall_at_3
value: 66.43299999999999
- type: recall_at_5
value: 73.272
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.81287128712871
- type: cos_sim_ap
value: 95.30034785910676
- type: cos_sim_f1
value: 90.28629856850716
- type: cos_sim_precision
value: 92.36401673640168
- type: cos_sim_recall
value: 88.3
- type: dot_accuracy
value: 99.81287128712871
- type: dot_ap
value: 95.30034785910676
- type: dot_f1
value: 90.28629856850716
- type: dot_precision
value: 92.36401673640168
- type: dot_recall
value: 88.3
- type: euclidean_accuracy
value: 99.81287128712871
- type: euclidean_ap
value: 95.30034785910676
- type: euclidean_f1
value: 90.28629856850716
- type: euclidean_precision
value: 92.36401673640168
- type: euclidean_recall
value: 88.3
- type: manhattan_accuracy
value: 99.80990099009901
- type: manhattan_ap
value: 95.26880751950654
- type: manhattan_f1
value: 90.22177419354838
- type: manhattan_precision
value: 90.95528455284553
- type: manhattan_recall
value: 89.5
- type: max_accuracy
value: 99.81287128712871
- type: max_ap
value: 95.30034785910676
- type: max_f1
value: 90.28629856850716
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 58.518662504351184
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 34.96168178378587
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 52.04862593471896
- type: mrr
value: 52.97238402936932
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.092545236479946
- type: cos_sim_spearman
value: 31.599851000175498
- type: dot_pearson
value: 30.092542723901676
- type: dot_spearman
value: 31.599851000175498
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.189
- type: map_at_10
value: 1.662
- type: map_at_100
value: 9.384
- type: map_at_1000
value: 22.669
- type: map_at_3
value: 0.5559999999999999
- type: map_at_5
value: 0.9039999999999999
- type: mrr_at_1
value: 68.0
- type: mrr_at_10
value: 81.01899999999999
- type: mrr_at_100
value: 81.01899999999999
- type: mrr_at_1000
value: 81.01899999999999
- type: mrr_at_3
value: 79.333
- type: mrr_at_5
value: 80.733
- type: ndcg_at_1
value: 63.0
- type: ndcg_at_10
value: 65.913
- type: ndcg_at_100
value: 51.895
- type: ndcg_at_1000
value: 46.967
- type: ndcg_at_3
value: 65.49199999999999
- type: ndcg_at_5
value: 66.69699999999999
- type: precision_at_1
value: 68.0
- type: precision_at_10
value: 71.6
- type: precision_at_100
value: 53.66
- type: precision_at_1000
value: 21.124000000000002
- type: precision_at_3
value: 72.667
- type: precision_at_5
value: 74.0
- type: recall_at_1
value: 0.189
- type: recall_at_10
value: 1.913
- type: recall_at_100
value: 12.601999999999999
- type: recall_at_1000
value: 44.296
- type: recall_at_3
value: 0.605
- type: recall_at_5
value: 1.018
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.701
- type: map_at_10
value: 10.445
- type: map_at_100
value: 17.324
- type: map_at_1000
value: 19.161
- type: map_at_3
value: 5.497
- type: map_at_5
value: 7.278
- type: mrr_at_1
value: 30.612000000000002
- type: mrr_at_10
value: 45.534
- type: mrr_at_100
value: 45.792
- type: mrr_at_1000
value: 45.806999999999995
- type: mrr_at_3
value: 37.755
- type: mrr_at_5
value: 43.469
- type: ndcg_at_1
value: 26.531
- type: ndcg_at_10
value: 26.235000000000003
- type: ndcg_at_100
value: 39.17
- type: ndcg_at_1000
value: 51.038
- type: ndcg_at_3
value: 23.625
- type: ndcg_at_5
value: 24.338
- type: precision_at_1
value: 30.612000000000002
- type: precision_at_10
value: 24.285999999999998
- type: precision_at_100
value: 8.224
- type: precision_at_1000
value: 1.6179999999999999
- type: precision_at_3
value: 24.490000000000002
- type: precision_at_5
value: 24.898
- type: recall_at_1
value: 2.701
- type: recall_at_10
value: 17.997
- type: recall_at_100
value: 51.766999999999996
- type: recall_at_1000
value: 87.863
- type: recall_at_3
value: 6.295000000000001
- type: recall_at_5
value: 9.993
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 73.3474
- type: ap
value: 15.393431414459924
- type: f1
value: 56.466681887882416
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 62.062818336163
- type: f1
value: 62.11230840463252
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 42.464892820845115
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.15962329379508
- type: cos_sim_ap
value: 74.73674057919256
- type: cos_sim_f1
value: 68.81245642574947
- type: cos_sim_precision
value: 61.48255813953488
- type: cos_sim_recall
value: 78.12664907651715
- type: dot_accuracy
value: 86.15962329379508
- type: dot_ap
value: 74.7367634988281
- type: dot_f1
value: 68.81245642574947
- type: dot_precision
value: 61.48255813953488
- type: dot_recall
value: 78.12664907651715
- type: euclidean_accuracy
value: 86.15962329379508
- type: euclidean_ap
value: 74.7367761466634
- type: euclidean_f1
value: 68.81245642574947
- type: euclidean_precision
value: 61.48255813953488
- type: euclidean_recall
value: 78.12664907651715
- type: manhattan_accuracy
value: 86.21326816474935
- type: manhattan_ap
value: 74.64416473733951
- type: manhattan_f1
value: 68.80924855491331
- type: manhattan_precision
value: 61.23456790123457
- type: manhattan_recall
value: 78.52242744063325
- type: max_accuracy
value: 86.21326816474935
- type: max_ap
value: 74.7367761466634
- type: max_f1
value: 68.81245642574947
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.97620988085536
- type: cos_sim_ap
value: 86.08680845745758
- type: cos_sim_f1
value: 78.02793637114438
- type: cos_sim_precision
value: 73.11082699683736
- type: cos_sim_recall
value: 83.65414228518632
- type: dot_accuracy
value: 88.97620988085536
- type: dot_ap
value: 86.08681149437946
- type: dot_f1
value: 78.02793637114438
- type: dot_precision
value: 73.11082699683736
- type: dot_recall
value: 83.65414228518632
- type: euclidean_accuracy
value: 88.97620988085536
- type: euclidean_ap
value: 86.08681215460771
- type: euclidean_f1
value: 78.02793637114438
- type: euclidean_precision
value: 73.11082699683736
- type: euclidean_recall
value: 83.65414228518632
- type: manhattan_accuracy
value: 88.88888888888889
- type: manhattan_ap
value: 86.02916327562438
- type: manhattan_f1
value: 78.02063045516843
- type: manhattan_precision
value: 73.38851947346994
- type: manhattan_recall
value: 83.2768709578072
- type: max_accuracy
value: 88.97620988085536
- type: max_ap
value: 86.08681215460771
- type: max_f1
value: 78.02793637114438
---
# jina-embeddings-v2-base-en-GGUF
**Model creator**: [jinaai](https://huggingface.co/jinaai)<br/>
**Original model**: [jina-embeddings-v2-base-en](https://huggingface.co/jinaai/jina-embeddings-v2-base-en)<br/>
**GGUF quantization**: based on llama.cpp release [61408e7f](https://github.com/ggerganov/llama.cpp/commit/61408e7fad082dc44a11c8a9f1398da4837aad44)
---
<!-- TODO: add evaluation results here -->
<br><br>
<p align="center">
<img src="https://aeiljuispo.cloudimg.io/v7/https://cdn-uploads.huggingface.co/production/uploads/603763514de52ff951d89793/AFoybzd5lpBQXEBrQHuTt.png?w=200&h=200&f=face" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>
<p align="center">
<b>The text embedding set trained by <a href="https://jina.ai/"><b>Jina AI</b></a>.</b>
</p>
## Quick Start
The easiest way to starting using `jina-embeddings-v2-base-en` is to use Jina AI's [Embedding API](https://jina.ai/embeddings/).
## Intended Usage & Model Info
`jina-embeddings-v2-base-en` is an English, monolingual **embedding model** supporting **8192 sequence length**.
It is based on a BERT architecture (JinaBERT) that supports the symmetric bidirectional variant of [ALiBi](https://arxiv.org/abs/2108.12409) to allow longer sequence length.
The backbone `jina-bert-v2-base-en` is pretrained on the C4 dataset.
The model is further trained on Jina AI's collection of more than 400 millions of sentence pairs and hard negatives.
These pairs were obtained from various domains and were carefully selected through a thorough cleaning process.
The embedding model was trained using 512 sequence length, but extrapolates to 8k sequence length (or even longer) thanks to ALiBi.
This makes our model useful for a range of use cases, especially when processing long documents is needed, including long document retrieval, semantic textual similarity, text reranking, recommendation, RAG and LLM-based generative search, etc.
With a standard size of 137 million parameters, the model enables fast inference while delivering better performance than our small model. It is recommended to use a single GPU for inference.
Additionally, we provide the following embedding models:
- [`jina-embeddings-v2-small-en`](https://huggingface.co/jinaai/jina-embeddings-v2-small-en): 33 million parameters.
- [`jina-embeddings-v2-base-en`](https://huggingface.co/jinaai/jina-embeddings-v2-base-en): 137 million parameters **(you are here)**.
- [`jina-embeddings-v2-base-zh`](https://huggingface.co/jinaai/jina-embeddings-v2-base-zh): Chinese-English Bilingual embeddings.
- [`jina-embeddings-v2-base-de`](https://huggingface.co/jinaai/jina-embeddings-v2-base-de): German-English Bilingual embeddings.
- [`jina-embeddings-v2-base-es`](https://huggingface.co/jinaai/jina-embeddings-v2-base-es): Spanish-English Bilingual embeddings.
## Data & Parameters
Jina Embeddings V2 [technical report](https://arxiv.org/abs/2310.19923)
## Usage
**<details><summary>Please apply mean pooling when integrating the model.</summary>**
<p>
### Why mean pooling?
`mean poooling` takes all token embeddings from model output and averaging them at sentence/paragraph level.
It has been proved to be the most effective way to produce high-quality sentence embeddings.
We offer an `encode` function to deal with this.
However, if you would like to do it without using the default `encode` function:
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0]
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
sentences = ['How is the weather today?', 'What is the current weather like today?']
tokenizer = AutoTokenizer.from_pretrained('jinaai/jina-embeddings-v2-small-en')
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-small-en', trust_remote_code=True)
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
model_output = model(**encoded_input)
embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
embeddings = F.normalize(embeddings, p=2, dim=1)
```
</p>
</details>
You can use Jina Embedding models directly from transformers package.
```python
!pip install transformers
from transformers import AutoModel
from numpy.linalg import norm
cos_sim = lambda a,b: (a @ b.T) / (norm(a)*norm(b))
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-en', trust_remote_code=True) # trust_remote_code is needed to use the encode method
embeddings = model.encode(['How is the weather today?', 'What is the current weather like today?'])
print(cos_sim(embeddings[0], embeddings[1]))
```
If you only want to handle shorter sequence, such as 2k, pass the `max_length` parameter to the `encode` function:
```python
embeddings = model.encode(
['Very long ... document'],
max_length=2048
)
```
Using the its latest release (v2.3.0) sentence-transformers also supports Jina embeddings (Please make sure that you are logged into huggingface as well):
```python
!pip install -U sentence-transformers
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
"jinaai/jina-embeddings-v2-base-en", # switch to en/zh for English or Chinese
trust_remote_code=True
)
# control your input sequence length up to 8192
model.max_seq_length = 1024
embeddings = model.encode([
'How is the weather today?',
'What is the current weather like today?'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
## Alternatives to Using Transformers (or SentencTransformers) Package
1. _Managed SaaS_: Get started with a free key on Jina AI's [Embedding API](https://jina.ai/embeddings/).
2. _Private and high-performance deployment_: Get started by picking from our suite of models and deploy them on [AWS Sagemaker](https://aws.amazon.com/marketplace/seller-profile?id=seller-stch2ludm6vgy).
## Use Jina Embeddings for RAG
According to the latest blog post from [LLamaIndex](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83),
> In summary, to achieve the peak performance in both hit rate and MRR, the combination of OpenAI or JinaAI-Base embeddings with the CohereRerank/bge-reranker-large reranker stands out.
<img src="https://miro.medium.com/v2/resize:fit:4800/format:webp/1*ZP2RVejCZovF3FDCg-Bx3A.png" width="780px">
## Plans
1. Bilingual embedding models supporting more European & Asian languages, including Spanish, French, Italian and Japanese.
2. Multimodal embedding models enable Multimodal RAG applications.
3. High-performt rerankers.
## Trouble Shooting
**Loading of Model Code failed**
If you forgot to pass the `trust_remote_code=True` flag when calling `AutoModel.from_pretrained` or initializing the model via the `SentenceTransformer` class, you will receive an error that the model weights could not be initialized.
This is caused by tranformers falling back to creating a default BERT model, instead of a jina-embedding model:
```bash
Some weights of the model checkpoint at jinaai/jina-embeddings-v2-base-en were not used when initializing BertModel: ['encoder.layer.2.mlp.layernorm.weight', 'encoder.layer.3.mlp.layernorm.weight', 'encoder.layer.10.mlp.wo.bias', 'encoder.layer.5.mlp.wo.bias', 'encoder.layer.2.mlp.layernorm.bias', 'encoder.layer.1.mlp.gated_layers.weight', 'encoder.layer.5.mlp.gated_layers.weight', 'encoder.layer.8.mlp.layernorm.bias', ...
```
**User is not logged into Huggingface**
The model is only availabe under [gated access](https://huggingface.co/docs/hub/models-gated).
This means you need to be logged into huggingface load load it.
If you receive the following error, you need to provide an access token, either by using the huggingface-cli or providing the token via an environment variable as described above:
```bash
OSError: jinaai/jina-embeddings-v2-base-en is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
```
## Contact
Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.
## Citation
If you find Jina Embeddings useful in your research, please cite the following paper:
```
@misc{günther2023jina,
title={Jina Embeddings 2: 8192-Token General-Purpose Text Embeddings for Long Documents},
author={Michael Günther and Jackmin Ong and Isabelle Mohr and Alaeddine Abdessalem and Tanguy Abel and Mohammad Kalim Akram and Susana Guzman and Georgios Mastrapas and Saba Sturua and Bo Wang and Maximilian Werk and Nan Wang and Han Xiao},
year={2023},
eprint={2310.19923},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
gldevelops/Llama-3.2-1B-Instruct-sensitivity
|
gldevelops
| 2024-11-01T01:42:58Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-31T10:35:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/chinese-alpaca-2-7b-dpo-v0.1-GGUF
|
mradermacher
| 2024-11-01T01:34:11Z | 60 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:renyiyu/chinese-alpaca-2-7b-dpo-v0.1",
"base_model:quantized:renyiyu/chinese-alpaca-2-7b-dpo-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-31T23:17:42Z |
---
base_model: renyiyu/chinese-alpaca-2-7b-dpo-v0.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/renyiyu/chinese-alpaca-2-7b-dpo-v0.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/chinese-alpaca-2-7b-dpo-v0.1-GGUF/resolve/main/chinese-alpaca-2-7b-dpo-v0.1.Q2_K.gguf) | Q2_K | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/chinese-alpaca-2-7b-dpo-v0.1-GGUF/resolve/main/chinese-alpaca-2-7b-dpo-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/chinese-alpaca-2-7b-dpo-v0.1-GGUF/resolve/main/chinese-alpaca-2-7b-dpo-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/chinese-alpaca-2-7b-dpo-v0.1-GGUF/resolve/main/chinese-alpaca-2-7b-dpo-v0.1.Q3_K_L.gguf) | Q3_K_L | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/chinese-alpaca-2-7b-dpo-v0.1-GGUF/resolve/main/chinese-alpaca-2-7b-dpo-v0.1.IQ4_XS.gguf) | IQ4_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/chinese-alpaca-2-7b-dpo-v0.1-GGUF/resolve/main/chinese-alpaca-2-7b-dpo-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/chinese-alpaca-2-7b-dpo-v0.1-GGUF/resolve/main/chinese-alpaca-2-7b-dpo-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/chinese-alpaca-2-7b-dpo-v0.1-GGUF/resolve/main/chinese-alpaca-2-7b-dpo-v0.1.Q5_K_S.gguf) | Q5_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/chinese-alpaca-2-7b-dpo-v0.1-GGUF/resolve/main/chinese-alpaca-2-7b-dpo-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/chinese-alpaca-2-7b-dpo-v0.1-GGUF/resolve/main/chinese-alpaca-2-7b-dpo-v0.1.Q6_K.gguf) | Q6_K | 5.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/chinese-alpaca-2-7b-dpo-v0.1-GGUF/resolve/main/chinese-alpaca-2-7b-dpo-v0.1.Q8_0.gguf) | Q8_0 | 7.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/chinese-alpaca-2-7b-dpo-v0.1-GGUF/resolve/main/chinese-alpaca-2-7b-dpo-v0.1.f16.gguf) | f16 | 14.0 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Llama-3.2-3B-Apex-GGUF
|
mradermacher
| 2024-11-01T01:30:10Z | 119 | 2 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bunnycore/Llama-3.2-3B-Apex",
"base_model:quantized:bunnycore/Llama-3.2-3B-Apex",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-31T14:29:39Z |
---
base_model: bunnycore/Llama-3.2-3B-Apex
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bunnycore/Llama-3.2-3B-Apex
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-GGUF/resolve/main/Llama-3.2-3B-Apex.Q2_K.gguf) | Q2_K | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-GGUF/resolve/main/Llama-3.2-3B-Apex.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-GGUF/resolve/main/Llama-3.2-3B-Apex.Q3_K_M.gguf) | Q3_K_M | 2.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-GGUF/resolve/main/Llama-3.2-3B-Apex.Q3_K_L.gguf) | Q3_K_L | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-GGUF/resolve/main/Llama-3.2-3B-Apex.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-GGUF/resolve/main/Llama-3.2-3B-Apex.Q4_K_S.gguf) | Q4_K_S | 2.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-GGUF/resolve/main/Llama-3.2-3B-Apex.Q4_K_M.gguf) | Q4_K_M | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-GGUF/resolve/main/Llama-3.2-3B-Apex.Q5_K_S.gguf) | Q5_K_S | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-GGUF/resolve/main/Llama-3.2-3B-Apex.Q5_K_M.gguf) | Q5_K_M | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-GGUF/resolve/main/Llama-3.2-3B-Apex.Q6_K.gguf) | Q6_K | 3.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-GGUF/resolve/main/Llama-3.2-3B-Apex.Q8_0.gguf) | Q8_0 | 3.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-GGUF/resolve/main/Llama-3.2-3B-Apex.f16.gguf) | f16 | 7.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Llama-3.2-3B-Apex-i1-GGUF
|
mradermacher
| 2024-11-01T01:30:08Z | 30 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bunnycore/Llama-3.2-3B-Apex",
"base_model:quantized:bunnycore/Llama-3.2-3B-Apex",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-01T00:57:57Z |
---
base_model: bunnycore/Llama-3.2-3B-Apex
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/bunnycore/Llama-3.2-3B-Apex
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ1_S.gguf) | i1-IQ1_S | 1.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ1_M.gguf) | i1-IQ1_M | 1.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ2_S.gguf) | i1-IQ2_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ2_M.gguf) | i1-IQ2_M | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q2_K.gguf) | i1-Q2_K | 1.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ3_S.gguf) | i1-IQ3_S | 1.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ3_M.gguf) | i1-IQ3_M | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 2.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 2.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 2.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q4_0.gguf) | i1-Q4_0 | 2.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Apex-i1-GGUF/resolve/main/Llama-3.2-3B-Apex.i1-Q6_K.gguf) | i1-Q6_K | 3.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/llama213bTimeBook-GGUF
|
mradermacher
| 2024-11-01T01:21:08Z | 10 | 0 |
transformers
|
[
"transformers",
"gguf",
"autotrain",
"text-generation",
"en",
"base_model:Jimmyhd/llama213bTimeBook",
"base_model:quantized:Jimmyhd/llama213bTimeBook",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-01T00:30:27Z |
---
base_model: Jimmyhd/llama213bTimeBook
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- autotrain
- text-generation
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Jimmyhd/llama213bTimeBook
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama213bTimeBook-GGUF/resolve/main/llama213bTimeBook.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama213bTimeBook-GGUF/resolve/main/llama213bTimeBook.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama213bTimeBook-GGUF/resolve/main/llama213bTimeBook.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama213bTimeBook-GGUF/resolve/main/llama213bTimeBook.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama213bTimeBook-GGUF/resolve/main/llama213bTimeBook.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/llama213bTimeBook-GGUF/resolve/main/llama213bTimeBook.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama213bTimeBook-GGUF/resolve/main/llama213bTimeBook.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama213bTimeBook-GGUF/resolve/main/llama213bTimeBook.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/llama213bTimeBook-GGUF/resolve/main/llama213bTimeBook.Q5_K_M.gguf) | Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama213bTimeBook-GGUF/resolve/main/llama213bTimeBook.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama213bTimeBook-GGUF/resolve/main/llama213bTimeBook.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
autoprogrammer/CulturaX-zh-unsupervised-2
|
autoprogrammer
| 2024-11-01T01:18:46Z | 140 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-01T01:12:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MaziyarPanahi/Chili_Dog_8B-GGUF
|
MaziyarPanahi
| 2024-11-01T01:16:24Z | 34 | 0 | null |
[
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:FourOhFour/Chili_Dog_8B",
"base_model:quantized:FourOhFour/Chili_Dog_8B",
"region:us",
"conversational"
] |
text-generation
| 2024-11-01T00:47:19Z |
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: Chili_Dog_8B-GGUF
base_model: FourOhFour/Chili_Dog_8B
inference: false
model_creator: FourOhFour
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Chili_Dog_8B-GGUF](https://huggingface.co/MaziyarPanahi/Chili_Dog_8B-GGUF)
- Model creator: [FourOhFour](https://huggingface.co/FourOhFour)
- Original model: [FourOhFour/Chili_Dog_8B](https://huggingface.co/FourOhFour/Chili_Dog_8B)
## Description
[MaziyarPanahi/Chili_Dog_8B-GGUF](https://huggingface.co/MaziyarPanahi/Chili_Dog_8B-GGUF) contains GGUF format model files for [FourOhFour/Chili_Dog_8B](https://huggingface.co/FourOhFour/Chili_Dog_8B).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
Kerneld/roberta-base-klue-ynat-classification
|
Kerneld
| 2024-11-01T01:15:51Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-01T01:15:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
featherless-ai-quants/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-GGUF
|
featherless-ai-quants
| 2024-11-01T01:05:52Z | 5 | 0 | null |
[
"gguf",
"text-generation",
"base_model:ChaoticNeutrals/Hathor_Respawn-L3-8B-v0.8",
"base_model:quantized:ChaoticNeutrals/Hathor_Respawn-L3-8B-v0.8",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-01T00:35:24Z |
---
base_model: ChaoticNeutrals/Hathor_Respawn-L3-8B-v0.8
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# ChaoticNeutrals/Hathor_Respawn-L3-8B-v0.8 GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-GGUF/blob/main/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q8_0.gguf) | 8145.11 MB |
| Q4_K_S | [ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-GGUF/blob/main/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q4_K_S.gguf) | 4475.28 MB |
| Q2_K | [ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-GGUF/blob/main/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q2_K.gguf) | 3031.86 MB |
| Q6_K | [ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-GGUF/blob/main/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q6_K.gguf) | 6290.44 MB |
| Q3_K_M | [ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-GGUF/blob/main/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-GGUF/blob/main/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q3_K_S.gguf) | 3494.74 MB |
| Q3_K_L | [ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-GGUF/blob/main/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q3_K_L.gguf) | 4121.74 MB |
| Q4_K_M | [ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-GGUF/blob/main/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q4_K_M.gguf) | 4692.78 MB |
| Q5_K_S | [ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-GGUF/blob/main/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q5_K_S.gguf) | 5339.90 MB |
| Q5_K_M | [ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-GGUF/blob/main/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-Q5_K_M.gguf) | 5467.40 MB |
| IQ4_XS | [ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-GGUF/blob/main/ChaoticNeutrals-Hathor_Respawn-L3-8B-v0.8-IQ4_XS.gguf) | 4276.62 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
featherless-ai-quants/bunnycore-Cognitron-8B-GGUF
|
featherless-ai-quants
| 2024-11-01T01:04:52Z | 8 | 0 | null |
[
"gguf",
"text-generation",
"base_model:bunnycore/Cognitron-8B",
"base_model:quantized:bunnycore/Cognitron-8B",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-11-01T00:40:54Z |
---
base_model: bunnycore/Cognitron-8B
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# bunnycore/Cognitron-8B GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [bunnycore-Cognitron-8B-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Cognitron-8B-GGUF/blob/main/bunnycore-Cognitron-8B-Q8_0.gguf) | 8145.11 MB |
| Q4_K_S | [bunnycore-Cognitron-8B-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Cognitron-8B-GGUF/blob/main/bunnycore-Cognitron-8B-Q4_K_S.gguf) | 4475.28 MB |
| Q2_K | [bunnycore-Cognitron-8B-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Cognitron-8B-GGUF/blob/main/bunnycore-Cognitron-8B-Q2_K.gguf) | 3031.86 MB |
| Q6_K | [bunnycore-Cognitron-8B-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Cognitron-8B-GGUF/blob/main/bunnycore-Cognitron-8B-Q6_K.gguf) | 6290.44 MB |
| Q3_K_M | [bunnycore-Cognitron-8B-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Cognitron-8B-GGUF/blob/main/bunnycore-Cognitron-8B-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [bunnycore-Cognitron-8B-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Cognitron-8B-GGUF/blob/main/bunnycore-Cognitron-8B-Q3_K_S.gguf) | 3494.74 MB |
| Q3_K_L | [bunnycore-Cognitron-8B-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Cognitron-8B-GGUF/blob/main/bunnycore-Cognitron-8B-Q3_K_L.gguf) | 4121.74 MB |
| Q4_K_M | [bunnycore-Cognitron-8B-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Cognitron-8B-GGUF/blob/main/bunnycore-Cognitron-8B-Q4_K_M.gguf) | 4692.78 MB |
| Q5_K_S | [bunnycore-Cognitron-8B-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Cognitron-8B-GGUF/blob/main/bunnycore-Cognitron-8B-Q5_K_S.gguf) | 5339.90 MB |
| Q5_K_M | [bunnycore-Cognitron-8B-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Cognitron-8B-GGUF/blob/main/bunnycore-Cognitron-8B-Q5_K_M.gguf) | 5467.40 MB |
| IQ4_XS | [bunnycore-Cognitron-8B-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/bunnycore-Cognitron-8B-GGUF/blob/main/bunnycore-Cognitron-8B-IQ4_XS.gguf) | 4276.62 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-GGUF
|
mradermacher
| 2024-11-01T00:59:56Z | 187 | 2 |
transformers
|
[
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"maldv/badger-writer-llama-3-8b",
"vicgalle/Configurable-Hermes-2-Pro-Llama-3-8B",
"Orenguteng/Llama-3-8B-Lexi-Uncensored",
"abacusai/Llama-3-Smaug-8B",
"en",
"base_model:ZeroXClem/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B",
"base_model:quantized:ZeroXClem/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-31T04:29:19Z |
---
base_model: ZeroXClem/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- maldv/badger-writer-llama-3-8b
- vicgalle/Configurable-Hermes-2-Pro-Llama-3-8B
- Orenguteng/Llama-3-8B-Lexi-Uncensored
- abacusai/Llama-3-Smaug-8B
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ZeroXClem/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-GGUF/resolve/main/Llama-3-Aetheric-Hermes-Lexi-Smaug-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
autoprogrammer/CulturaX-zh-unsupervised-2000
|
autoprogrammer
| 2024-11-01T00:59:43Z | 196 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-01T00:57:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
piotrekgrl/llama381binstruct_summarize_short_merged
|
piotrekgrl
| 2024-11-01T00:59:13Z | 75 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-11-01T00:55:42Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf
|
RichardErkhov
| 2024-11-01T00:53:39Z | 7 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-10-31T20:32:27Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Starcannon-Unleashed-12B-v1.0 - GGUF
- Model creator: https://huggingface.co/VongolaChouko/
- Original model: https://huggingface.co/VongolaChouko/Starcannon-Unleashed-12B-v1.0/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Starcannon-Unleashed-12B-v1.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q2_K.gguf) | Q2_K | 4.46GB |
| [Starcannon-Unleashed-12B-v1.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q3_K_S.gguf) | Q3_K_S | 5.15GB |
| [Starcannon-Unleashed-12B-v1.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q3_K.gguf) | Q3_K | 5.67GB |
| [Starcannon-Unleashed-12B-v1.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q3_K_M.gguf) | Q3_K_M | 5.67GB |
| [Starcannon-Unleashed-12B-v1.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q3_K_L.gguf) | Q3_K_L | 6.11GB |
| [Starcannon-Unleashed-12B-v1.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.IQ4_XS.gguf) | IQ4_XS | 6.33GB |
| [Starcannon-Unleashed-12B-v1.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q4_0.gguf) | Q4_0 | 6.59GB |
| [Starcannon-Unleashed-12B-v1.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.IQ4_NL.gguf) | IQ4_NL | 6.65GB |
| [Starcannon-Unleashed-12B-v1.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q4_K_S.gguf) | Q4_K_S | 6.63GB |
| [Starcannon-Unleashed-12B-v1.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q4_K.gguf) | Q4_K | 6.96GB |
| [Starcannon-Unleashed-12B-v1.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q4_K_M.gguf) | Q4_K_M | 6.96GB |
| [Starcannon-Unleashed-12B-v1.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q4_1.gguf) | Q4_1 | 7.26GB |
| [Starcannon-Unleashed-12B-v1.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q5_0.gguf) | Q5_0 | 7.93GB |
| [Starcannon-Unleashed-12B-v1.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q5_K_S.gguf) | Q5_K_S | 7.93GB |
| [Starcannon-Unleashed-12B-v1.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q5_K.gguf) | Q5_K | 8.13GB |
| [Starcannon-Unleashed-12B-v1.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q5_K_M.gguf) | Q5_K_M | 8.13GB |
| [Starcannon-Unleashed-12B-v1.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q5_1.gguf) | Q5_1 | 8.61GB |
| [Starcannon-Unleashed-12B-v1.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q6_K.gguf) | Q6_K | 9.37GB |
| [Starcannon-Unleashed-12B-v1.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/VongolaChouko_-_Starcannon-Unleashed-12B-v1.0-gguf/blob/main/Starcannon-Unleashed-12B-v1.0.Q8_0.gguf) | Q8_0 | 12.13GB |
Original model description:
---
base_model:
- nothingiisreal/MN-12B-Starcannon-v3
- MarinaraSpaghetti/NemoMix-Unleashed-12B
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
---

Starcannon-Unleashed-12B-v1.0-GGUF
==================================
## Quantized
**GGUF:**
[VongolaChouko/Starcannon-Unleashed-12B-v1.0-GGUF](https://huggingface.co/VongolaChouko/Starcannon-Unleashed-12B-v1.0-GGUF)
[mradermacher/Starcannon-Unleashed-12B-v1.0-GGUF](https://huggingface.co/mradermacher/Starcannon-Unleashed-12B-v1.0-GGUF)
[bartowski/Starcannon-Unleashed-12B-v1.0-GGUF](https://huggingface.co/bartowski/Starcannon-Unleashed-12B-v1.0-GGUF)
HUGE THANKS TO [mradermacher](https://huggingface.co/mradermacher)!! ( ´•̥̥̥o•̥̥̥`)♡(˘̩̩̩̩̩̩ ⌂ ˘̩̩̩̩̩̩) Gosh dang, the fella is fast, I was shook! XD, and to the GOAT, the awesome [bartowski](https://huggingface.co/bartowski)! For their GGUF quantizations.
I was only able to test the model using Q6_K with 24576 context at most due to PC limitations, so please let me know how it fared for you. Hopefully it still works well with higher context!
Recommended settings are here: [**Settings**](https://huggingface.co/VongolaChouko/Starcannon-Unleashed-12B-v1.0#instruct)
## Sample Output

## Introduction
**WARNING: Ramblings incoming. Please continue scrolling down if you wish to skip the boring part ʱªʱªʱª(ᕑᗢूᓫ∗)**
Ohh boi, here we are! I'm very happy to share with you the result of countless hours bashing my head on the wall! *:・゚✧(=ఠ్ఠܫఠ్ఠ =)∫
To start up, I want to put a disclaimer. This is the first time I'm attempting to merge a model and I'm in no way an expert when it comes to coding. AT ALL. I believe I didn't understand what on earth I was looking at for like 70% of the time... Err, so there's that! I did test this model out rigorously after executing the merging codes, and so far I loved the results. I was honestly expecting the merge to absolutely fail and be totally incoherent, but thankfully not! The two days of not getting enough sleep is worth it ◝(˃̣̣̥▽˂̣̣̥)/
My goal was to hopefully create something that will get the best parts from each finetune/merge, where one model can cover for the other's weak points.
I am a VERY huge fan of [Starcannon v3](https://huggingface.co/nothingiisreal/MN-12B-Starcannon-v3) because of how in character its responses are. It just hits different. It's like the model is the character itself, not ACTING as the character. That's why it always feels sad whenever it starts deteriorating, like I'm observing my beloved character die. No matter what adjustment I did to the context, it won't stay coherent to reach 16K context. On the other hand, I love [NemoMix Unleashed](https://huggingface.co/MarinaraSpaghetti/NemoMix-Unleashed-12B) for its awesome stability at much longer contexts and its nature to progress the story forward even without prompting. It feels nice that it can stay coherent and stable even after reaching past the context size I set. I also find its ability to read between the lines great. So I figured, why not just marry the two to get the best of both worlds?
I would honestly love to do this again if I can because there's one too many times I found something I like in another model and then on another and wished so desperately they would just marry each other and have kids! XD
So please let me know how it fared for my first attempt!
I also want to learn how to finetune myself in addition to merging, but I don't think my PC is capable enough to endure it. I think it almost croaked on me when I did this merge, and my SDD cried, so maybe I'll just do it some other time when I have free time and more resources to spend.
And thus, I was finally able to merge my favorite models after hours of research, tutorials, asking annoying questions to the community (that no one replied to (´;︵;`)), and coding hell. Here we are!
**°˖✧It's all ABSOLUTELY worth it!✧˖°**
## Instruct
Both ChatML and Mistral should work fine. Personally, I tested this using ChatML. I found that I like the model's responses better when I use this format. Try to test it out and observe which one you like best. :D
## Settings
I recommend using these settings:
[Starcannon-Unleashed-12B-v1.0-ST-Formatting-2024-10-29.json](https://huggingface.co/VongolaChouko/Starcannon-Unleashed-12B-v1.0/blob/main/Starcannon-Unleashed-12B-v1.0-ST-Formatting-2024-10-29.json)
**IMPORTANT: Open Silly Tavern and use "Master Import", which can be found under "A" tab — Advanced Formatting. Replace the "INSERT WORLD HERE" placeholders with the world/universe in which your character belongs to. If not applicable, just remove that part.**

Temperature 1.15 - 1.25 is good, but lower should also work well, as long as you also tweak the Min P and XTC to ensure the model won't choke. Play around with it to see what suits your taste.
This is a modified version of MarinaraSpaghetti's Mistral-Small-Correct.json, transformed into ChatML.
You can find the original version here: [MarinaraSpaghetti/SillyTavern-Settings](https://huggingface.co/MarinaraSpaghetti/SillyTavern-Settings/tree/main/Customized)
## Tips
- Examples of Dialogue and First Message are very important. The model will copy the style you wrote in these sections. So for example, if you want short outputs, make Examples of Dialogue and First Message short, and if you want longer outputs, make sure your examples have full paragraphs, composed of several sentences.
- If your Examples of Dialogue and First Message are short/concise but the model still rambles, lower Temperature in small increments, but keep Min P and XTC as is first. Test the result and adjust them to your liking. If it still rambles make XTC Threshold higher.
- Utilize Author's Note In-chat @ Depth 2 as System if you want the instruction to have greater impact on the next response. If you want something exciting and spontaneous, you can try out this note I used when I tested out the model: "Scenario: Spontaneous. {{char}} has full autonomy to do anything they wish and progress the interaction in any way they like."
## Credits
A very huge thank you to [MarinaraSpaghetti](https://huggingface.co/MarinaraSpaghetti) and [Nothing is Real](https://huggingface.co/nothingiisreal)!! (灬^ω^灬)ノ~ ♡ (´。• ᵕ •。`) ♡
I really fell in love with your models and it inspired me to learn how to make this one, and boi was it worth it! °˖✧◝(TT▿TT)◜✧˖°
## Merge Details
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
### Merge Method
This model was merged using the della_linear merge method using G:\text-generation-webui\models\MarinaraSpaghetti_NemoMix-Unleashed-12B as a base.
### Models Merged
The following models were included in the merge:
* G:\text-generation-webui\models\Nothingiisreal_MN-12B-Starcannon-v3
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: G:\text-generation-webui\models\MarinaraSpaghetti_NemoMix-Unleashed-12B
dtype: bfloat16
merge_method: della_linear
parameters:
epsilon: 0.05
int8_mask: 1.0
lambda: 1.0
slices:
- sources:
- layer_range: [0, 40]
model: G:\text-generation-webui\models\MarinaraSpaghetti_NemoMix-Unleashed-12B
parameters:
density: 0.65
weight: 0.4
- layer_range: [0, 40]
model: G:\text-generation-webui\models\Nothingiisreal_MN-12B-Starcannon-v3
parameters:
density: 0.55
weight: 0.6
```
|
mradermacher/Qwen2.5-7B-task2-i1-GGUF
|
mradermacher
| 2024-11-01T00:42:12Z | 23 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:allknowingroger/Qwen2.5-7B-task2",
"base_model:quantized:allknowingroger/Qwen2.5-7B-task2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-01T00:28:11Z |
---
base_model: allknowingroger/Qwen2.5-7B-task2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/allknowingroger/Qwen2.5-7B-task2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-7B-task2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-task2-i1-GGUF/resolve/main/Qwen2.5-7B-task2.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/culturalmixer-GGUF
|
mradermacher
| 2024-11-01T00:38:08Z | 16 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:kevin009/culturalmixer",
"base_model:quantized:kevin009/culturalmixer",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-10-31T23:17:55Z |
---
base_model: kevin009/culturalmixer
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/kevin009/culturalmixer
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/culturalmixer-GGUF/resolve/main/culturalmixer.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/culturalmixer-GGUF/resolve/main/culturalmixer.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/culturalmixer-GGUF/resolve/main/culturalmixer.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/culturalmixer-GGUF/resolve/main/culturalmixer.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/culturalmixer-GGUF/resolve/main/culturalmixer.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/culturalmixer-GGUF/resolve/main/culturalmixer.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/culturalmixer-GGUF/resolve/main/culturalmixer.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/culturalmixer-GGUF/resolve/main/culturalmixer.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/culturalmixer-GGUF/resolve/main/culturalmixer.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/culturalmixer-GGUF/resolve/main/culturalmixer.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/culturalmixer-GGUF/resolve/main/culturalmixer.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-GGUF
|
mradermacher
| 2024-11-01T00:36:07Z | 89 | 1 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:TroyDoesAI/BlackSheep-Llama3.2-3B-Context_Obedient",
"base_model:quantized:TroyDoesAI/BlackSheep-Llama3.2-3B-Context_Obedient",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-31T14:28:04Z |
---
base_model: TroyDoesAI/BlackSheep-Llama3.2-3B-Context_Obedient
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TroyDoesAI/BlackSheep-Llama3.2-3B-Context_Obedient
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-Context_Obedient-GGUF/resolve/main/BlackSheep-Llama3.2-3B-Context_Obedient.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF
|
featherless-ai-quants
| 2024-11-01T00:23:00Z | 6 | 0 | null |
[
"gguf",
"text-generation",
"base_model:netcat420/MFANNv0.20.12",
"base_model:quantized:netcat420/MFANNv0.20.12",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-10-31T23:50:43Z |
---
base_model: netcat420/MFANNv0.20.12
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# netcat420/MFANNv0.20.12 GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| Q8_0 | [netcat420-MFANNv0.20.12-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q8_0.gguf) | 8145.11 MB |
| Q4_K_S | [netcat420-MFANNv0.20.12-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q4_K_S.gguf) | 4475.28 MB |
| Q2_K | [netcat420-MFANNv0.20.12-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q2_K.gguf) | 3031.86 MB |
| Q6_K | [netcat420-MFANNv0.20.12-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q6_K.gguf) | 6290.44 MB |
| Q3_K_M | [netcat420-MFANNv0.20.12-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q3_K_M.gguf) | 3832.74 MB |
| Q3_K_S | [netcat420-MFANNv0.20.12-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q3_K_S.gguf) | 3494.74 MB |
| Q3_K_L | [netcat420-MFANNv0.20.12-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q3_K_L.gguf) | 4121.74 MB |
| Q4_K_M | [netcat420-MFANNv0.20.12-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q4_K_M.gguf) | 4692.78 MB |
| Q5_K_S | [netcat420-MFANNv0.20.12-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q5_K_S.gguf) | 5339.90 MB |
| Q5_K_M | [netcat420-MFANNv0.20.12-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-Q5_K_M.gguf) | 5467.40 MB |
| IQ4_XS | [netcat420-MFANNv0.20.12-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/netcat420-MFANNv0.20.12-GGUF/blob/main/netcat420-MFANNv0.20.12-IQ4_XS.gguf) | 4276.62 MB |
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
marklicata/M365_demo_28k
|
marklicata
| 2024-11-01T00:22:26Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-31T22:08:04Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: M365_demo_28k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# M365_demo_28k
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1038
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.1227 | 1.0 | 3543 | 0.1402 |
| 0.0778 | 2.0 | 7086 | 0.1147 |
| 0.0355 | 3.0 | 10629 | 0.1038 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.1
|
Subsets and Splits
Filtered Qwen2.5 Distill Models
Identifies specific configurations of models by filtering cards that contain 'distill', 'qwen2.5', '7b' while excluding certain base models and incorrect model ID patterns, uncovering unique model variants.
Filtered Model Cards Count
Finds the count of entries with specific card details that include 'distill', 'qwen2.5', '7b' but exclude certain base models, revealing valuable insights about the dataset's content distribution.
Filtered Distill Qwen 7B Models
Filters for specific card entries containing 'distill', 'qwen', and '7b', excluding certain strings and patterns, to identify relevant model configurations.
Filtered Qwen-7b Model Cards
The query performs a detailed filtering based on specific keywords and excludes certain entries, which could be useful for identifying a specific subset of cards but does not provide deeper insights or trends.
Filtered Qwen 7B Model Cards
The query filters for specific terms related to "distilled" or "distill", "qwen", and "7b" in the 'card' column but excludes certain base models, providing a limited set of entries for further inspection.
Qwen 7B Distilled Models
The query provides a basic filtering of records to find specific card names that include keywords related to distilled Qwen 7b models, excluding a particular base model, which gives limited insight but helps in focusing on relevant entries.
Qwen 7B Distilled Model Cards
The query filters data based on specific keywords in the modelId and card fields, providing limited insight primarily useful for locating specific entries rather than revealing broad patterns or trends.
Qwen 7B Distilled Models
Finds all entries containing the terms 'distilled', 'qwen', and '7b' in a case-insensitive manner, providing a filtered set of records but without deeper analysis.
Distilled Qwen 7B Models
The query filters for specific model IDs containing 'distilled', 'qwen', and '7b', providing a basic retrieval of relevant entries but without deeper analysis or insight.
Filtered Model Cards with Distill Qwen2.
Filters and retrieves records containing specific keywords in the card description while excluding certain phrases, providing a basic count of relevant entries.
Filtered Model Cards with Distill Qwen 7
The query filters specific variations of card descriptions containing 'distill', 'qwen', and '7b' while excluding a particular base model, providing limited but specific data retrieval.
Distill Qwen 7B Model Cards
The query filters and retrieves rows where the 'card' column contains specific keywords ('distill', 'qwen', and '7b'), providing a basic filter result that can help in identifying specific entries.