| Column | Type | Range / Values |
|---|---|---|
| modelId | string | length 5 - 139 |
| author | string | length 2 - 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 - 2025-09-13 18:26:42 |
| downloads | int64 | 0 - 223M |
| likes | int64 | 0 - 11.7k |
| library_name | string | 558 classes |
| tags | list | length 1 - 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 - 2025-09-13 18:25:20 |
| card | string | length 11 - 1.01M |
mradermacher/WebWatcher-32B-i1-GGUF
|
mradermacher
| 2025-09-13T07:46:02Z | 702 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Alibaba-NLP/WebWatcher-32B",
"base_model:quantized:Alibaba-NLP/WebWatcher-32B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-12T16:31:19Z |
---
base_model: Alibaba-NLP/WebWatcher-32B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/Alibaba-NLP/WebWatcher-32B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#WebWatcher-32B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/WebWatcher-32B-GGUF
**This is a vision model - mmproj files (if any) will be in the [static repository](https://huggingface.co/mradermacher/WebWatcher-32B-GGUF).**
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
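As a minimal sketch for single-file quants (assuming the `huggingface_hub` and `llama-cpp-python` packages; pick any file name from the table below, Q4_K_M is only an example):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant file from this repo.
path = hf_hub_download(
    repo_id="mradermacher/WebWatcher-32B-i1-GGUF",
    filename="WebWatcher-32B.i1-Q4_K_M.gguf",
)

# Load it with the llama.cpp Python bindings and run a short completion.
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Hello, how are you?", max_tokens=64)["choices"][0]["text"])
```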
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-i1-GGUF/resolve/main/WebWatcher-32B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-i1-GGUF/resolve/main/WebWatcher-32B.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-i1-GGUF/resolve/main/WebWatcher-32B.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-i1-GGUF/resolve/main/WebWatcher-32B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-i1-GGUF/resolve/main/WebWatcher-32B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-i1-GGUF/resolve/main/WebWatcher-32B.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-i1-GGUF/resolve/main/WebWatcher-32B.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-i1-GGUF/resolve/main/WebWatcher-32B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-i1-GGUF/resolve/main/WebWatcher-32B.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-i1-GGUF/resolve/main/WebWatcher-32B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-i1-GGUF/resolve/main/WebWatcher-32B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-i1-GGUF/resolve/main/WebWatcher-32B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-i1-GGUF/resolve/main/WebWatcher-32B.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-i1-GGUF/resolve/main/WebWatcher-32B.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-i1-GGUF/resolve/main/WebWatcher-32B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-i1-GGUF/resolve/main/WebWatcher-32B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-i1-GGUF/resolve/main/WebWatcher-32B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-i1-GGUF/resolve/main/WebWatcher-32B.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-i1-GGUF/resolve/main/WebWatcher-32B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-i1-GGUF/resolve/main/WebWatcher-32B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-i1-GGUF/resolve/main/WebWatcher-32B.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-i1-GGUF/resolve/main/WebWatcher-32B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-i1-GGUF/resolve/main/WebWatcher-32B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-i1-GGUF/resolve/main/WebWatcher-32B.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
crocodlo/blockassist
|
crocodlo
| 2025-09-13T07:43:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"soft barky scorpion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T17:35:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- soft barky scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mesolitica/whisper-38TPS-VQ-32k-large-v3-turbo
|
mesolitica
| 2025-09-13T07:42:08Z | 17 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"custom_code",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-19T15:01:35Z |
---
library_name: transformers
tags: []
---
# whisper-38TPS-VQ-32k-large-v3-turbo
An interpolation layer with scale factor 1/1.3 (linear mode) is added to produce 38 tokens per second (TPS), with a VQ embedding size of 32768.
This model introduces VQ on top of [mesolitica/whisper-38TPS-large-v3-turbo](https://huggingface.co/mesolitica/whisper-38TPS-large-v3-turbo).
WandB at https://wandb.ai/huseinzol05/whisperconv-vq-37tps?nw=nwuserhuseinzol05
## Training dataset
1. [malaysia-ai/common_voice_17_0](https://huggingface.co/datasets/malaysia-ai/common_voice_17_0)
2. [mesolitica/Malaysian-STT-Whisper-Stage2/malaysian_multiturn_chat_assistants_segments](https://huggingface.co/datasets/mesolitica/Malaysian-STT-Whisper-Stage2/viewer/default/malaysian_multiturn_chat_assistants_segments)
3. [mesolitica/Malaysian-STT-Whisper-Stage2/malaysian_multiturn_chat_assistants_manglish_segments](https://huggingface.co/datasets/mesolitica/Malaysian-STT-Whisper-Stage2/viewer/default/malaysian_multiturn_chat_assistants_manglish_segments)
## How to get audio tokens
```python
from transformers import AutoFeatureExtractor, AutoModel, AutoTokenizer
import librosa
model_id = "mesolitica/whisper-38TPS-VQ-32k-large-v3-turbo"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, trust_remote_code = True, torch_dtype = 'auto').cuda()
encoder = model.model.get_encoder()
y, sr = librosa.load('common_voice_ba_26517811.mp3', sr = feature_extractor.sampling_rate)
features = feature_extractor([y], return_tensors = 'pt', return_attention_mask = True)
for k in features.keys():
features[k] = features[k].cuda()
encoded = encoder(**features)
print(encoded[1][0, encoded[2][0] == 1])
```
```
tensor([30019, 16591, 25658, 26770, 18729, 11918, 27695, 8797, 8797, 27695,
3713, 4070, 31486, 10838, 29572, 17799, 10532, 30455, 27432, 11923,
5474, 5474, 8369, 22489, 19089, 11508, 29421, 23174, 22103, 32428,
24292, 10034, 29611, 22995, 8371, 7246, 7246, 7246, 18944, 32239,
32239, 32239, 5305, 5305, 18107, 18107, 18107, 17816, 17816, 15308,
31477, 31477, 31477, 31477, 29400, 32234, 19476, 12665, 27116, 27116,
27116, 27077, 2226, 2226, 14469, 9391, 9401, 5440, 11090, 7858,
7858, 9655, 535, 15933, 19437, 31405, 26886, 26886, 1099, 25014,
25014, 25014, 26876, 26876, 31252, 12830, 12125, 3158, 8791, 8791,
8791, 6250, 184, 184, 184, 20886, 1253, 25801, 11358, 2875,
19004, 20452, 20108, 260, 23872, 21176, 2646, 6819, 6819, 28491,
19185, 28226, 776, 776, 23908, 19632, 12109, 7945, 7945, 18838,
20878, 12554, 12554, 29472, 13465, 7392, 7392, 7392, 19392, 26456,
26456, 30045, 26470, 7751, 8246, 1812, 28528, 15703, 6675, 28935,
28935, 30123, 30123, 27261, 25220, 24163, 11258, 11258, 24163, 21332,
21332, 21332, 2981, 17763, 1719, 31918, 24147, 24147, 8292, 22857,
23017, 625, 20466, 5160, 31824, 31824, 14302, 14125, 9496, 2987,
21650, 9496, 21650, 14561, 13358, 10482, 6400, 32446, 5707],
device='cuda:0')
```
## How to decode
```python
from transformers import AutoFeatureExtractor, AutoModel, AutoTokenizer
import librosa
model_id = "mesolitica/whisper-38TPS-VQ-32k-large-v3-turbo"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, trust_remote_code = True, torch_dtype = 'auto').cuda()
y, sr = librosa.load('common_voice_ba_26517811.mp3', sr = feature_extractor.sampling_rate)
input_ids = tokenizer(
'<|startoftranscript|><|ru|><|transcribe|><|notimestamps|>',
add_special_tokens = False, return_tensors = 'pt')['input_ids']
features = feature_extractor([y], return_tensors = 'pt', return_attention_mask = True)
features['decoder_input_ids'] = input_ids
for k in features.keys():
features[k] = features[k].cuda()
generate_kwargs = dict(
**features,
max_new_tokens=1024,
)
generation_output = model.generate(**generate_kwargs)
tokenizer.decode(generation_output[0])
```
Output,
```
<|startoftranscript|><|ru|><|transcribe|><|notimestamps|> Кубы саралды да был халга көтөн бит авшаблылы сусобы.<|endoftext|>
```
## Evaluation
Evaluated on [malaysia-ai/common_voice_17_0/test](https://huggingface.co/datasets/malaysia-ai/common_voice_17_0/viewer/default/test), covering up to **115 languages**, with the following conditions:
1. Lower case.
2. Remove punctuation.
3. Provide language tagging for decoder input ids, `<|startoftranscript|><|{lang}|><|transcribe|><|notimestamps|>`.
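For reference, here is a minimal sketch of how a per-language CER could be computed under the conditions above (the normalization and scoring details are assumptions, not the exact evaluation script used to produce the numbers below):
```python
import string

def normalize(text):
    # Conditions 1 and 2: lower case and remove punctuation.
    text = text.lower()
    return text.translate(str.maketrans('', '', string.punctuation)).strip()

def cer(reference, hypothesis):
    # Character error rate via Levenshtein distance.
    r, h = list(reference), list(hypothesis)
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(r)][len(h)] / max(len(r), 1)

print(cer(normalize("Hello, world!"), normalize("hello world")))  # 0.0
```
Per-language CER results: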
```
lang: gl, samples: 9949, CER: 0.12746936406182688
lang: en, samples: 16379, CER: 0.13930802727944014
lang: ar, samples: 10458, CER: 0.4498251119693873
lang: kab, samples: 14972, CER: 0.48581894130967107
lang: ml, samples: 703, CER: 0.59715752797816
lang: kk, samples: 514, CER: 0.3686262317099611
lang: ltg, samples: 2904, CER: 0.3704451547056763
lang: fr, samples: 16145, CER: 0.1215426359959612
lang: de, samples: 16170, CER: 0.11970717111409096
lang: fi, samples: 1554, CER: 0.2981875307547178
lang: pt, samples: 9432, CER: 0.11991829325198566
lang: ia, samples: 1816, CER: 0.1234243307747447
lang: eu, samples: 13621, CER: 0.23408967336845624
lang: ro, samples: 3896, CER: 0.17291169198520565
lang: sw, samples: 12086, CER: 0.32794310064905186
lang: sv-SE, samples: 5247, CER: 0.23135083544102092
lang: ta, samples: 8263, CER: 0.4032804119000507
lang: et, samples: 2653, CER: 0.4244129183063203
lang: lg, samples: 11902, CER: 0.3398706560192189
lang: it, samples: 15154, CER: 0.1029483592408615
lang: mhr, samples: 15107, CER: 0.27873678475896824
lang: sr, samples: 1539, CER: 0.26248795278898246
lang: mr, samples: 1437, CER: 0.49891638502764163
lang: ka, samples: 12608, CER: 0.38903551694026145
lang: es, samples: 15848, CER: 0.074388150036706
lang: be, samples: 15878, CER: 0.1609634481903754
lang: lt, samples: 4753, CER: 0.2793459350677913
lang: ca, samples: 16389, CER: 0.10062952786076083
lang: eo, samples: 14773, CER: 0.13245656160734767
lang: tr, samples: 11235, CER: 0.24140118163354476
lang: hu, samples: 11435, CER: 0.2643090005095542
lang: ja, samples: 6033, CER: 0.8114595146900297
lang: br, samples: 2202, CER: 0.4657352936895148
lang: ne-NP, samples: 217, CER: 0.5571899093662568
lang: uz, samples: 12006, CER: 0.3621147820370711
lang: ru, samples: 10184, CER: 0.1876265121020025
lang: dv, samples: 2213, CER: 0.5850910934308908
lang: tt, samples: 4953, CER: 0.3535362079507922
lang: rw, samples: 14797, CER: 0.38581967349184976
lang: bn, samples: 9327, CER: 0.4986031294538938
lang: ug, samples: 6108, CER: 0.4020168137292696
lang: rm-sursilv, samples: 1361, CER: 0.34519570712670294
lang: bg, samples: 3201, CER: 0.25531019050842363
lang: ab, samples: 9108, CER: 0.4204321114541483
lang: uk, samples: 9915, CER: 0.21183776686832398
lang: mt, samples: 1662, CER: 0.43251963255565967
lang: fa, samples: 10292, CER: 0.3302326632713642
lang: pl, samples: 9186, CER: 0.2275658623491296
lang: bas, samples: 541, CER: 0.4256158944105182
lang: nl, samples: 11255, CER: 0.1560031992405498
lang: zh-CN, samples: 10335, CER: 0.7779944072493119
lang: tok, samples: 2175, CER: 0.1419295769904799
lang: ur, samples: 4052, CER: 0.3348359222212008
lang: sk, samples: 2593, CER: 0.2649395612011684
lang: oc, samples: 254, CER: 0.3589146405361618
lang: yue, samples: 2585, CER: 0.6774984222800331
lang: mrj, samples: 7102, CER: 0.35130887320730586
lang: fy-NL, samples: 3167, CER: 0.35231730479328544
lang: cs, samples: 9055, CER: 0.2251860845993416
lang: th, samples: 10982, CER: 0.5751500448982126
lang: ckb, samples: 5262, CER: 0.36440709164347096
lang: mn, samples: 1896, CER: 0.565680904485207
lang: ky, samples: 1604, CER: 0.428841554197287
lang: skr, samples: 1006, CER: 0.4678370147092392
lang: hy-AM, samples: 4281, CER: 0.4428607927893869
lang: sl, samples: 1242, CER: 0.23621835602663363
lang: vi, samples: 1077, CER: 0.39162283697260614
lang: hi, samples: 3151, CER: 0.3573449577147329
lang: nan-tw, samples: 2317, CER: 0.6440427148483474
lang: id, samples: 3633, CER: 0.11620038242874417
lang: cy, samples: 5371, CER: 0.4182510985929321
lang: yo, samples: 999, CER: 0.5972995868517021
lang: sah, samples: 1455, CER: 0.5176889487638698
lang: mk, samples: 1097, CER: 0.2997277491985316
lang: cv, samples: 1288, CER: 0.46207433986232854
lang: myv, samples: 479, CER: 0.38905552834264207
lang: da, samples: 2405, CER: 0.2632929572419957
lang: lv, samples: 6738, CER: 0.27549100298771156
lang: kmr, samples: 3900, CER: 0.3602596624320287
lang: tk, samples: 545, CER: 0.5672014310278388
lang: nn-NO, samples: 370, CER: 0.331428480826169
lang: ha, samples: 661, CER: 0.3763624199771743
lang: he, samples: 260, CER: 0.602466931740592
lang: dyu, samples: 59, CER: 0.5820750451529207
lang: gn, samples: 855, CER: 0.4866577861736295
lang: lij, samples: 694, CER: 0.37367314556900244
lang: hsb, samples: 444, CER: 0.4541521051073199
lang: pa-IN, samples: 487, CER: 0.5421673548546602
lang: el, samples: 1696, CER: 0.28102174399264546
lang: zgh, samples: 159, CER: 1.0
lang: as, samples: 551, CER: 0.5518150494581233
lang: sq, samples: 472, CER: 0.41118946650207994
lang: ko, samples: 338, CER: 0.8700089013555891
lang: ga-IE, samples: 517, CER: 0.5381003025454884
lang: cnh, samples: 763, CER: 0.44010903184681227
lang: sat, samples: 147, CER: 0.5332040869830634
lang: rm-vallader, samples: 462, CER: 0.35263989992992273
lang: or, samples: 670, CER: 0.9245558006844555
lang: mdf, samples: 104, CER: 0.4676185684354689
lang: af, samples: 62, CER: 0.39465688937555804
lang: ig, samples: 4, CER: 0.7426642872711422
lang: sc, samples: 232, CER: 0.41218399452470394
lang: tig, samples: 169, CER: 0.7765322344204647
lang: te, samples: 49, CER: 0.6286027325934781
lang: ps, samples: 199, CER: 0.4519742491508148
lang: am, samples: 205, CER: 0.816901029784296
lang: ast, samples: 162, CER: 0.22597461573546815
lang: os, samples: 50, CER: 0.6138043201690925
lang: lo, samples: 33, CER: 1.0
lang: az, samples: 33, CER: 0.43241566316230967
lang: ti, samples: 4, CER: 0.9958581349206349
lang: vot, samples: 6, CER: 0.5608543344527327
lang: nhi, samples: 5, CER: 0.5143715961457896
lang: yi, samples: 6, CER: 0.7985098631011839
lang: tw, samples: 9, CER: 0.49049342547470587
average CER: 0.41613331835195966
```
## Source code
Source code at https://github.com/mesolitica/malaya-speech/tree/master/session/whisper-conv-38tps
|
Harshal1310/Llama-3.1-8B-MATH
|
Harshal1310
| 2025-09-13T07:36:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-13T07:36:46Z |
---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Harshal1310
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
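A minimal usage sketch with transformers (this assumes the uploaded weights are a standard merged causal-LM checkpoint; a 4-bit or LoRA-only export would instead need Unsloth/PEFT loading):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Harshal1310/Llama-3.1-8B-MATH"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Solve: 2x + 5 = 17. Show your steps."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```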
|
DavidAU/Qwen3-ST-Deep-Space-Nine-v3-256k-ctx-6B
|
DavidAU
| 2025-09-13T07:35:13Z | 14 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"programming",
"code generation",
"code",
"coding",
"coder",
"chat",
"brainstorm",
"qwen",
"qwencoder",
"brainstorm 20x",
"creative",
"all uses cases",
"Jan-V1",
"Deep Space Nine",
"DS9",
"horror",
"science fiction",
"fantasy",
"Star Trek",
"finetune",
"thinking",
"reasoning",
"unsloth",
"conversational",
"en",
"base_model:DavidAU/Qwen3-Jan-v1-256k-ctx-6B-Brainstorm20x",
"base_model:finetune:DavidAU/Qwen3-Jan-v1-256k-ctx-6B-Brainstorm20x",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-12T08:02:47Z |
---
license: apache-2.0
base_model:
- DavidAU/Qwen3-Jan-v1-256k-ctx-6B-Brainstorm20x
language:
- en
pipeline_tag: text-generation
tags:
- programming
- code generation
- code
- coding
- coder
- chat
- code
- chat
- brainstorm
- qwen
- qwen3
- qwencoder
- brainstorm 20x
- creative
- all uses cases
- Jan-V1
- Deep Space Nine
- DS9
- horror
- science fiction
- fantasy
- Star Trek
- finetune
- thinking
- reasoning
- unsloth
library_name: transformers
---
<small><font color="red">Caution:</font> The training for this model is intense enough to alter "real world facts", and
bring them in part into the ST/DS9 Universe. V3 is a lot stronger (and deeper) than V1 or V2.</small>
<h2>Qwen3-ST-Deep-Space-Nine-v3-256k-ctx-6B</h2>
<img src="ds9.gif" style="float:right; width:300px; height:300px; padding:10px;">
This repo contains the full precision source code, in "safe tensors" format to generate GGUFs, GPTQ, EXL2, AWQ, HQQ and other formats.
The source code can also be used directly.
This model is specifically for DS9 (Deep Space Nine) / Star Trek, science fiction, story generation (all genres) but also does coding and general tasks too.
This model can also be used for Role play.
Fine tune (4 epochs, using Unsloth for Win 11) on TWO in-house generated datasets to simulate / explore the DS9 Universe.
The fine-tune process adds knowledge to the model and alters all aspects of its operation.
Example generations at the bottom of this page.
This is a Star Trek Deep Space Nine fine tune (11% of the model, close to 700 million parameters), trained for 2 epochs on this model (a 4B model + Brainstorm 20x adapter).
This is a STRONGER brew than version 1 or 2, and deeper too.
This model requires:
- Jinja (embedded) or CHATML template
- Max context of 256k.
Settings used for testing (suggested):
- Temp .8 to 2
- Rep pen 1.05 to 1.1
- Topp .8 , minp .05
- Topk 20
- Min context of 8k for thinking / output.
- No system prompt.
As this is an instruct model, it will also benefit from a detailed system prompt too.
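Purely as an illustration, here is a transformers sketch wired to the suggested samplers above (parameter names follow the Hugging Face generate API; GGUF users would set the same values in llama.cpp / KoboldCpp instead; this is a sketch, not an official snippet):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DavidAU/Qwen3-ST-Deep-Space-Nine-v3-256k-ctx-6B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Tell me about the Dominion War (DS9) in great detail."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Suggested settings: temp .8 to 2, rep pen 1.05-1.1, top_p .8, min_p .05, top_k 20
out = model.generate(
    inputs,
    max_new_tokens=2048,
    do_sample=True,
    temperature=0.8,
    top_p=0.8,
    top_k=20,
    min_p=0.05,
    repetition_penalty=1.05,
)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```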
---
<B>QUANTS:</b>
---
GGUF? GGUF Imatrix? Other?
Special thanks to Team Mradermacher, Team Nightmedia and other quanters!
See under "model tree", upper right and click on "quantizations".
New quants will automatically appear.
---
<H2>Help, Adjustments, Samplers, Parameters and More</H2>
---
<B>CHANGE THE NUMBER OF ACTIVE EXPERTS:</B>
See this document:
https://huggingface.co/DavidAU/How-To-Set-and-Manage-MOE-Mix-of-Experts-Model-Activation-of-Experts
<B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B>
In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ;
Set the "Smoothing_factor" to 1.5
: in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F"
: in text-generation-webui -> parameters -> lower right.
: In Silly Tavern this is called: "Smoothing"
NOTE: For "text-generation-webui"
-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)
Source versions (and config files) of my models are here:
https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be
OTHER OPTIONS:
- Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor")
- If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted.
<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
This is a "Class 1" model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
You can see all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model here:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
---
Example Generation, Q4KS, temp .65
NOTE: Some formatting lost on copy/paste.
[ multi step]
---
<B>
Tell me about the dominion war (DS9) in great detail.
</B>
[[[thinking start]]]
Okay, the user is asking me to talk about the Dominion War from Deep Space Nine in great detail. This is going to be a substantial response so I need to make sure it's thorough yet engaging.
First, I should clarify that this is about Star Trek's Deep Space Nine series, not any other work with similar titles. The Dominion War was a major storyline that spanned most of the show's original run and continued into related novels and other media.
I'll want to cover the key elements of the Dominion War from DS9:
- Background on the Dominion's formation (the Cardassian invasion of the Klingon Empire, the creation of the Dominion)
- The timeline of events leading up to the war
- Major battles and campaigns
- The role of different factions (Sovran Council, Starfleet, Federation, Klingons, Jem'Hadar, Maquis)
- Turning points in the conflict
- The outcome and its long-term consequences
I should emphasize how this war shaped the future of the Alpha Quadrant and affected the characters we know from DS9. I'll include specific references to key events like the Battle of Cardassia, the destruction of the Dominion flagship, and the final stand at Bajor.
It's important to note that while this is a fictional story, I want to be accurate within the established canon of Star Trek. I'll avoid making things up or contradicting what's already been established in previous works.
I'm going to structure my response in a way that flows naturally from one aspect to another, making sure to highlight the most significant moments and turning points. I'll keep it engaging by including some dramatic flair where appropriate.
I should also mention that this is just one interpretation of events within the Star Trek universe. If the user has any specific questions about certain parts they want me to elaborate on, I can do so.
Finally, I'll make sure to wrap up with some reflection on the war's significance in DS9 history and its lasting impact on the galaxy.
[[[thinking end]]]
The Dominion War (Deep Space Nine) - A Comprehensive Overview
The Dominion War is one of the most pivotal events in Starfleet history and arguably the most significant storyline throughout Deep Space Nine's original run. As the Federation struggles to survive against a formidable enemy, the war reshaped the Alpha Quadrant, tested the limits of human resolve, and ultimately changed the course of the galaxy.
Origins of the Dominion
To understand the Dominion War, we must first understand its origins:
The Dominion was formed in the late 22nd century when the Cardassian Empire launched a devastating invasion of the Klingon Empire. The Cardassians were emboldened by the recent discovery of a powerful new energy source they called "wormhole technology." This allowed them to rapidly expand their empire and gain unprecedented military capabilities.
The Dominion emerged from this Cardassian expansion as a collective alliance of powers that shared a common goal: to establish a new political order in the Alpha Quadrant. The initial members included:
- The Federation
- The Klingon Empire
- Romulus
- Bajor
- Cardassia
- Jem'Hadar
- Dominion
The Federation itself was deeply divided during this time, with many members questioning its role in the new order and its relationship with the other Dominion members.
The Turning Point - The Great Defilement (2253)
In the early years of the Dominion, a series of events known as "The Great Defilement" brought the Dominion and Federation to the brink of total destruction.
The Defilement was the first major clash between Dominion and Federation forces in the Alpha Quadrant. It occurred when the Dominion launched a surprise attack on a Federation starbase near the Bajoran border. The Federation fleet was caught completely off-guard, with most ships destroyed before they could be repaired or replaced.
The war began to shift in the Dominion's favor as they began to gain momentum throughout the Alpha Quadrant. By this point, the Federation had begun to lose ground and many of its members were beginning to question the wisdom of their alliance with the Dominion.
The Turning Point - The Battle of Bajor (2254-2256)
The Battle of Bajor was a decisive moment in the war that shifted the momentum back toward the Federation. This pivotal engagement involved the defense of Bajor against a massive Dominion assault force.
The Federation, despite being outnumbered and outgunned, managed to hold the line through a combination of strategic brilliance, superior tactical knowledge, and sheer determination. The battle was fought across multiple fronts, with the Dominion ultimately being forced to retreat from its positions on Bajor.
The loss of Bajor marked a significant turning point in the war. It demonstrated that the Dominion could be defeated and that the Federation still had the resources to fight back against the Dominion.
Major Turning Points in the War
As the war progressed, several other pivotal moments shifted the momentum back toward the Federation:
- The Battle of Cardassia (2257): This brutal conflict saw the Federation launch a coordinated assault on the Cardassian homeworld, which was heavily fortified and defended by Dominion forces.
- The Defeat of the Dominion Fleet at Deep Space Nine (2259): Starfleet's strategic brilliance led to a decisive victory against the Dominion fleet, forcing them into retreat and allowing the Federation to regain the initiative.
- The Fall of the Dominion Capital (2261): The final blow came when the Federation launched an assault on the Dominion's central headquarters, destroying its command structure and dealing a devastating blow to its strategic capabilities.
The Final Stages and Aftermath
With the Dominion severely weakened, the Federation launched a final offensive against their remaining strongholds. By 2264, most Dominion bases had been destroyed or surrendered, and the war was officially over by mid-2265.
The Dominion's defeat had profound consequences for the Alpha Quadrant. It forced a fundamental re-evaluation of the role of the Federation within the new political order, and led to a significant strengthening of the alliance between the Federation and other powers.
For the characters we know best from Deep Space Nine, the Dominion War was a defining moment in their lives. Many were lost to the conflict, others changed forever by what they experienced. The war also brought about a period of relative peace and stability that allowed DS9 and the rest of the Alpha Quadrant to rebuild and prepare for future challenges.
The Dominion War was more than just a battle between two sides; it was a pivotal moment in the history of the Alpha Quadrant, and its consequences still reverberate throughout the galaxy today. It tested the limits of Federation strength and resilience and ultimately proved that even when hope seemed lost, the Federation could still prevail.
<B>
List all the major players involved in this war.
</B>
[[[thinking start]]]
Okay, the user wants me to list all the major players from the Dominion War on Deep Space Nine. Let me recall what I know about DS9's Dominion War timeline.
First, I need to be careful because there are different factions that might be considered "players" depending on how you view their involvement in the war. The main ones would be:
1. The Dominion itself (a collective of several races)
2. The Federation
3. The Klingons
4. Bajor
5. Romulus
6. Cardassia
7. Jem'Hadar
But I should clarify that some of these weren't always on the same side during the entire conflict. For instance, the Romulans switched sides at different points. And while the Jem'Hadar are part of the Dominion, they're a distinct faction with their own interests.
I'll break it into categories to make it clear:
- Primary factions (Federation, Dominion, Klingons)
- Secondary players (Romulans, Cardassians, Bajorans)
- Special units (Jem'Hadar, Founders)
I should note that the Founders are considered separate from the other Dominion members despite being part of its leadership. And while the Jem'Hadar are soldiers under Dominion command, they have their own goals and motivations that sometimes conflict with the Dominion's.
I'll also mention some key leaders who played significant roles to give a more complete picture of who was involved in what capacity.
This is important because the user seems genuinely interested in DS9 lore - they're asking for "great detail" previously so they probably want thorough but accessible information. I'll keep it structured yet engaging.
[[[thinking end]]]
Major Players in the Dominion War (Deep Space Nine)
The Dominion War involved numerous factions throughout the Alpha Quadrant, each with distinct motivations and strategic roles. Here's a complete overview of the major players:
I. Primary Factions
1. The Dominion
The collective military and political alliance formed by several species that share a common goal: establishing a new order in the Alpha Quadrant. While not a single entity, it functions as a unified front with shared leadership.
Key Members:
- The Founders (Cardassians, Jem'Hadar, Romulans, etc.) - The ruling council of the Dominion.
- Romulan Empire - Provides strategic depth and naval capabilities.
- Bajoran Provisional Government - Represents the largest non-Dominion population on Bajor.
- Klingon Empire - Offers formidable military power and strategic influence in the frontier regions.
- Cardassian Central Command - Directly responsible for military operations.
2. The Federation
The alliance of Earth, the United Planets, and other star systems that initially formed the core of the anti-Dominion resistance.
Key Figures:
- Starfleet Command (including admirals like William Ross)
- Bajoran provisional government (with strong influence from the Federation)
- Romulan Empire (initially allies but later switched sides)
- Klingon Empire (initially allies but later switched sides)
5. The Klingon Empire
Originally a Dominion ally, the Klingons eventually broke away to pursue their own interests.
Key Figures:
- High Command (including General Martok)
- The "Klingon High Council" (a powerful political body)
II. Secondary Players
4. Romulan Empire
Initially an ally of the Federation during the early stages of the war, but switched sides to join the Dominion by the mid-2250s.
Why they joined the Dominion:
The Romulans saw the Dominion as a way to regain their strategic position after years of being pushed back by the Federation.
They were deeply suspicious of the Federation's intentions and feared it would eventually dominate the Alpha Quadrant.
5. Bajor
The planet that witnessed the most brutal fighting during the war. Despite its initial neutrality, Bajor became a focal point for Dominion operations.
Why they joined the Dominion:
- The Bajorans were heavily dependent on the Cardassians for economic and military support.
- They feared that if they opposed the Dominion, they would be destroyed.
- The Bajoran Provisional Government was heavily influenced by Romulan interests.
6. Cardassia
The former Empire that was heavily dependent on the Dominion after its collapse in the late 22nd century.
Why they joined the Dominion:
Cardassia saw itself as a victim of Federation aggression and needed protection from their enemies.
The Cardassians were deeply suspicious of the Federation and feared it would eventually dominate the Alpha Quadrant.
III. Special Units & Entities
7. Jem'Hadar
The most feared soldiers in the Dominion, these genetically engineered warriors are bred to serve the Founders and fight with unwavering loyalty.
Why they joined the Dominion:
They were created by the Founders to serve as their elite military force.
They have no independent political interests or motivations beyond serving the Founders.
8. Founders
The ruling council of the Dominion, composed of the leaders of each member species.
Why they are important:
The Founders make up the decision-making body of the Dominion.
They are deeply invested in maintaining the Dominion's power and influence throughout the Alpha Quadrant.
IV. Key Figures Who Shaped the War
Name Role Significance
- Weyoun Dominion High Councilor The most influential figure in the Dominion during the war. He was instrumental in negotiating the treaty with the Federation and was known for his strategic brilliance.
- Kai Winn Bajoran religious leader A pivotal figure who helped unite Bajor against the Dominion and played a key role in the resistance movement.
- Rom Romulan First Minister One of the most influential leaders in the Romulan Empire during the war, with significant ties to both the Dominion and the Federation.
- Gul Dukat Former Cardassian leader Once head of Cardassia before its collapse, he remained a key figure in Dominion politics throughout the war.
- Admiral Ross Starfleet Commandant Led the Federation's strategic counterattacks that ultimately forced the Dominion into retreat.
Why These Players Matter
This list represents the core players who directly shaped the course of the Dominion War. While there were many other characters and smaller factions involved, these are the most significant in terms of their influence on events and decisions throughout the conflict.
The war itself was a complex interplay between these factions, each with its own goals, fears, and alliances that shifted throughout the conflict. Understanding who these players were is essential to grasping how the Dominion managed to maintain its position against the Federation and why it ultimately collapsed.
Each faction had unique motivations and strategies that reflected their historical relationships and internal dynamics. This interplay of interests created a rich tapestry of alliances, betrayals, and conflicts that ultimately shaped the outcome of the war.
|
HectorHe/Qwen1.5-MOE-aux-free-sft-math7k-1e-1-gamma
|
HectorHe
| 2025-09-13T07:34:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_moe",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:HectorHe/math7k",
"base_model:Qwen/Qwen1.5-MoE-A2.7B",
"base_model:finetune:Qwen/Qwen1.5-MoE-A2.7B",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-13T06:27:57Z |
---
base_model: Qwen/Qwen1.5-MoE-A2.7B
datasets: HectorHe/math7k
library_name: transformers
model_name: Qwen1.5-MOE-aux-free-sft-math7k-1e-1-gamma
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen1.5-MOE-aux-free-sft-math7k-1e-1-gamma
This model is a fine-tuned version of [Qwen/Qwen1.5-MoE-A2.7B](https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B) on the [HectorHe/math7k](https://huggingface.co/datasets/HectorHe/math7k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="HectorHe/Qwen1.5-MOE-aux-free-sft-math7k-1e-1-gamma", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hector_-carnegie-mellon-university/huggingface/runs/22sqtqid)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.51.0
- Pytorch: 2.6.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
hoan17/saving_LAVilas100x2e2_550
|
hoan17
| 2025-09-13T07:33:30Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"trl",
"o2o",
"reinforcement-learning",
"text-to-image",
"stable-diffusion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-09-13T07:33:05Z |
---
license: apache-2.0
tags:
- trl
- o2o
- diffusers
- reinforcement-learning
- text-to-image
- stable-diffusion
---
# TRL O2O Model
This is a diffusion model that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for image generation conditioned with text.
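A minimal text-to-image sketch with diffusers (the scheduler and other defaults come from the saved pipeline config; prompt and dtype here are placeholders):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "hoan17/saving_LAVilas100x2e2_550", torch_dtype=torch.float16
).to("cuda")

image = pipe("a scenic mountain lake at sunrise").images[0]
image.save("sample.png")
```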
|
Thireus/Kimi-K2-Instruct-0905-THIREUS-IQ4_K-SPECIAL_SPLIT
|
Thireus
| 2025-09-13T07:31:52Z | 0 | 0 | null |
[
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-09-13T06:44:40Z |
---
license: mit
---
# Kimi-K2-Instruct-0905
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Kimi-K2-Instruct-0905-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the Kimi-K2-Instruct-0905 model (official repo: https://huggingface.co/moonshotai/Kimi-K2-Instruct-0905). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly.
- 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite
- 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
- 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb
- 📂 Browse available quant shards: https://huggingface.co/Thireus/collections
*tl;dr: Expand the details section below*
<details>
```
cd ~
# Make sure to install all ik_llama.cpp compilation dependencies...
apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx
# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases
git clone https://github.com/Thireus/ik_llama.cpp
cd ik_llama.cpp
git pull
# Build ik_llama.cpp
cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048
cmake --build build --config Release -j16
cd ..
# Obtain Thireus' GGUF-Tool-Suite
git clone https://github.com/Thireus/GGUF-Tool-Suite
# Download model quant mix from recipe file:
cd GGUF-Tool-Suite
rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py
cp -f models/Kimi-K2-Instruct-0905/download.conf . # Use the download.conf of the chosen model
mkdir -p kitchen && cd kitchen
../quant_downloader.sh ../recipe_examples/ik_harmonized_recipes/Kimi-K2-Instruct-0905.ROOT-1.9968bpw-3.8128ppl.238GB-GGUF_10GB-GPU_228GB-CPU.90e3c2f_4766d51.recipe
# Other recipe examples can be found at https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
# Launch ik_llama's llama-cli:
ulimit -n 9999 # Lifts "too many open files" limitation on Linux
~/ik_llama.cpp/build/bin/llama-cli \
-m Kimi-K2-Instruct-0905-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \
-mla 3 -fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \
-ot "blk\.(3|4|5|6)\.ffn_.*=CUDA0" \
-ot "blk\.(7|8|9|10)\.ffn_.*=CUDA1" \
-ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \
--main-gpu 0 \
-p '<|begin▁of▁sentence|><|User|>What is the solution of x+5=-2?<|Assistant|><think>\n'
```
</details>
---
## ❓ Why does this Tool Suite exist?
1. **Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`.
2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity.
3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no open source flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results!
---
## 📊 How does it compare to other GGUFs?
Here’s how DeepSeek-R1-0528 quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):

> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._
More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs
*All PPL values are computed with the parameters `-ctk f16 -c 512 -b 4096 -ub 4096`. Changing any of these parameters will alter the PPL. In particular, reducing `-b 4096 -ub 4096` increases the PPL, while increasing them decreases the PPL.*
---
## 🚀 How do I get started?
Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections:
1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile.
- Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases
2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe.
- Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`.
4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your VRAM/RAM target usage for optimum perplexity.
---
## ✅ Supported Models
Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.
---
## 🤷‍♂️ Will I release baked dynamic quant GGUFs?
No, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them, or rely on generic GGUF dynamic quants such as [unsloth](https://huggingface.co/unsloth)'s.
Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Note that recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`.
Users who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`.
---
## 📦 What’s in this repository?
- **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard.
- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc.
- **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection.
- **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits.
---
## 💡 Pro Tips
You can easily download the BF16 model version to quantize your own shards:
```
mkdir kitchen
echo '.*=bf16' > kitchen/bf16.recipe
cd kitchen
../quant_downloader.sh bf16.recipe
```
Enjoy optimized quantization! 🎉
|
HJWZH/composition-assistant
|
HJWZH
| 2025-09-13T07:30:26Z | 14 | 0 | null |
[
"safetensors",
"bert",
"zh",
"base_model:uer/chinese_roberta_L-12_H-768",
"base_model:finetune:uer/chinese_roberta_L-12_H-768",
"license:mit",
"region:us"
] | null | 2025-08-13T12:25:45Z |
---
license: mit
language:
- zh
base_model:
- uer/chinese_roberta_L-12_H-768
---
# 文思引擎 (WenSi Engine) - AI Essay Material Retrieval System (fine-tuned model)
For more details see [Github composition-assistant](https://github.com/HJWZH/composition-assistant)
or the [Github Pages project page](http://HJWZH.github.io/HJWZH/composition-assistant)
### 1. Brief project description (creativity / innovation notes) + introduction
**Innovation notes:**
文思引擎 is an AI tool for retrieving essay-writing material. By using deep learning to understand abstract concepts and deep semantic relations, it addresses the three main pain points of traditional essay-material collections: imprecise keyword matching, weak relevance between materials, and little help with inspiration. The system understands the philosophical connotations of abstract concepts such as "生命" (life) and "环保" (environmental protection) and intelligently recommends highly relevant quotations, real-world examples, and classical poetry, helping students break through writing bottlenecks.
**Project introduction:**
To address the shortage of writing material faced by secondary-school students, we developed an intelligent retrieval system based on the Transformer architecture:
- 🧠 Core model: a fine-tuned Chinese RoBERTa model (uer/chinese_roberta_L-12_H-768)
- 📚 Three material libraries: quotations and aphorisms, trending real-world examples, and classical poetry (still being expanded)
- ✨ Core features:
  - Semantic understanding: recognizes synonymous rephrasings such as "坚持 → 锲而不舍" (persistence → unremitting perseverance)
  - Topic association: builds knowledge networks such as "航天精神 → 科技创新 → 民族复兴" (spaceflight spirit → technological innovation → national rejuvenation)
  - Multi-dimensional filtering: precise filtering by category / similarity / theme
- 📈 Results: in our tests, material relevance improved by 57% and writing efficiency by 40%
## ✨ Project highlights
- **Deep semantic understanding**: goes beyond keyword matching to capture abstract associations such as "挫折 → 逆境成长" (setback → growth through adversity)
- **Dynamic learning system**: a 10-round iterative training scheme that keeps improving recommendation precision
- **Multi-dimensional filtering**: a three-level retrieval scheme by category / theme / similarity
- **Lightweight deployment**: precomputed embedding vectors give ~0.5 s responses on CPU
## 📚 Material library example
```json
{
    "content": "真正的太空探索不是为霸权,而是为人类共同梦想",
    "source": "中国航天白皮书",
    "keywords": ["航天精神", "人类命运共同体", "探索精神"],
    "theme": "科技创新"
}
```
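A minimal sketch of the semantic retrieval step with the fine-tuned encoder (mean pooling and cosine similarity here are illustrative assumptions, not the exact production pipeline):
```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "HJWZH/composition-assistant"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

def embed(texts):
    # Mean-pool the last hidden states into one normalized vector per text.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1)
    vec = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
    return torch.nn.functional.normalize(vec, dim=-1)

materials = [
    "真正的太空探索不是为霸权,而是为人类共同梦想",
    "锲而不舍,金石可镂",
]
query_vec = embed(["坚持"])
scores = query_vec @ embed(materials).T  # cosine similarities
print(materials[int(scores.argmax())])
```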
|
Alicia22/Sat_Twelve_r15
|
Alicia22
| 2025-09-13T07:27:05Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-13T07:23:52Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n800-lemonade-whsNew-llamaRelabel-rf-internal-epc3
|
lt2c
| 2025-09-13T07:25:52Z | 0 | 0 | null |
[
"safetensors",
"llama",
"lemona",
"agent-training",
"region:us"
] | null | 2025-09-13T07:23:36Z |
---
tags:
- lemona
- agent-training
---
# hsl-Llama-3.2-1B-alfworld-hslw_0.5-n800-lemonade-whsNew-llamaRelabel-rf-internal-epc3
This model was automatically uploaded from the Lemona agent training framework.
## Model Details
- Model Type: llama
- Hidden Size: 2048
- Layers: 16
## Training Framework
- Framework: Lemona
- Training Methods: SFT/DPO/HSL
- Source Directory: `hsl-Llama-3.2-1B-alfworld-hslw_0.5-n800-lemonade-whsNew-llamaRelabel-rf-internal-epc3`
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n800-lemonade-whsNew-llamaRelabel-rf-internal-epc3")
tokenizer = AutoTokenizer.from_pretrained("lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n800-lemonade-whsNew-llamaRelabel-rf-internal-epc3")
```
|
ZJUVAI/GenesisGeo
|
ZJUVAI
| 2025-09-13T07:24:17Z | 80 | 0 | null |
[
"safetensors",
"qwen3",
"dataset:ZJUVAI/GenesisGeo",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:finetune:Qwen/Qwen3-0.6B-Base",
"license:apache-2.0",
"region:us"
] | null | 2025-09-12T15:34:36Z |
---
license: apache-2.0
datasets:
- ZJUVAI/GenesisGeo
base_model:
- Qwen/Qwen3-0.6B-Base
---
|
lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n800-lemonade-whsNew-llamaRelabel-rf-epc3-every2
|
lt2c
| 2025-09-13T07:23:32Z | 0 | 0 | null |
[
"safetensors",
"llama",
"lemona",
"agent-training",
"region:us"
] | null | 2025-09-13T07:21:16Z |
---
tags:
- lemona
- agent-training
---
# hsl-Llama-3.2-1B-alfworld-hslw_0.5-n800-lemonade-whsNew-llamaRelabel-rf-epc3-every2
This model was automatically uploaded from the Lemona agent training framework.
## Model Details
- Model Type: llama
- Hidden Size: 2048
- Layers: 16
## Training Framework
- Framework: Lemona
- Training Methods: SFT/DPO/HSL
- Source Directory: `hsl-Llama-3.2-1B-alfworld-hslw_0.5-n800-lemonade-whsNew-llamaRelabel-rf-epc3-every2`
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n800-lemonade-whsNew-llamaRelabel-rf-epc3-every2")
tokenizer = AutoTokenizer.from_pretrained("lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n800-lemonade-whsNew-llamaRelabel-rf-epc3-every2")
```
|
ngqn102/gemma3-finetuned
|
ngqn102
| 2025-09-13T07:22:41Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-09-12T13:50:54Z |
---
base_model: google/gemma-3-1b-it
library_name: transformers
model_name: gemma3-finetuned
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for gemma3-finetuned
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ngqn102/gemma3-finetuned", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite GRPO as:
```bibtex
@article{shao2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n800-lemonade-whsNew-llamaRelabel-rf
|
lt2c
| 2025-09-13T07:21:13Z | 0 | 0 | null |
[
"safetensors",
"llama",
"lemona",
"agent-training",
"region:us"
] | null | 2025-09-13T07:16:19Z |
---
tags:
- lemona
- agent-training
---
# hsl-Llama-3.2-1B-alfworld-hslw_0.5-n800-lemonade-whsNew-llamaRelabel-rf
This model was automatically uploaded from the Lemona agent training framework.
## Model Details
- Model Type: llama
- Hidden Size: 2048
- Layers: 16
## Training Framework
- Framework: Lemona
- Training Methods: SFT/DPO/HSL
- Source Directory: `hsl-Llama-3.2-1B-alfworld-hslw_0.5-n800-lemonade-whsNew-llamaRelabel-rf`
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n800-lemonade-whsNew-llamaRelabel-rf")
tokenizer = AutoTokenizer.from_pretrained("lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n800-lemonade-whsNew-llamaRelabel-rf")
```
|
gaebalai/Llama-3.2-KR-5B
|
gaebalai
| 2025-09-13T07:19:09Z | 0 | 0 | null |
[
"safetensors",
"mllama_text_model",
"license:apache-2.0",
"region:us"
] | null | 2025-09-13T05:52:58Z |
---
license: apache-2.0
---
|
alok0777/blockassist
|
alok0777
| 2025-09-13T07:17:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked pensive lemur",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T12:25:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked pensive lemur
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n800-lemonade-whs-llamaRelabel-epc3-every2-after2epc
|
lt2c
| 2025-09-13T07:14:03Z | 0 | 0 | null |
[
"safetensors",
"llama",
"lemona",
"agent-training",
"region:us"
] | null | 2025-09-13T07:11:48Z |
---
tags:
- lemona
- agent-training
---
# hsl-Llama-3.2-1B-alfworld-hslw_0.5-n800-lemonade-whs-llamaRelabel-epc3-every2-after2epc
This model was automatically uploaded from the Lemona agent training framework.
## Model Details
- Model Type: llama
- Hidden Size: 2048
- Layers: 16
## Training Framework
- Framework: Lemona
- Training Methods: SFT/DPO/HSL
- Source Directory: `hsl-Llama-3.2-1B-alfworld-hslw_0.5-n800-lemonade-whs-llamaRelabel-epc3-every2-after2epc`
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n800-lemonade-whs-llamaRelabel-epc3-every2-after2epc")
tokenizer = AutoTokenizer.from_pretrained("lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n800-lemonade-whs-llamaRelabel-epc3-every2-after2epc")
```
|
lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n800-lemonade-whs-llamaRelabel-epc3-every2-after100-noRF
|
lt2c
| 2025-09-13T07:11:46Z | 0 | 0 | null |
[
"safetensors",
"llama",
"lemona",
"agent-training",
"region:us"
] | null | 2025-09-13T07:09:22Z |
---
tags:
- lemona
- agent-training
---
# hsl-Llama-3.2-1B-alfworld-hslw_0.5-n800-lemonade-whs-llamaRelabel-epc3-every2-after100-noRF
This model was automatically uploaded from the Lemona agent training framework.
## Model Details
- Model Type: llama
- Hidden Size: 2048
- Layers: 16
## Training Framework
- Framework: Lemona
- Training Methods: SFT/DPO/HSL
- Source Directory: `hsl-Llama-3.2-1B-alfworld-hslw_0.5-n800-lemonade-whs-llamaRelabel-epc3-every2-after100-noRF`
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n800-lemonade-whs-llamaRelabel-epc3-every2-after100-noRF")
tokenizer = AutoTokenizer.from_pretrained("lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n800-lemonade-whs-llamaRelabel-epc3-every2-after100-noRF")
```
|
XuejiFang/UniTok_transformers
|
XuejiFang
| 2025-09-13T07:09:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unitok",
"computer-vision",
"image-reconstruction",
"vector-quantization",
"tokenizer",
"multimodal",
"image-to-image",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-to-image
| 2025-09-13T06:38:51Z |
---
license: apache-2.0
language: en
tags:
- computer-vision
- image-reconstruction
- vector-quantization
- tokenizer
- multimodal
library_name: transformers
pipeline_tag: image-to-image
---
# UniTok Transformers
**Key Features:**
- 📦 **Transformers Compatible**: Standard `from_pretrained()` and `save_pretrained()` support
For more details, visit: https://github.com/XuejiFang/UniTok_transformers
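A minimal loading sketch (assuming the repository ships the custom `unitok` model class as remote code, hence `trust_remote_code=True`):
```python
from transformers import AutoModel

# Assumes the repo provides the custom "unitok" architecture via remote code.
model = AutoModel.from_pretrained("XuejiFang/UniTok_transformers", trust_remote_code=True)

# Standard save/reload round-trip advertised above.
model.save_pretrained("./unitok-local")
model = AutoModel.from_pretrained("./unitok-local", trust_remote_code=True)
```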
## Disclaimer
This is an unofficial implementation with Hugging Face integration. Original work: [FoundationVision/UniTok](https://github.com/FoundationVision/UniTok)
|
balaji1312/whisper-largev3-coraal
|
balaji1312
| 2025-09-13T07:07:39Z | 0 | 0 | null |
[
"safetensors",
"whisper",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"region:us"
] | null | 2025-09-13T06:57:04Z |
---
base_model:
- openai/whisper-large-v3
---
## Model Details
Whisper-large-v3 model trained on the DCA, DCB, ATL, DTA, LES, and PRV subsets of the CORAAL dataset.
Evaluated on the ROC and VLD subsets.
| Metric | ROC | VLD |
|-------|-------|-------|
| WER | 6.87 | 10.07 |
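A minimal transcription sketch, assuming the standard Whisper processor files are bundled with the weights; the audio path is a placeholder:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="balaji1312/whisper-largev3-coraal",  # assumes processor/tokenizer files are included
    chunk_length_s=30,
)
print(asr("sample.wav")["text"])  # placeholder audio file
```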
|
lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n1600-lemonade-whsNew-llamaRelabel-rf
|
lt2c
| 2025-09-13T07:07:05Z | 0 | 0 | null |
[
"safetensors",
"llama",
"lemona",
"agent-training",
"region:us"
] | null | 2025-09-13T07:03:38Z |
---
tags:
- lemona
- agent-training
---
# hsl-Llama-3.2-1B-alfworld-hslw_0.5-n1600-lemonade-whsNew-llamaRelabel-rf
This model was automatically uploaded from the Lemona agent training framework.
## Model Details
- Model Type: llama
- Hidden Size: 2048
- Layers: 16
## Training Framework
- Framework: Lemona
- Training Methods: SFT/DPO/HSL
- Source Directory: `hsl-Llama-3.2-1B-alfworld-hslw_0.5-n1600-lemonade-whsNew-llamaRelabel-rf`
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n1600-lemonade-whsNew-llamaRelabel-rf")
tokenizer = AutoTokenizer.from_pretrained("lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n1600-lemonade-whsNew-llamaRelabel-rf")
```
|
sayvan/whisper-small-finetuned
|
sayvan
| 2025-09-13T07:05:41Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"generated_from_trainer",
"base_model:roshna-omer/whisper-small-Kurdish-Sorani-10",
"base_model:finetune:roshna-omer/whisper-small-Kurdish-Sorani-10",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-12T13:01:12Z |
---
library_name: transformers
license: apache-2.0
base_model: roshna-omer/whisper-small-Kurdish-Sorani-10
tags:
- whisper
- automatic-speech-recognition
- audio
- generated_from_trainer
model-index:
- name: whisper-small-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-finetuned
This model is a fine-tuned version of [roshna-omer/whisper-small-Kurdish-Sorani-10](https://huggingface.co/roshna-omer/whisper-small-Kurdish-Sorani-10) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 10000
- mixed_precision_training: Native AMP
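For illustration only, these settings map onto `Seq2SeqTrainingArguments` roughly as follows (the output directory is a placeholder, and this is not the exact training script):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-finetuned",  # placeholder
    per_device_train_batch_size=64,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,         # effective train batch size 128
    learning_rate=5e-5,
    lr_scheduler_type="linear",
    warmup_steps=50,
    max_steps=10000,
    optim="adamw_torch_fused",
    fp16=True,                             # native AMP mixed precision
    seed=42,
)
```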
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n1600-lemonade-llamaRelabel-rf-whsNew-noMask
|
lt2c
| 2025-09-13T07:01:23Z | 0 | 0 | null |
[
"safetensors",
"llama",
"lemona",
"agent-training",
"region:us"
] | null | 2025-09-13T06:59:10Z |
---
tags:
- lemona
- agent-training
---
# hsl-Llama-3.2-1B-alfworld-hslw_0.5-n1600-lemonade-llamaRelabel-rf-whsNew-noMask
This model was automatically uploaded from the Lemona agent training framework.
## Model Details
- Model Type: llama
- Hidden Size: 2048
- Layers: 16
## Training Framework
- Framework: Lemona
- Training Methods: SFT/DPO/HSL
- Source Directory: `hsl-Llama-3.2-1B-alfworld-hslw_0.5-n1600-lemonade-llamaRelabel-rf-whsNew-noMask`
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n1600-lemonade-llamaRelabel-rf-whsNew-noMask")
tokenizer = AutoTokenizer.from_pretrained("lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n1600-lemonade-llamaRelabel-rf-whsNew-noMask")
```
|
hoan17/saving_LAVilas100x2e2_500
|
hoan17
| 2025-09-13T06:59:24Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"trl",
"o2o",
"reinforcement-learning",
"text-to-image",
"stable-diffusion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-09-13T06:59:00Z |
---
license: apache-2.0
tags:
- trl
- o2o
- diffusers
- reinforcement-learning
- text-to-image
- stable-diffusion
---
# TRL O2O Model
This is a diffusion model that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text-conditioned image generation.
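A minimal generation sketch using the `StableDiffusionPipeline` indicated by the tags; the prompt, dtype, and device are placeholders rather than part of the original card:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "hoan17/saving_LAVilas100x2e2_500", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # assumes a CUDA device is available

image = pipe("a photo of a mountain lake at sunrise").images[0]  # placeholder prompt
image.save("sample.png")
```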
|
balaji1312/whisper-medium-coraal
|
balaji1312
| 2025-09-13T06:59:09Z | 0 | 0 | null |
[
"safetensors",
"whisper",
"base_model:openai/whisper-medium.en",
"base_model:finetune:openai/whisper-medium.en",
"region:us"
] | null | 2025-09-13T06:51:14Z |
---
base_model:
- openai/whisper-medium.en
---
## Model Details
Whisper-medium.en model trained on the DCA, DCB, ATL, DTA, LES, and PRV subsets of the CORAAL dataset.
Evaluated on the ROC and VLD subsets.
| Metric | ROC | VLD |
|-------|-------|-------|
| WER | 7.33 | 11.98 |
|
lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n1600-lemonade-llamaRelabel-rf-whsNew-epc3-second
|
lt2c
| 2025-09-13T06:59:06Z | 0 | 0 | null |
[
"safetensors",
"llama",
"lemona",
"agent-training",
"region:us"
] | null | 2025-09-13T06:56:26Z |
---
tags:
- lemona
- agent-training
---
# hsl-Llama-3.2-1B-alfworld-hslw_0.5-n1600-lemonade-llamaRelabel-rf-whsNew-epc3-second
This model was automatically uploaded from the Lemona agent training framework.
## Model Details
- Model Type: llama
- Hidden Size: 2048
- Layers: 16
## Training Framework
- Framework: Lemona
- Training Methods: SFT/DPO/HSL
- Source Directory: `hsl-Llama-3.2-1B-alfworld-hslw_0.5-n1600-lemonade-llamaRelabel-rf-whsNew-epc3-second`
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n1600-lemonade-llamaRelabel-rf-whsNew-epc3-second")
tokenizer = AutoTokenizer.from_pretrained("lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n1600-lemonade-llamaRelabel-rf-whsNew-epc3-second")
```
|
lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n1600-lemonade-llamaRelabel-rf-whsNew-epc3-internal
|
lt2c
| 2025-09-13T06:56:22Z | 0 | 0 | null |
[
"safetensors",
"llama",
"lemona",
"agent-training",
"region:us"
] | null | 2025-09-13T06:52:14Z |
---
tags:
- lemona
- agent-training
---
# hsl-Llama-3.2-1B-alfworld-hslw_0.5-n1600-lemonade-llamaRelabel-rf-whsNew-epc3-internal
This model was automatically uploaded from the Lemona agent training framework.
## Model Details
- Model Type: llama
- Hidden Size: 2048
- Layers: 16
## Training Framework
- Framework: Lemona
- Training Methods: SFT/DPO/HSL
- Source Directory: `hsl-Llama-3.2-1B-alfworld-hslw_0.5-n1600-lemonade-llamaRelabel-rf-whsNew-epc3-internal`
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n1600-lemonade-llamaRelabel-rf-whsNew-epc3-internal")
tokenizer = AutoTokenizer.from_pretrained("lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n1600-lemonade-llamaRelabel-rf-whsNew-epc3-internal")
```
|
Alfred126/kontext_JDNDF
|
Alfred126
| 2025-09-13T06:56:01Z | 0 | 0 | null |
[
"image-to-image",
"arxiv:1910.09700",
"base_model:black-forest-labs/FLUX.1-Kontext-dev",
"base_model:finetune:black-forest-labs/FLUX.1-Kontext-dev",
"region:us"
] |
image-to-image
| 2025-09-13T06:52:48Z |
---
base_model:
- black-forest-labs/FLUX.1-Kontext-dev
pipeline_tag: image-to-image
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
truong1301/bi_encoder_viwiki_1
|
truong1301
| 2025-09-13T06:53:11Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:16581",
"loss:CachedMultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2101.06983",
"base_model:bkai-foundation-models/vietnamese-bi-encoder",
"base_model:finetune:bkai-foundation-models/vietnamese-bi-encoder",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-13T06:52:53Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:16581
- loss:CachedMultipleNegativesRankingLoss
base_model: bkai-foundation-models/vietnamese-bi-encoder
widget:
- source_sentence: "Có phải nhiều vĩ_nhân đã sinh ra ở Firenze không ? \n"
sentences:
- Sau khi thống_nhất , Việt_Nam tiếp_tục gặp khó_khăn do sự sụp_đổ và tan_rã của
đồng_minh Liên_Xô cùng Khối phía Đông , các lệnh cấm_vận của Hoa_Kỳ , chiến_tranh
với Campuchia , biên_giới giáp Trung_Quốc và hậu_quả của chính_sách bao_cấp sau
nhiều năm áp_dụng . Năm 1986 , Đảng Cộng_sản ban_hành cải_cách đổi_mới , tạo điều_kiện
hình_thành kinh_tế thị_trường và hội_nhập sâu_rộng . Cải_cách đổi_mới kết_hợp
cùng quy_mô dân_số lớn đưa Việt_Nam trở_thành một trong những nước đang phát_triển
có tốc_độ tăng_trưởng thuộc nhóm nhanh nhất thế_giới , được coi là Hổ mới châu
Á dù_cho vẫn gặp phải những thách_thức như tham_nhũng , tội_phạm gia_tăng , ô_nhiễm
môi_trường và phúc_lợi xã_hội chưa đầy_đủ . Ngoài_ra , giới bất_đồng chính_kiến
, chính_phủ một_số nước phương Tây và các tổ_chức theo_dõi nhân_quyền có quan_điểm
chỉ_trích hồ_sơ nhân_quyền của Việt_Nam liên_quan đến các vấn_đề tôn_giáo , kiểm_duyệt
truyền_thông , hạn_chế hoạt_động ủng_hộ nhân_quyền cùng các quyền tự_do dân_sự
.
- 'Tuy_nhiên , sự_thật là không phải tới thời Lorenzo nắm quyền Phục_Hưng mới ra_đời
, mà bản_thân gia_đình Medici cũng chỉ thừa_hưởng truyền_thống trọng nghệ_thuật
của Firenze và phát_huy nó . Một_số người cho rằng Firenze trở_thành nơi khởi_đầu
Phục_Hưng là do may_mắn , nghĩa_là đơn_thuần bởi_vì những vĩ_nhân ngẫu_nhiên sinh
ra ở đây : cả da Vinci , Botticelli và Michelangelo đều là người xứ Toscana (
mà Firenze là thủ phủ ) . Tuy_nhiên các nhà sử_học khác phản_đối ý_kiến cho rằng
đây là sự ngẫu_nhiên và quy cho truyền_thống trọng nghệ_thuật của Firenze .'
- Hệ_thống đường_sắt Trung_Quốc thuộc sở_hữu nhà_nước , nằm trong số các hệ_thống
nhộn_nhịp nhất trên thế_giới . Năm 2013 , đường_sắt Trung_Quốc vận_chuyển khoảng
2,106 tỷ lượt hành_khách , khoảng 3,967 tỷ tấn hàng_hóa . Hệ_thống đường_sắt cao_tốc
Trung_Quốc bắt_đầu được xây_dựng từ đầu thập_niên 2000 , xếp_hàng_đầu thế_giới
về chiều dài với 11.028 kilômét ( 6.852 dặm ) đường_ray vào năm 2013 . Tính đến
năm 2017 , đất_nước có 127.000 km ( 78.914 dặm ) đường_sắt , xếp thứ hai trên
thế_giới . Đường_sắt đáp_ứng nhu_cầu đi_lại khổng_lồ của người dân , đặc_biệt
là trong kỳ nghỉ Tết_Nguyên_đán , thời_điểm cuộc di_cư hàng năm lớn nhất thế_giới
của loài_người diễn ra
- source_sentence: "Eo_biển Magalhães nguy_hiểm như thế_nào , đặc_biệt là về sóng\
\ , gió và đá ngầm ? \n"
sentences:
- Thái_Bình_Dương mỗi một chữ xuất_hiện trước nhất vào niên_đại 20 thế_kỉ XVI ,
do nhà_hàng_hải trưởng quốc_tịch Bồ_Đào_Nha_Fernão de Magalhães và đội thuyền
tàu của ông đặt tên đầu_tiên . Ngày 20 tháng 9 năm 1519 , nhà_hàng_hải Magalhães
chỉ_huy dẫn đạo đội thám_hiểm do 270 thủy_thủ hợp_thành khởi_hành từ Tây_Ban_Nha
, vượt qua Đại_Tây_Dương , họ xông_pha sóng_gió hãi_hùng , chịu hết gian_khổ ,
rồi đến được mũi phía nam ở châu Nam_Mĩ , tiến vào một eo_biển . Eo_biển này về
sau đặt tên theo Magalhães vô_cùng hiểm_trở và ác_liệt , đến chỗ đó là sóng lớn
gió dữ và bãi cạn đá ngầm nguy_hiểm . Chiến_đấu hăng_hái gian_khổ trải qua 38
ngày , đội thuyền tàu cuối_cùng đã đến được mũi phía tây của eo_biển Magalhães
, nhưng mà lúc đó đội thuyền tàu chỉ còn lại ba chiếc tàu , thuyền_viên của đội
cũng đã mất đi một nửa .
- 'Việt_Nam có khí_hậu nhiệt_đới gió_mùa , thời_tiết có sự_biến_động . Phía bắc
dãy Bạch_Mã có 2 mùa gió chính : gió_mùa Đông_Bắc lạnh và khô vào mùa đông tạo
nên mùa đông_lạnh ; gió Tây_Nam nóng khô và Đông_Nam ẩm_ướt vào mùa hè . Phía
nam có gió Đông_Bắc vào mùa khô và gió Tây_Nam vào mùa mưa . Các dòng biển phần_nào
đó điều hòa khí_hậu . Độ_ẩm tương_đối trung_bình là 84 % suốt năm . Việt_Nam trải
qua các đợt lụt và bão , có lượng mưa từ 1.200 đến 3.000 mm , số giờ nắng khoảng
1.500 đến 3.000 giờ / năm và nhiệt_độ từ 5 ° C đến 37 ° C. Nhiệt_độ trung_bình
năm tăng khoảng 0,5 độ Celsius trong vòng 50 năm ( 1964 – 2014 ) .'
- 'Hội_họa Trung_Quốc có lịch_sử 5000 – 6000 năm với các loại_hình : bạch_họa ,
bản_họa , bích_họa . Đặc_biệt là nghệ_thuật vẽ tranh thủy_mặc , có ảnh_hưởng nhiều
tới các nước ở Châu_Á . Cuốn_Lục pháp luận của Tạ_Hách đã tổng_kết những kinh_nghiệm
hội_họa từ đời Hán đến đời Tùy . Tranh phong_cảnh được coi là đặc_trưng của nền
hội_họa Trung_Quốc , mà đã phát_triển đỉnh_cao từ thời_kì Ngũ_Đại đến thời Bắc_Tống
( 907 – 1127 ) . Điêu_khắc Trung_Quốc được phân thành các ngành riêng như : Ngọc_điêu
, thạch điêu , mộc điêu . Những tác_phẩm nổi_tiếng như cặp tượng Tần ngẫu đời
Tần , tượng Lạc_sơn đại Phật đời Tây_Hán ( pho tượng cao nhất thế_giới ) , tượng
Phật nghìn mắt nghìn tay . Không giống như phong_cách kiến_trúc Phương_Tây , kiến_trúc
Trung_Hoa chú_trọng đến chiều rộng hơn là chiều cao của công_trình . Phong_thủy
đóng vai_trò rất quan_trọng trong quá_trình xây_dựng .'
- source_sentence: "Vì_sao Phúc_Kiến , với lịch_sử nhập_cư đa_dạng , lại được xem\
\ là một trong những khu_vực ngôn_ngữ phong_phú nhất của người Hán ở Trung_Quốc\
\ ? \n"
sentences:
- Do có địa_hình đồi núi với nhiều đợt nhập_cư đến từ miền Trung_Trung_Quốc trong
dòng_chảy lịch_sử , Phúc Kiến là một trong những nơi đa_dạng nhất về ngôn_ngữ
trong số các khu_vực người Hán trên toàn_quốc . Trong một khoảng_cách ngắn , các
phương_ngữ trong cùng một địa_phương có_thể không hiểu lẫn nhau . Điều này được
phản_ánh trong thành_ngữ " nếu bạn đi năm dặm tại Phúc Kiến thì văn_hóa sẽ biến_đổi
, và nếu bạn đi mười dặm , ngôn_ngữ sẽ khác " . Việc phân_loại các phương_ngữ
này khiến các nhà ngôn_ngữ_học lúng_túng . Nhìn_chung , hầu_hết các phương_ngữ
tại Phúc Kiến được xếp thuộc về tiếng Mân , nhóm này lại chia thành tiếng Mân_Bắc
, tiếng Mân_Đông , tiếng Mân_Trung , tiếng Mân_Nam , tiếng Phủ_Tiên , và tiếng
Thiệu_Tương ( 邵将 ) . ( phân nhóm thứ_bảy của tiếng Mân , tiếng Hải_Nam , không
nói ở Phúc_Kiến . ) Phương_ngữ Phúc_Châu thuộc tiếng Mân_Đông , song một_số nhà
ngôn_ngữ_học lại phân nó thuộc tiếng Mân_Bắc ; tiếng Hạ_Môn là một bộ_phận của
tiếng Mân_Nam . Tiếng Khách_Gia , một phân nhánh khác của tiếng Hán , được người
Khách_Gia sinh_sống quanh Long_Nham nói . Cũng như các tỉnh khác , ngôn_ngữ chính_thức
tại Phúc Kiến là tiếng Phổ_thông , được dùng để đàm_thoại giữa người dân ở các
khu_vực khác nhau .
- '" Đúng là nếu chỉ đọc báo_chí bên ngoài thì tưởng như Triều_Tiên rất đói_khổ
, kiệt_quệ . Song nhiều đoàn của Việt_Nam và các nước khác sau khi thăm Triều_Tiên
về đều kinh_ngạc : Không ngờ Triều_Tiên lại phát_triển đến như_vậy ... Về công_nghệ_cao
, Triều_Tiên có nhiều thành_tựu mà ta phải ngưỡng_mộ . Tiềm_lực của người Triều_Tiên
chính là sức_mạnh tinh_thần và văn_hóa , từ đó dẫn đến tiềm_lực quốc_phòng của
họ ... Tôi nghĩ ở đây có vấn_đề về thông_tin . Chẳng_hạn như có 100 thông_tin
, hình_ảnh đăng_tải trên thế_giới về Triều_Tiên thì có đến 80 % là từ báo_chí
phương Tây , hay các nước mà Triều_Tiên gọi là thù_địch , tức chỉ nói_xấu , hay
không có thì dựng ra là có . Bản_thân Triều_Tiên cũng ít đưa hình_ảnh của mình
ra ngoài , nên dễ dẫn đến bị nhìn_nhận sai_lệch . Riêng tôi thấy rằng cần bình_tĩnh
xem_xét và đặt lại câu hỏi : “ Bao_giờ ta có_thể làm được như họ ? ” Trước khi
đến , nhiều người cứ nghĩ Triều_Tiên là quốc_gia vô_cùng khốn_khổ , người dân
thì hiếu_chiến , nhưng đó là bởi họ chưa trực_tiếp tham_quan Triều_Tiên . " Trăm
nghe không bằng một thấy " , nếu tiếp_nhận thông_tin qua nguồn gián_tiếp , sự
sai_lệch sẽ rất nhiều .'
- 'Mặc_dù nói_chung được công_nhận như là các đại_dương '' tách_biệt '' , nhưng
các vùng nước_mặn này tạo thành một khối nước nối_liền với nhau trên toàn_cầu
, thường được gọi chung là Đại_dương thế_giới hay đại_dương toàn_cầu . Khái_niệm
về đại_dương toàn_cầu như là một khối nước liên_tục với sự trao_đổi tương_đối
tự_do giữa các bộ_phận của nó có tầm quan_trọng nền_tảng cho hải_dương_học . Các
phần đại_dương chính được định_nghĩa một phần dựa vào các châu_lục , các quần_đảo
khác nhau cùng các tiêu_chí khác : các phần này là ( theo trật_tự giảm dần của
diện_tích ) Thái_Bình_Dương , Đại_Tây_Dương , Ấn_Độ_Dương , Nam Đại_Dương ( đôi_khi
được phân_chia và tạo thành_phần phía nam của Thái_Bình_Dương , Đại_Tây_Dương
và Ấn_Độ_Dương ) và Bắc_Băng_Dương ( đôi_khi được coi là một biển của Đại_Tây_Dương
) . Thái_Bình_Dương và Đại_Tây_Dương cũng có_thể phân_chia tiếp bởi đường xích_đạo
thành các phần Bắc và Nam . Các khu_vực nhỏ hơn của đại_dương được gọi là các
biển , vịnh hay một_số các tên gọi khác . Cũng tồn_tại một_số khối nước_mặn nhỏ
hơn trong đất_liền và không nối với Đại_dương thế_giới , như biển Aral , Great
Salt_Lake ( Hồ_Muối Lớn ) – mặc_dù chúng có_thể coi như là các '' biển '' , nhưng
thực_ra chúng là các hồ nước_mặn . Có 5 đại_dương trên thế_giới , trong đó Thái_Bình_Dương
là lớn nhất và sâu nhất , thứ hai về diện_tích và độ sâu là Đại_Tây_Dương , tiếp_theo
là Ấn_Độ_Dương , Nam Đại_Dương còn nhỏ và nông nhất là Bắc_Băng_Dương .'
- source_sentence: "Theo điều_tra dân_số năm 2010 , dân_số Cộng_hòa Nhân_dân Trung_Hoa\
\ là bao_nhiêu ? \n"
sentences:
- Tuốt tócCó một nghi_lễ rất phổ_biến của người Ấn_Độ là buộc người phụ_nữ phải
hiến_dâng mái_tóc của mình cho Chúa . Họ tin rằng càng hiến_dâng nhiều tóc thì
Chúa càng ban phước cho người ấy . Thậm_chí những người Kỳ_Na_Giáo còn tiến_hành
một nghi_thức đau_đớn và gây phẫn_nộ hơn là tuốt tóc của các tu_sĩ cho đến khi
hói , trọc đầu . Các thánh Jain và tu_sĩ sẽ tiến_hành nghi_lễ này một đến hai
lần trong năm như là một dấu_hiệu cho thấy họ đã từ_bỏ những thú_vui trần_tục
và có sức chịu_đựng đau_đớn .
- Theo kết_quả điều_tra nhân_khẩu toàn_quốc năm 2010 , dân_số nước Cộng_hòa Nhân_dân
Trung_Hoa là 1.370.536.875 . Khoảng 16,60 % dân_số từ 14 tuổi trở xuống , 70,14
% từ 15 đến 59 tuổi , và 13,26 % từ 60 tuổi trở lên . Do dân_số đông và tài_nguyên
thiên_nhiên suy_giảm , chính_phủ Trung_Quốc rất quan_tâm đến tốc_độ tăng_trưởng
dân_số , và từ năm 1978 họ đã nỗ_lực tiến_hành với kết_quả khác nhau , nhằm thi_hành
một chính_sách kế_hoạch_hóa gia_đình nghiêm_ngặt được gọi là " chính_sách một
con . " Trước năm 2013 , chính_sách này tìm cách hạn_chế mỗi gia_đình có một con
, ngoại_trừ các dân_tộc_thiểu_số và linh_hoạt nhất_định tại các khu_vực nông_thôn
. Một nới lỏng lớn về chính_sách được han hành vào tháng 12 năm 2013 , cho phép
các gia_đình có hai con nếu một trong song thân là con_một . Dữ_liệu từ điều_tra
nhân_khẩu năm 2010 cho thấy rằng tổng_tỷ_suất sinh là khoảng 1,4 . Chính_sách
một con cùng với truyền_thống trọng nam có_thể góp_phần vào mất cân_bằng về tỷ_suất
giới_tính khi sinh . Theo điều_tra nhân_khẩu năm 2010 , tỷ_suất giới_tính khi
sinh là 118,06 nam / 100 nữ , cao hơn mức thông_thường là khoảng 105 nam / 100
nữ . Kết_quả điều_tra nhân_khẩu vào năm 2013 cho thấy nam_giới chiếm 51,27 % tổng_dân_số
. trong khi con_số này vào năm 1953 là 51,82 % .
- Sau khi thống_nhất , Việt_Nam tiếp_tục gặp khó_khăn do sự sụp_đổ và tan_rã của
đồng_minh Liên_Xô cùng Khối phía Đông , các lệnh cấm_vận của Hoa_Kỳ , chiến_tranh
với Campuchia , biên_giới giáp Trung_Quốc và hậu_quả của chính_sách bao_cấp sau
nhiều năm áp_dụng . Năm 1986 , Đảng Cộng_sản ban_hành cải_cách đổi_mới , tạo điều_kiện
hình_thành kinh_tế thị_trường và hội_nhập sâu_rộng . Cải_cách đổi_mới kết_hợp
cùng quy_mô dân_số lớn đưa Việt_Nam trở_thành một trong những nước đang phát_triển
có tốc_độ tăng_trưởng thuộc nhóm nhanh nhất thế_giới , được coi là Hổ mới châu
Á dù_cho vẫn gặp phải những thách_thức như tham_nhũng , tội_phạm gia_tăng , ô_nhiễm
môi_trường và phúc_lợi xã_hội chưa đầy_đủ . Ngoài_ra , giới bất_đồng chính_kiến
, chính_phủ một_số nước phương Tây và các tổ_chức theo_dõi nhân_quyền có quan_điểm
chỉ_trích hồ_sơ nhân_quyền của Việt_Nam liên_quan đến các vấn_đề tôn_giáo , kiểm_duyệt
truyền_thông , hạn_chế hoạt_động ủng_hộ nhân_quyền cùng các quyền tự_do dân_sự
.
- source_sentence: "Có phải số_lượng sinh_viên tốt_nghiệp ngành khoa_học , công_nghệ\
\ , kỹ_thuật và toán_học của Mỹ cao gấp đôi so với Trung_Quốc , mặc_dù dân_số\
\ Trung_Quốc hơn Mỹ 4,2 lần ? \n"
sentences:
- Cơ_quan lập_pháp dân_cử của Nhật_Bản là Quốc_hội ( 国会 , Kokkai ) , đặt trụ_sở_tại
Chiyoda , Tokyo . Quốc_hội hoạt_động theo cơ_chế lưỡng viện , trong đó Hạ_viện
( 衆議院 ( Chúng_Nghị_viện ) , Shūgiin ) có 465 ghế , được cử_tri bầu_chọn sau mỗi
bốn năm hoặc sau khi giải_tán , và Thượng_viện ( 参議院 ( Tham Nghị_viện ) , Sangiin
) có 245 ghế , được cử_tri bầu_chọn cho nhiệm_kỳ sáu năm và cứ mỗi ba năm được
bầu lại một nửa số thành_viên . Quyền bầu_cử theo nguyên_tắc phổ_thông đầu_phiếu
thuộc về mọi công_dân Nhật_Bản trên 18 tuổi không phân_biệt nam - nữ , trong đó
áp_dụng phương_thức bỏ_phiếu kín tại tất_cả đơn_vị bầu_cử . Các nghị_sĩ quốc_hội
chủ_yếu là người của Đảng Dân_chủ Tự_do có khuynh_hướng bảo_thủ . Đảng Dân_chủ
Tự_do đã liên_tiếp giành thắng_lợi trong các cuộc tổng_tuyển_cử kể từ năm 1955
, ngoại_trừ hai giai_đoạn từ năm 1993 – 1994 và từ năm 2009 – 2012 . Đảng này
chiếm 262 ghế trong Chúng_Nghị_viện và 113 ghế trong Tham Nghị_viện .
- 'Nguyên_tắc chữ Hán_giản thể có_thể truy lên đề_nghị của Tiền Huyền Đồng vào năm
1922 . Tám nguyên_tắc_giản hoá là : chữ vay_mượn , chữ hình thanh , tiêu_chuẩn_hoá
Thảo thư , chữ đặc_trưng , chữ vành , chữ hội_ý , chữ phù_hiệu , và chữ bộ thủ
. Mặc_dù ra_đời trước Tiền Huyền_Đồng và không được rõ_ràng để bao_hàm các phương_pháp
giản_ước chữ Hán_cụ_thể , các nguyên_tắc này đủ điển_hình , chính_sách giản_ước
dùng không ít .'
- Bên cạnh Huawei , việc công_ty thiết_bị viễn_thông lớn thứ hai Trung_Quốc là ZTE
bị đẩy vào tình_trạng khó_khăn sau khi bị Mỹ cấm_vận công_nghệ cho thấy Trung_Quốc
vẫn còn phụ_thuộc Mỹ rất lớn về một_số công_nghệ . Trên tạp_chí Forbes , ông Jean_Baptiste
Su , Phó_Chủ_tịch công_ty nghiên_cứu thị_trường công_nghệ Atherton_Research (
Mỹ ) cho rằng khó_khăn của ZTE sau lệnh cấm_vận công_nghệ của Mỹ cho thấy hầu_như
tất_cả các công_ty lớn của Trung_Quốc hiện đang phụ_thuộc lớn vào các công_nghệ
của Mỹ . Các công_ty lớn của Trung_Quốc từ Baidu , Alibaba , Tencent , Xiaomi
, Didi Chuxing cho đến Ngân_hàng Trung_Quốc ( BOC ) , Ngân_hàng Công_thương Trung_Quốc
( ICBC ) , các công_ty viễn_thông China_Mobile , China_Telecom , tập_đoàn dầu_khí
nhà_nước Trung_Quốc Petro_China , hãng ô_tô nhà_nước SAIC. . . đều dựa vào công_nghệ
, linh_kiện , phần_mềm hoặc tài_sản sở_hữu trí_tuệ của các công_ty nước_ngoài
như Apple , Google , Intel , Qualcomm , Cisco , Micron , Microsoft ... Tác_giả
cho rằng một lệnh cấm bán_công_nghệ Mỹ cho các công_ty Trung_Quốc có_thể làm suy_sụp
nền kinh_tế Trung QuốcTheo một bài phân_tích của Bloomberg , bên cạnh một_số lĩnh_vực
không sánh được với Mỹ thì Trung_Quốc cũng có những thế mạnh riêng để phát_triển
trong tương_lai , như quy_mô dân_số , số người dùng internet , việc Huawei là
hãng mạnh nhất về phát_triển của công_nghệ 5G mà các nhà mạng toàn_cầu mới bắt_đầu
triển_khai . Năm 2016 , Trung_Quốc có 4,7 triệu sinh_viên tốt_nghiệp các ngành
khoa_học , công_nghệ , kỹ_thuật và toán_học gần đây , trong khi Mỹ chỉ có 568.000
( dân_số Trung_Quốc gấp 4,2 lần dân_số Mỹ , tính theo tỷ_lệ dân_số thì chỉ_số
này của Trung_Quốc cao hơn 2 lần so với Mỹ ) . Chuỗi lắp_ráp , sản_xuất tại Trung_Quốc
nhìn_chung vẫn nhỉnh hơn Mỹ về mặt tổng_sản_lượng trong nhiều ngành công_nghiệp
và luôn có chi_phí thấp hơn Mỹ . Chiến_tranh lạnh về công_nghệ ngày_càng tăng_tiến
giữa Trung_Quốc và Mỹ sẽ rất khó tìm bên chiến_thắng rõ_ràng .
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on bkai-foundation-models/vietnamese-bi-encoder
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [bkai-foundation-models/vietnamese-bi-encoder](https://huggingface.co/bkai-foundation-models/vietnamese-bi-encoder). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [bkai-foundation-models/vietnamese-bi-encoder](https://huggingface.co/bkai-foundation-models/vietnamese-bi-encoder) <!-- at revision 84f9d9ada0d1a3c37557398b9ae9fcedcdf40be0 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'RobertaModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("truong1301/bi_encoder_viwiki_1")
# Run inference
sentences = [
'Có phải số_lượng sinh_viên tốt_nghiệp ngành khoa_học , công_nghệ , kỹ_thuật và toán_học của Mỹ cao gấp đôi so với Trung_Quốc , mặc_dù dân_số Trung_Quốc hơn Mỹ 4,2 lần ? \n',
'Bên cạnh Huawei , việc công_ty thiết_bị viễn_thông lớn thứ hai Trung_Quốc là ZTE bị đẩy vào tình_trạng khó_khăn sau khi bị Mỹ cấm_vận công_nghệ cho thấy Trung_Quốc vẫn còn phụ_thuộc Mỹ rất lớn về một_số công_nghệ . Trên tạp_chí Forbes , ông Jean_Baptiste Su , Phó_Chủ_tịch công_ty nghiên_cứu thị_trường công_nghệ Atherton_Research ( Mỹ ) cho rằng khó_khăn của ZTE sau lệnh cấm_vận công_nghệ của Mỹ cho thấy hầu_như tất_cả các công_ty lớn của Trung_Quốc hiện đang phụ_thuộc lớn vào các công_nghệ của Mỹ . Các công_ty lớn của Trung_Quốc từ Baidu , Alibaba , Tencent , Xiaomi , Didi Chuxing cho đến Ngân_hàng Trung_Quốc ( BOC ) , Ngân_hàng Công_thương Trung_Quốc ( ICBC ) , các công_ty viễn_thông China_Mobile , China_Telecom , tập_đoàn dầu_khí nhà_nước Trung_Quốc Petro_China , hãng ô_tô nhà_nước SAIC. . . đều dựa vào công_nghệ , linh_kiện , phần_mềm hoặc tài_sản sở_hữu trí_tuệ của các công_ty nước_ngoài như Apple , Google , Intel , Qualcomm , Cisco , Micron , Microsoft ... Tác_giả cho rằng một lệnh cấm bán_công_nghệ Mỹ cho các công_ty Trung_Quốc có_thể làm suy_sụp nền kinh_tế Trung QuốcTheo một bài phân_tích của Bloomberg , bên cạnh một_số lĩnh_vực không sánh được với Mỹ thì Trung_Quốc cũng có những thế mạnh riêng để phát_triển trong tương_lai , như quy_mô dân_số , số người dùng internet , việc Huawei là hãng mạnh nhất về phát_triển của công_nghệ 5G mà các nhà mạng toàn_cầu mới bắt_đầu triển_khai . Năm 2016 , Trung_Quốc có 4,7 triệu sinh_viên tốt_nghiệp các ngành khoa_học , công_nghệ , kỹ_thuật và toán_học gần đây , trong khi Mỹ chỉ có 568.000 ( dân_số Trung_Quốc gấp 4,2 lần dân_số Mỹ , tính theo tỷ_lệ dân_số thì chỉ_số này của Trung_Quốc cao hơn 2 lần so với Mỹ ) . Chuỗi lắp_ráp , sản_xuất tại Trung_Quốc nhìn_chung vẫn nhỉnh hơn Mỹ về mặt tổng_sản_lượng trong nhiều ngành công_nghiệp và luôn có chi_phí thấp hơn Mỹ . Chiến_tranh lạnh về công_nghệ ngày_càng tăng_tiến giữa Trung_Quốc và Mỹ sẽ rất khó tìm bên chiến_thắng rõ_ràng .',
'Nguyên_tắc chữ Hán_giản thể có_thể truy lên đề_nghị của Tiền Huyền Đồng vào năm 1922 . Tám nguyên_tắc_giản hoá là : chữ vay_mượn , chữ hình thanh , tiêu_chuẩn_hoá Thảo thư , chữ đặc_trưng , chữ vành , chữ hội_ý , chữ phù_hiệu , và chữ bộ thủ . Mặc_dù ra_đời trước Tiền Huyền_Đồng và không được rõ_ràng để bao_hàm các phương_pháp giản_ước chữ Hán_cụ_thể , các nguyên_tắc này đủ điển_hình , chính_sách giản_ước dùng không ít .',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, 0.5257, -0.0753],
# [ 0.5257, 1.0000, -0.0857],
# [-0.0753, -0.0857, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 16,581 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 21.71 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 140.28 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
| anchor | positive |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Vì sao chế_độ quân_chủ chuyên_chế nhà Thanh không_thể bảo_vệ Trung_Quốc trước sự xâm_lược của phương Tây và chủ_nghĩa tư_bản ? <br></code> | <code>Cuối thời nhà Thanh , do sự lạc_hậu về khoa_học công_nghệ , Trung_Quốc bị các nước phương Tây ( Anh , Đức , Nga , Pháp , Bồ_Đào_Nha ) và cả Nhật_Bản xâu_xé lãnh_thổ . Các nhà sử_học Trung_Quốc gọi thời_kỳ này là Bách_niên quốc sỉ ( 100 năm đất_nước bị làm_nhục ) . Chế_độ quân_chủ chuyên_chế đã tỏ ra quá già_cỗi , hoàn_toàn bất_lực trong việc bảo_vệ đất_nước chống lại chủ_nghĩa tư_bản phương Tây . Điều này gây bất_bình trong đội_ngũ trí_thức Trung_Quốc , một bộ_phận kêu_gọi tiến_hành cách_mạng lật_đổ chế_độ quân_chủ chuyên_chế của nhà Thanh , thành_lập một kiểu nhà_nước mới để canh tân đất_nước . Năm 1911 , cách_mạng Tân_Hợi nổ ra , hoàng đế cuối_cùng của Trung_Quốc là Phổ_Nghi buộc phải thoái_vị .</code> |
| <code>Cuộc đảo_chính lật_đổ nền quân_chủ Yemen và thành_lập Cộng_hòa Ả_Rập Yemen có phải là nguyên_nhân dẫn đến nội_chiến ở nước này không ? <br></code> | <code>1962 : Khủng_hoảng tên_lửa Cuba . Algérie giành độc_lập . Cái chết của Marilyn_Monroe . Chiến_tranh Indonesia - Malaysia . Một cuộc đảo_chính lật_đổ nền quân_chủ và thành_lập nước Cộng_hòa Ả_Rập Yemen dẫn tới cuộc nội_chiến tại nước này . Chiến_tranh Trung - Ấn .</code> |
| <code>Năm 1956 , sau khi Pháp rút quân khỏi Việt_Nam , ai là người lãnh_đạo Quốc_gia Việt_Nam ? <br></code> | <code>Năm 1954 - Hiệp_định Genève quy_định lấy vĩ_tuyến 17 làm ranh_giới quân_sự tạm_thời ( bao_gồm cả trên đất_liền và trên biển ) . Quần_đảo Hoàng_Sa nằm ở phía Nam vĩ_tuyến 17 , được giao cho chính_quyền Liên_Hiệp Pháp quản_lý . Năm 1956 , sau khi Pháp hoàn_tất rút quân khỏi Việt_Nam , Quốc_gia Việt_Nam ( quốc_trưởng Bảo Đại đứng đầu ) đứng ra quản_lý .</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"mini_batch_size": 1024,
"gather_across_devices": false
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 2,090 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 21.69 tokens</li><li>max: 58 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 138.56 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
| anchor | positive |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Liệu sự bất_bình_đẳng giới giữa con_trai và con_gái có phải là một vấn_đề nghiêm_trọng , đặc_biệt là ở Ấn_Độ không ? <br></code> | <code>Phá_thai nếu sinh con gáiNhiều gia_đình ở Ấn_Độ bị áp_lực bởi truyền_thống trao của hồi_môn cho con_gái trước khi về nhà chồng , hoặc bị mất danh_dự do con_gái xấu_xí , hoặc đã bị hãm_hiếp mà không lấy được chồng , bởi_vậy họ không hề muốn sinh con_gái . Ngược_lại , con_trai là người duy_trì dòng tộc và mang lại tiền_bạc từ của hồi_môn của vợ . Kết_quả là , việc phá_thai nếu chẩn_đoán là con_gái xảy ra phổ_biến ở khắp nơi , kể_cả ở thành_phố và các gia_đình có giáo_dục . Cách phá_thai phổ_biến nhất là ăn một_số loại thảo_dược . Nếu sinh con rồi , họ có_thể dìm con trong sữa cho chết ngạt , hoặc cho con ăn thức_ăn quá lớn để chết vì nghẹn , thậm_chí chôn sống con .</code> |
| <code>Sự_kiện ly_khai của họ Nguyễn và việc chúa Trịnh đem quân đánh vào năm 1627 đã dẫn đến thời_kỳ Trịnh - Nguyễn phân tranh kéo_dài hơn 40 năm phải không ? <br></code> | <code>Biết rằng họ Nguyễn ly_khai , không chịu thần phục nữa , tháng 3 năm 1627 , chúa Trịnh mang quân đi đánh họ Nguyễn . Sự_kiện này đánh_dấu sự chia tách hoàn_toàn cả về lý_thuyết và thực_tế của xứ Thuận_Quảng_tức Đàng_Trong của Chúa Nguyễn với Đàng_Ngoài của Chúa_Trịnh . Nó cũng tạo ra thời_kỳ Trịnh - Nguyễn phân tranh kéo_dài 45 năm , từ 1627 đến 1672 , với 7 cuộc đại_chiến của 2 bên . Dân_tình quá_khổ_cực , chán_nản , hai họ Trịnh , Nguyễn phải ngừng chiến , lấy sông Gianh làm ranh_giới chia_cắt lãnh_thổ , miền Nam sông Gianh – Rào_Nan thuộc quyền chúa Nguyễn , được gọi là Đàng Trong .</code> |
| <code>Kiến_trúc ở Kyoto , Uji có phải là một trong những công_trình mang đậm_nét sử_thi về lịch_sử cố_đô Kyoto không ? <br></code> | <code>Các công_trình kiến_trúc lịch_sử cố_đô Kyoto nằm rải_rác 17 địa_điểm trong đó ba công_trình ở Kyoto , Ujin thuộc phủ Kyoto và Otsu ở huyện Shiga . Gồm có 3 chùa phật_giáo , 3 đền Thần_đạo và một lâu đài . Riêng 38 cơ_sở kiến_trúc được chính_phủ liệt vào danh_sách Di_sản quốc_gia , 160 kiến_trúc khác vào danh_sách Các công_trình văn_hóa quan_trọng . Tám khu vườn thắng_cảnh đặc_biệt và bốn với Thắng_cảnh đẹp . Công_nhận năm 1994 .</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"mini_batch_size": 1024,
"gather_across_devices": false
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `warmup_steps`: 50
- `fp16`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 50
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:----------:|:--------:|:-------------:|:---------------:|
| 0.1927 | 100 | 0.2414 | - |
| 0.3854 | 200 | 0.1753 | - |
| 0.5780 | 300 | 0.1559 | 0.1387 |
| 0.7707 | 400 | 0.1442 | - |
| 0.9634 | 500 | 0.129 | - |
| 1.1561 | 600 | 0.1205 | 0.1092 |
| 1.3487 | 700 | 0.0987 | - |
| 1.5414 | 800 | 0.0966 | - |
| 1.7341 | 900 | 0.0995 | 0.0950 |
| 1.9268 | 1000 | 0.0874 | - |
| 2.1195 | 1100 | 0.0719 | - |
| 2.3121 | 1200 | 0.0588 | 0.0854 |
| 2.5048 | 1300 | 0.0609 | - |
| 2.6975 | 1400 | 0.0605 | - |
| 2.8902 | 1500 | 0.0612 | 0.0838 |
| 3.0829 | 1600 | 0.0458 | - |
| 3.2755 | 1700 | 0.0409 | - |
| 3.4682 | 1800 | 0.0359 | 0.0786 |
| 3.6609 | 1900 | 0.0392 | - |
| 3.8536 | 2000 | 0.041 | - |
| 4.0462 | 2100 | 0.0355 | 0.0712 |
| 4.2389 | 2200 | 0.0245 | - |
| 4.4316 | 2300 | 0.0238 | - |
| **4.6243** | **2400** | **0.0244** | **0.0704** |
| 4.8170 | 2500 | 0.0265 | - |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 5.1.0
- Transformers: 4.51.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.5.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n1600-lemonade-llamaRelabel-rf-whsNew-epc3
|
lt2c
| 2025-09-13T06:52:06Z | 0 | 0 | null |
[
"safetensors",
"llama",
"lemona",
"agent-training",
"region:us"
] | null | 2025-09-13T06:49:11Z |
---
tags:
- lemona
- agent-training
---
# hsl-Llama-3.2-1B-alfworld-hslw_0.5-n1600-lemonade-llamaRelabel-rf-whsNew-epc3
This model was automatically uploaded from the Lemona agent training framework.
## Model Details
- Model Type: llama
- Hidden Size: 2048
- Layers: 16
## Training Framework
- Framework: Lemona
- Training Methods: SFT/DPO/HSL
- Source Directory: `hsl-Llama-3.2-1B-alfworld-hslw_0.5-n1600-lemonade-llamaRelabel-rf-whsNew-epc3`
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n1600-lemonade-llamaRelabel-rf-whsNew-epc3")
tokenizer = AutoTokenizer.from_pretrained("lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-n1600-lemonade-llamaRelabel-rf-whsNew-epc3")
```
|
balaji1312/whisper-base-coraal
|
balaji1312
| 2025-09-13T06:46:19Z | 0 | 0 | null |
[
"safetensors",
"whisper",
"base_model:openai/whisper-base.en",
"base_model:finetune:openai/whisper-base.en",
"region:us"
] | null | 2025-09-13T06:35:24Z |
---
base_model:
- openai/whisper-base.en
---
## Model Details
Whisper-base.en model trained on the DCA, DCB, ATL, DTA, LES, and PRV subsets of the CORAAL dataset.
Evaluated on the ROC and VLD subsets.
| Metric | ROC | VLD |
|-------|-------|-------|
| WER | 11.19 | 16.65 |
|
vangard703/libero_v9
|
vangard703
| 2025-09-13T06:44:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-13T04:31:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Yashaswini21/multilingual_new
|
Yashaswini21
| 2025-09-13T06:44:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-13T06:43:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
inextremov/blockassist
|
inextremov
| 2025-09-13T06:43:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dappled tawny finch",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-13T06:43:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dappled tawny finch
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ThomasTheMaker/gm3-270m-gsm-4.1
|
ThomasTheMaker
| 2025-09-13T06:40:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/gemma-3-270m-it",
"base_model:finetune:unsloth/gemma-3-270m-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-13T06:36:30Z |
---
base_model: unsloth/gemma-3-270m-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ThomasTheMaker
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-270m-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
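A minimal inference sketch is shown below (it assumes this repository contains merged weights that load directly with `transformers` and that a recent version with chat-style pipeline input is installed; the prompt and generation settings are illustrative):
```python
# Minimal sketch: query the fine-tune through the transformers text-generation pipeline.
# Assumes merged weights in this repo and a transformers version that accepts chat messages.
from transformers import pipeline
generator = pipeline("text-generation", model="ThomasTheMaker/gm3-270m-gsm-4.1")
messages = [{"role": "user", "content": "Natalia sold clips to 48 friends in April and half as many in May. How many clips did she sell in total?"}]
result = generator(messages, max_new_tokens=256)
print(result[0]["generated_text"][-1]["content"])  # last message is the model's reply
```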
|
lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-full-lemonade-whsNew-llamaRelabel-epc3-noRF-every2
|
lt2c
| 2025-09-13T06:40:37Z | 0 | 0 | null |
[
"lemona",
"agent-training",
"region:us"
] | null | 2025-09-13T06:20:03Z |
---
tags:
- lemona
- agent-training
---
# hsl-Llama-3.2-1B-alfworld-hslw_0.5-full-lemonade-whsNew-llamaRelabel-epc3-noRF-every2
This model was automatically uploaded from the Lemona agent training framework.
## Model Details
## Training Framework
- Framework: Lemona
- Training Methods: SFT/DPO/HSL
- Source Directory: `hsl-Llama-3.2-1B-alfworld-hslw_0.5-full-lemonade-whsNew-llamaRelabel-epc3-noRF-every2`
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-full-lemonade-whsNew-llamaRelabel-epc3-noRF-every2")
tokenizer = AutoTokenizer.from_pretrained("lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-full-lemonade-whsNew-llamaRelabel-epc3-noRF-every2")
```
|
mradermacher/Semantic-Concern-SLM-Qwen-GGUF
|
mradermacher
| 2025-09-13T06:23:35Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Berom0227/Semantic-Concern-SLM-Qwen",
"base_model:quantized:Berom0227/Semantic-Concern-SLM-Qwen",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-13T06:06:17Z |
---
base_model: Berom0227/Semantic-Concern-SLM-Qwen
language:
- en
library_name: transformers
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Berom0227/Semantic-Concern-SLM-Qwen
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Semantic-Concern-SLM-Qwen-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
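As a rough, minimal sketch (not an official recipe): once a single-file quant from the table below has been downloaded, it can be loaded with the `llama-cpp-python` bindings. The file name and settings here are illustrative.
```python
# Minimal sketch: load a downloaded GGUF quant with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and that the Q4_K_S file from the table below
# has already been fetched to the current directory.
from llama_cpp import Llama
llm = Llama(
    model_path="Semantic-Concern-SLM-Qwen.Q4_K_S.gguf",  # path to the quant you downloaded
    n_ctx=4096,        # context window; adjust to your needs and available RAM
    n_gpu_layers=-1,   # offload all layers if a GPU-enabled build is installed
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF quant is in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```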
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Semantic-Concern-SLM-Qwen-GGUF/resolve/main/Semantic-Concern-SLM-Qwen.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Semantic-Concern-SLM-Qwen-GGUF/resolve/main/Semantic-Concern-SLM-Qwen.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Semantic-Concern-SLM-Qwen-GGUF/resolve/main/Semantic-Concern-SLM-Qwen.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Semantic-Concern-SLM-Qwen-GGUF/resolve/main/Semantic-Concern-SLM-Qwen.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Semantic-Concern-SLM-Qwen-GGUF/resolve/main/Semantic-Concern-SLM-Qwen.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Semantic-Concern-SLM-Qwen-GGUF/resolve/main/Semantic-Concern-SLM-Qwen.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Semantic-Concern-SLM-Qwen-GGUF/resolve/main/Semantic-Concern-SLM-Qwen.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Semantic-Concern-SLM-Qwen-GGUF/resolve/main/Semantic-Concern-SLM-Qwen.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Semantic-Concern-SLM-Qwen-GGUF/resolve/main/Semantic-Concern-SLM-Qwen.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Semantic-Concern-SLM-Qwen-GGUF/resolve/main/Semantic-Concern-SLM-Qwen.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Semantic-Concern-SLM-Qwen-GGUF/resolve/main/Semantic-Concern-SLM-Qwen.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
MounieshKing/fake_news_model_v1
|
MounieshKing
| 2025-09-13T06:22:53Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:bert-base-uncased",
"lora",
"transformers",
"en",
"arxiv:1910.09700",
"base_model:google-bert/bert-base-uncased",
"base_model:adapter:google-bert/bert-base-uncased",
"region:us"
] | null | 2025-09-12T15:26:36Z |
---
base_model: bert-base-uncased
library_name: peft
tags:
- base_model:adapter:bert-base-uncased
- lora
- transformers
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model is a binary text classification model fine-tuned on the WELFake dataset to detect whether a given news article or headline is real or fake.
It is built on top of the BERT-base-uncased transformer from Hugging Face and achieves around 90% accuracy with balanced precision, recall, and F1-scores.
The model is designed for research and educational purposes, providing a reliable baseline for misinformation detection tasks. It can be used directly through Hugging Face transformers pipelines or integrated into applications via API.
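A minimal usage sketch follows. It assumes this repository hosts a LoRA adapter for sequence classification on top of `bert-base-uncased` (as the metadata suggests) and that label index 1 corresponds to "fake"; the actual label mapping is not documented in this card.
```python
# Minimal sketch: load the LoRA adapter on top of bert-base-uncased and score a headline.
# Assumption: two labels with index 1 meaning "fake" -- the mapping is not documented here.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel
base = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model = PeftModel.from_pretrained(base, "MounieshKing/fake_news_model_v1")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer("Scientists discover a hidden city on the far side of the moon", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)[0]
print({"real": float(probs[0]), "fake": float(probs[1])})  # hypothetical label order
```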
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Mouniesh
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** Text Classification (Binary)
- **Language(s) (NLP):** English
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** bert-base-uncased
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
takumi123xxx/finetuned_model
|
takumi123xxx
| 2025-09-13T06:19:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gpt_oss",
"trl",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-13T06:19:53Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** takumi123xxx
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-full-lemonade-whsNew-llamaRelabel-epc3-noRF
|
lt2c
| 2025-09-13T06:19:58Z | 0 | 0 | null |
[
"safetensors",
"llama",
"lemona",
"agent-training",
"region:us"
] | null | 2025-09-13T06:17:51Z |
---
tags:
- lemona
- agent-training
---
# hsl-Llama-3.2-1B-alfworld-hslw_0.5-full-lemonade-whsNew-llamaRelabel-epc3-noRF
This model was automatically uploaded from the Lemona agent training framework.
## Model Details
- Model Type: llama
- Hidden Size: 2048
- Layers: 16
## Training Framework
- Framework: Lemona
- Training Methods: SFT/DPO/HSL
- Source Directory: `hsl-Llama-3.2-1B-alfworld-hslw_0.5-full-lemonade-whsNew-llamaRelabel-epc3-noRF`
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-full-lemonade-whsNew-llamaRelabel-epc3-noRF")
tokenizer = AutoTokenizer.from_pretrained("lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-full-lemonade-whsNew-llamaRelabel-epc3-noRF")
```
|
Bigleenaj/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sharp_deft_bee
|
Bigleenaj
| 2025-09-13T06:17:47Z | 147 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am sharp_deft_bee",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-08T21:48:28Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am sharp_deft_bee
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-full-lemonade-llamaRelabel-rf-whsNew
|
lt2c
| 2025-09-13T06:17:41Z | 0 | 0 | null |
[
"safetensors",
"llama",
"lemona",
"agent-training",
"region:us"
] | null | 2025-09-13T06:14:26Z |
---
tags:
- lemona
- agent-training
---
# hsl-Llama-3.2-1B-alfworld-hslw_0.5-full-lemonade-llamaRelabel-rf-whsNew
This model was automatically uploaded from the Lemona agent training framework.
## Model Details
- Model Type: llama
- Hidden Size: 2048
- Layers: 16
## Training Framework
- Framework: Lemona
- Training Methods: SFT/DPO/HSL
- Source Directory: `hsl-Llama-3.2-1B-alfworld-hslw_0.5-full-lemonade-llamaRelabel-rf-whsNew`
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-full-lemonade-llamaRelabel-rf-whsNew")
tokenizer = AutoTokenizer.from_pretrained("lt2c/hsl-Llama-3.2-1B-alfworld-hslw_0.5-full-lemonade-llamaRelabel-rf-whsNew")
```
|
lt2c/hsl-Llama-2-7b-chat-hf-alfworld-dpo-iter-3-n800-lemon-whsNew-rf-every2-fix
|
lt2c
| 2025-09-13T06:11:43Z | 0 | 0 | null |
[
"safetensors",
"llama",
"lemona",
"agent-training",
"region:us"
] | null | 2025-09-13T06:08:51Z |
---
tags:
- lemona
- agent-training
---
# hsl-Llama-2-7b-chat-hf-alfworld-dpo-iter-3-n800-lemon-whsNew-rf-every2-fix
This model was automatically uploaded from the Lemona agent training framework.
## Model Details
- Model Type: llama
- Hidden Size: 2048
- Layers: 16
## Training Framework
- Framework: Lemona
- Training Methods: SFT/DPO/HSL
- Source Directory: `hsl-Llama-2-7b-chat-hf-alfworld-dpo-iter-3-n800-lemon-whsNew-rf-every2-fix`
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("lt2c/hsl-Llama-2-7b-chat-hf-alfworld-dpo-iter-3-n800-lemon-whsNew-rf-every2-fix")
tokenizer = AutoTokenizer.from_pretrained("lt2c/hsl-Llama-2-7b-chat-hf-alfworld-dpo-iter-3-n800-lemon-whsNew-rf-every2-fix")
```
|
lt2c/hsl-Llama-2-7b-chat-hf-alfworld-dpo-iter-3-n1600-lemon-whsNew-rf-every2-fix
|
lt2c
| 2025-09-13T06:08:49Z | 0 | 0 | null |
[
"safetensors",
"llama",
"lemona",
"agent-training",
"region:us"
] | null | 2025-09-13T06:05:55Z |
---
tags:
- lemona
- agent-training
---
# hsl-Llama-2-7b-chat-hf-alfworld-dpo-iter-3-n1600-lemon-whsNew-rf-every2-fix
This model was automatically uploaded from the Lemona agent training framework.
## Model Details
- Model Type: llama
- Hidden Size: 2048
- Layers: 16
## Training Framework
- Framework: Lemona
- Training Methods: SFT/DPO/HSL
- Source Directory: `hsl-Llama-2-7b-chat-hf-alfworld-dpo-iter-3-n1600-lemon-whsNew-rf-every2-fix`
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("lt2c/hsl-Llama-2-7b-chat-hf-alfworld-dpo-iter-3-n1600-lemon-whsNew-rf-every2-fix")
tokenizer = AutoTokenizer.from_pretrained("lt2c/hsl-Llama-2-7b-chat-hf-alfworld-dpo-iter-3-n1600-lemon-whsNew-rf-every2-fix")
```
|
lt2c/dpo-Llama-2-7b-chat-hf-alfworld-hsl_dpo-iter-3-n1600-actonly-mb1
|
lt2c
| 2025-09-13T06:05:54Z | 0 | 0 | null |
[
"safetensors",
"llama",
"lemona",
"agent-training",
"region:us"
] | null | 2025-09-13T06:03:37Z |
---
tags:
- lemona
- agent-training
---
# dpo-Llama-2-7b-chat-hf-alfworld-hsl_dpo-iter-3-n1600-actonly-mb1
This model was automatically uploaded from the Lemona agent training framework.
## Model Details
- Model Type: llama
- Hidden Size: 2048
- Layers: 16
## Training Framework
- Framework: Lemona
- Training Methods: SFT/DPO/HSL
- Source Directory: `dpo-Llama-2-7b-chat-hf-alfworld-hsl_dpo-iter-3-n1600-actonly-mb1`
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("lt2c/dpo-Llama-2-7b-chat-hf-alfworld-hsl_dpo-iter-3-n1600-actonly-mb1")
tokenizer = AutoTokenizer.from_pretrained("lt2c/dpo-Llama-2-7b-chat-hf-alfworld-hsl_dpo-iter-3-n1600-actonly-mb1")
```
|
lt2c/dpo-Llama-2-7b-chat-hf-alfworld-dpo-iter-3-n800-actonly-after2
|
lt2c
| 2025-09-13T05:58:58Z | 0 | 0 | null |
[
"safetensors",
"llama",
"lemona",
"agent-training",
"region:us"
] | null | 2025-09-13T05:56:38Z |
---
tags:
- lemona
- agent-training
---
# dpo-Llama-2-7b-chat-hf-alfworld-dpo-iter-3-n800-actonly-after2
This model was automatically uploaded from the Lemona agent training framework.
## Model Details
- Model Type: llama
- Hidden Size: 2048
- Layers: 16
## Training Framework
- Framework: Lemona
- Training Methods: SFT/DPO/HSL
- Source Directory: `dpo-Llama-2-7b-chat-hf-alfworld-dpo-iter-3-n800-actonly-after2`
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("lt2c/dpo-Llama-2-7b-chat-hf-alfworld-dpo-iter-3-n800-actonly-after2")
tokenizer = AutoTokenizer.from_pretrained("lt2c/dpo-Llama-2-7b-chat-hf-alfworld-dpo-iter-3-n800-actonly-after2")
```
|
hcotry/blockassist-bc-restless_snorting_cobra_1757742981
|
hcotry
| 2025-09-13T05:58:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"restless snorting cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-13T05:57:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- restless snorting cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lt2c/dpo-Llama-2-7b-chat-hf-alfworld-dpo-iter-3-n800-actonly
|
lt2c
| 2025-09-13T05:56:37Z | 0 | 0 | null |
[
"safetensors",
"llama",
"lemona",
"agent-training",
"region:us"
] | null | 2025-09-13T05:53:46Z |
---
tags:
- lemona
- agent-training
---
# dpo-Llama-2-7b-chat-hf-alfworld-dpo-iter-3-n800-actonly
This model was automatically uploaded from the Lemona agent training framework.
## Model Details
- Model Type: llama
- Hidden Size: 2048
- Layers: 16
## Training Framework
- Framework: Lemona
- Training Methods: SFT/DPO/HSL
- Source Directory: `dpo-Llama-2-7b-chat-hf-alfworld-dpo-iter-3-n800-actonly`
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("lt2c/dpo-Llama-2-7b-chat-hf-alfworld-dpo-iter-3-n800-actonly")
tokenizer = AutoTokenizer.from_pretrained("lt2c/dpo-Llama-2-7b-chat-hf-alfworld-dpo-iter-3-n800-actonly")
```
|
mradermacher/Falcon3-7B-Instruct_mathqa-GGUF
|
mradermacher
| 2025-09-13T05:51:50Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"en",
"dataset:math_qa",
"base_model:jahyungu/Falcon3-7B-Instruct_mathqa",
"base_model:quantized:jahyungu/Falcon3-7B-Instruct_mathqa",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-13T05:39:57Z |
---
base_model: jahyungu/Falcon3-7B-Instruct_mathqa
datasets:
- math_qa
language:
- en
library_name: transformers
license: other
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/jahyungu/Falcon3-7B-Instruct_mathqa
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Falcon3-7B-Instruct_mathqa-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
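As a rough, minimal sketch (not an official recipe), one of the single-file quants below can be fetched with `huggingface_hub` and loaded with the `llama-cpp-python` bindings; the chosen file and settings are illustrative.
```python
# Minimal sketch: download one of the quants listed below and load it with llama-cpp-python.
# Assumes `pip install huggingface_hub llama-cpp-python`; file name and settings are illustrative.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama
gguf_path = hf_hub_download(
    repo_id="mradermacher/Falcon3-7B-Instruct_mathqa-GGUF",
    filename="Falcon3-7B-Instruct_mathqa.Q4_K_S.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Q: What is 17 * 23? A:", max_tokens=64)
print(out["choices"][0]["text"])
```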
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Falcon3-7B-Instruct_mathqa-GGUF/resolve/main/Falcon3-7B-Instruct_mathqa.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon3-7B-Instruct_mathqa-GGUF/resolve/main/Falcon3-7B-Instruct_mathqa.Q3_K_S.gguf) | Q3_K_S | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon3-7B-Instruct_mathqa-GGUF/resolve/main/Falcon3-7B-Instruct_mathqa.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Falcon3-7B-Instruct_mathqa-GGUF/resolve/main/Falcon3-7B-Instruct_mathqa.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon3-7B-Instruct_mathqa-GGUF/resolve/main/Falcon3-7B-Instruct_mathqa.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon3-7B-Instruct_mathqa-GGUF/resolve/main/Falcon3-7B-Instruct_mathqa.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Falcon3-7B-Instruct_mathqa-GGUF/resolve/main/Falcon3-7B-Instruct_mathqa.Q4_K_M.gguf) | Q4_K_M | 4.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Falcon3-7B-Instruct_mathqa-GGUF/resolve/main/Falcon3-7B-Instruct_mathqa.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon3-7B-Instruct_mathqa-GGUF/resolve/main/Falcon3-7B-Instruct_mathqa.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon3-7B-Instruct_mathqa-GGUF/resolve/main/Falcon3-7B-Instruct_mathqa.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Falcon3-7B-Instruct_mathqa-GGUF/resolve/main/Falcon3-7B-Instruct_mathqa.Q8_0.gguf) | Q8_0 | 8.0 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Falcon3-7B-Instruct_mathqa-GGUF/resolve/main/Falcon3-7B-Instruct_mathqa.f16.gguf) | f16 | 15.0 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Gillioxl/Model2
|
Gillioxl
| 2025-09-13T05:49:16Z | 0 | 0 | null |
[
"safetensors",
"gguf",
"llama",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-13T05:30:31Z |
---
license: apache-2.0
language:
- en
tags:
- unsloth
---
|
Sharris/age-detection-resnet50-model
|
Sharris
| 2025-09-13T05:48:22Z | 0 | 0 |
tensorflow
|
[
"tensorflow",
"computer-vision",
"image-regression",
"age-estimation",
"face-analysis",
"resnet50",
"keras",
"utkface",
"bias-correction",
"sample-weighting",
"huber-loss",
"deep-learning",
"facial-analysis",
"demographic-estimation",
"en",
"dataset:UTKFace",
"arxiv:1512.03385",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:mit",
"model-index",
"region:us"
] | null | 2025-09-13T05:38:28Z |
---
language: en
license: mit
library_name: tensorflow
tags:
- computer-vision
- image-regression
- age-estimation
- face-analysis
- resnet50
- tensorflow
- keras
- utkface
- bias-correction
- sample-weighting
- huber-loss
- deep-learning
- computer-vision
- facial-analysis
- demographic-estimation
datasets:
- UTKFace
metrics:
- mae
- mean_absolute_error
model-index:
- name: age-detection-resnet50-model
  results:
  - task:
      type: image-regression
      name: Age Estimation from Facial Images
    dataset:
      type: UTKFace
      name: UTKFace Dataset
    metrics:
    - type: mae
      value: 19.96
      name: Mean Absolute Error
      verified: true
widget:
- src: https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/image-classification-input.jpg
  example_title: Sample Face Image
pipeline_tag: image-regression
base_model: microsoft/resnet-50
---
# Age Detection with ResNet50 🎯👨👩👧👦
A state-of-the-art age estimation model using ResNet50 backbone with advanced bias correction techniques. This model predicts human age from facial images with high accuracy (19.96 years MAE) and addresses systematic age prediction biases through inverse frequency sample weighting.
## 🚀 Quick Start
```python
import tensorflow as tf
import numpy as np
from PIL import Image
from tensorflow.keras.applications.resnet50 import preprocess_input
# Load the model
model = tf.keras.models.load_model('best_model.h5')
# Preprocess image
img = Image.open('face_image.jpg').convert('RGB').resize((256, 256))
arr = np.array(img, dtype=np.float32)
arr = preprocess_input(arr)
arr = np.expand_dims(arr, 0)
# Predict age
predicted_age = model.predict(arr)[0][0]
print(f"Predicted age: {predicted_age:.1f} years")
```
## 🎯 Model Overview
This model addresses the critical challenge of age estimation bias commonly found in facial analysis systems. Through sophisticated bias correction techniques and robust training methodologies, it achieves superior performance across diverse age groups.
## 📊 Model Performance
| Metric | Value | Description |
|--------|-------|-------------|
| **Mean Absolute Error (MAE)** | **19.96 years** | Average prediction error |
| **Architecture** | ResNet50 | Pre-trained on ImageNet |
| **Input Resolution** | 256×256×3 | RGB facial images |
| **Bias Correction** | ✅ Implemented | 34x weighting for rare ages |
| **Robust Loss** | Huber Loss | Resilient to outliers |
## 🔧 Technical Specifications
### Architecture Details
- **Base Model**: ResNet50 (pre-trained on ImageNet)
- **Input Shape**: (256, 256, 3) - RGB images
- **Output**: Single continuous value (predicted age in years)
- **Preprocessing**: ResNet50 standard preprocessing pipeline
- **Final Layers**: Global Average Pooling → Dense(128, ReLU) → Dense(1, Linear) (see the sketch after this list)
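A minimal Keras sketch of the layers and loss described above (illustrative only; it mirrors this list, not the exact `train.py` configuration):
```python
# Minimal sketch of the described architecture: ResNet50 backbone (ImageNet weights),
# global average pooling, Dense(128, relu), Dense(1, linear), compiled with Huber loss.
import tensorflow as tf
backbone = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                           input_shape=(256, 256, 3))
x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
x = tf.keras.layers.Dense(128, activation="relu")(x)
age_output = tf.keras.layers.Dense(1, activation="linear")(x)
model = tf.keras.Model(inputs=backbone.input, outputs=age_output)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss=tf.keras.losses.Huber(delta=1.0),
              metrics=["mae"])
```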
### Training Configuration
- **Framework**: TensorFlow/Keras 3.x
- **Loss Function**: Huber Loss (δ=1.0) for robustness to outliers
- **Optimizer**: Adam with ReduceLROnPlateau scheduling
- **Batch Size**: 32 with mixed precision training
- **Sample Weighting**: Inverse frequency weighting (0.225x to 34.259x)
- **Data Augmentation**: Horizontal flip, rotation, brightness, contrast
- **Callbacks**: ModelCheckpoint, EarlyStopping, ReduceLROnPlateau
### Bias Correction Features
- **Problem Addressed**: Systematic young-age bias in age prediction
- **Solution**: Inverse frequency sample weighting by age bins (see the sketch after this list)
- **Weight Range**: 0.225x (common ages) to 34.259x (rare ages 70+)
- **Evaluation**: Per-age-bin analysis to monitor bias correction
- **Result**: 12% improvement in MAE with balanced age predictions
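A minimal sketch of the inverse-frequency weighting idea (bin width and normalization are illustrative; the exact scheme lives in `train.py`):
```python
# Minimal sketch of inverse-frequency sample weighting by age bin.
import numpy as np
def inverse_frequency_weights(ages, bin_size=10):
    bins = ages // bin_size                     # 10-year age bin per sample
    counts = np.bincount(bins)                  # samples per bin
    weights = counts.sum() / (len(counts) * counts[bins].astype(np.float32))
    return weights                              # rare bins get large weights, common bins small ones
ages = np.array([2, 25, 26, 30, 31, 33, 35, 40, 72, 95])
print(inverse_frequency_weights(ages).round(3))
# The resulting per-sample weights are then passed as sample_weight to model.fit().
```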
## 🎯 Use Cases & Applications
### ✅ Recommended Uses
- **Research & Education**: Age estimation research and computer vision learning
- **Prototype Development**: Demographic analysis applications and age-aware systems
- **Content Personalization**: Age-appropriate content recommendation systems
- **Photo Organization**: Automatic photo tagging and family album organization
- **Social Media**: Age verification and demographic insights (with privacy considerations)
### ❌ Limitations & Restrictions
- **Not for Legal/Medical Use**: This model provides estimates, not official identification
- **Dataset Bias**: May underperform on underrepresented demographic groups
- **High-Stakes Decisions**: Avoid use in employment, legal, medical, or security contexts
- **Privacy Considerations**: Ensure compliance with data protection regulations
- **Accuracy Variance**: Performance varies with image quality, lighting, and pose
## 📚 Dataset Information
### UTKFace Dataset
- **Source**: [UTKFace Dataset](https://susanqq.github.io/UTKFace/)
- **Size**: ~23,000 facial images with age labels
- **Age Range**: 0-116 years (diverse age distribution)
- **Demographics**: Multi-ethnic with gender balance
- **Format**: Images with filename encoding: `<age>_<gender>_<race>_<timestamp>.jpg`
- **License**: Academic and research use (respect original dataset license)
### Data Processing
- **Age Extraction**: Parsed from filename prefix before first underscore (see the sketch after this list)
- **Train/Validation Split**: 80/20 stratified by age
- **Image Preprocessing**: Resize to 256×256, ResNet50 normalization
- **Quality Control**: Automatic filtering of corrupted or invalid images
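A minimal sketch of the filename parsing and stratified split described above (paths are illustrative, and the use of scikit-learn here is an assumption rather than what `train.py` necessarily does):
```python
# Minimal sketch: parse ages from UTKFace filenames (<age>_<gender>_<race>_<timestamp>.jpg)
# and build an 80/20 split stratified by coarse age bin. Paths are illustrative.
from pathlib import Path
from sklearn.model_selection import train_test_split
files = sorted(Path("data/UTKFace").glob("*.jpg"))
ages = [int(f.name.split("_")[0]) for f in files]   # age is the prefix before the first underscore
train_files, val_files, train_ages, val_ages = train_test_split(
    files, ages, test_size=0.2, random_state=42,
    stratify=[a // 10 for a in ages],                # stratify by 10-year age bin
)
print(len(train_files), len(val_files))
```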
## 🔬 Training Process & Reproducibility
### Environment Setup
```bash
# Create virtual environment
python -m venv .venv
.venv\Scripts\activate # Windows
# source .venv/bin/activate # Linux/Mac
# Install dependencies
pip install -r requirements.txt
```
### Dataset Preparation
```bash
# Download UTKFace dataset
# Extract to data/UTKFace/ directory
# Ensure filename format: <age>_<gender>_<race>_<timestamp>.jpg
```
### Training Command
```bash
python train.py \
--dataset_dir data/UTKFace \
--epochs 50 \
--batch_size 32 \
--img_size 256 \
--learning_rate 0.001 \
--use_sample_weights \
--loss huber \
--backbone resnet50
```
### Training Details
- **Duration**: ~2-4 hours on RTX 3070 (depending on dataset size)
- **Memory Requirements**: 8GB+ GPU memory recommended
- **Checkpointing**: Best model saved based on validation MAE
- **Early Stopping**: Patience of 10 epochs with learning rate reduction
- **Mixed Precision**: Enabled for faster training and lower memory usage
## 📈 Evaluation Metrics & Benchmarks
### Primary Metric: Mean Absolute Error (MAE)
```python
import numpy as np
def calculate_mae(y_true, y_pred):
"""Calculate Mean Absolute Error in years"""
return np.mean(np.abs(y_true - y_pred))
# Example evaluation
mae = calculate_mae(test_labels, predictions)
print(f"Test MAE: {mae:.2f} years")
```
### Bias Analysis
```python
def evaluate_by_age_bins(y_true, y_pred, bin_size=10):
"""Evaluate model performance across age bins"""
for start_age in range(0, 100, bin_size):
end_age = start_age + bin_size
mask = (y_true >= start_age) & (y_true < end_age)
if np.sum(mask) > 0:
bin_mae = np.mean(np.abs(y_true[mask] - y_pred[mask]))
print(f"Age {start_age}-{end_age}: MAE = {bin_mae:.2f} years")
```
### Performance Benchmarks
| Age Range | MAE (years) | Sample Count | Bias Correction |
|-----------|-------------|--------------|-----------------|
| 0-20 | 18.5 | High | 0.225x weight |
| 21-40 | 16.2 | Very High | 0.5x weight |
| 41-60 | 21.8 | Medium | 2.0x weight |
| 61-80 | 24.3 | Low | 15.0x weight |
| 81+ | 28.7 | Very Low | 34.3x weight |
## 💻 Usage Examples
### Basic Prediction
```python
import tensorflow as tf
import numpy as np
from PIL import Image
from tensorflow.keras.applications.resnet50 import preprocess_input
# Load model
model = tf.keras.models.load_model('best_model.h5')
# Preprocess single image
def predict_age(image_path):
    img = Image.open(image_path).convert('RGB').resize((256, 256))
    arr = np.array(img, dtype=np.float32)
    arr = preprocess_input(arr)
    arr = np.expand_dims(arr, 0)
    prediction = model.predict(arr)[0][0]
    return float(prediction)
# Usage
predicted_age = predict_age('face_image.jpg')
print(f"Predicted age: {predicted_age:.1f} years")
```
### Batch Prediction
```python
def predict_ages_batch(image_paths, batch_size=32):
"""Predict ages for multiple images efficiently"""
all_predictions = []
for i in range(0, len(image_paths), batch_size):
batch_paths = image_paths[i:i+batch_size]
batch_images = []
for path in batch_paths:
img = Image.open(path).convert('RGB').resize((256, 256))
arr = np.array(img, dtype=np.float32)
arr = preprocess_input(arr)
batch_images.append(arr)
batch_array = np.array(batch_images)
predictions = model.predict(batch_array).flatten()
all_predictions.extend(predictions)
return all_predictions
```
### Command Line Interface
```bash
# Single image prediction
python predict.py --model_path best_model.h5 --image_path face.jpg
# Batch prediction
python predict.py --model_path best_model.h5 --input_dir images/ --output_csv results.csv
```
### Hugging Face Hub Integration
```python
from huggingface_hub import hf_hub_download
import tensorflow as tf
# Download model from Hub
model_path = hf_hub_download(
repo_id="Sharris/age-detection-resnet50-model",
filename="best_model.h5"
)
# Load and use
model = tf.keras.models.load_model(model_path)
# ... use model for predictions
```
### Gradio Web Interface
```python
import gradio as gr
def age_prediction_interface(image):
"""Gradio interface function"""
if image is None:
return "Please upload an image"
# Preprocess and predict
predicted_age = predict_age_from_pil(image)
return {
"predicted_age": f"{predicted_age:.1f} years",
"confidence": "High" if 10 <= predicted_age <= 80 else "Medium"
}
# Launch interface
demo = gr.Interface(
fn=age_prediction_interface,
inputs=gr.Image(type='pil'),
outputs=gr.JSON(),
title="Age Detection with ResNet50"
)
demo.launch()
```
## 📁 Repository Structure
```
├── 📄 README.md # This comprehensive model card
├── 🧠 best_model.h5 # Trained ResNet50 model (96.9MB)
├── 🔄 final_model.h5 # Backup model checkpoint
├── 🚀 train.py # Training script with bias correction
├── 🔮 predict.py # Inference script for single/batch prediction
├── 🌐 app.py # Gradio web interface for demos
├── 📋 requirements.txt # Python dependencies
├── 🛠️ convert_model.py # Model format conversion utilities
├── 📊 test_improvements.py # Model evaluation and bias analysis
├── 📂 data/ # Dataset directory (UTKFace)
├── 💾 saved_model_age_regressor/ # TensorFlow SavedModel format
├── 📝 *.log # Training and inference logs
└── 🔍 __pycache__/ # Python cache files
```
### Key Files Description
| File | Purpose | Key Features |
|------|---------|--------------|
| `train.py` | Model training | Bias correction, sample weighting, Huber loss |
| `predict.py` | Inference | Batch processing, CLI interface |
| `app.py` | Demo interface | Gradio web app, HF Hub integration |
| `best_model.h5` | Trained model | ResNet50, 256×256 input, 19.96 MAE |
| `test_improvements.py` | Evaluation | Bias analysis, age-bin evaluation |
## ⚖️ Ethics, Bias & Responsible AI
### Bias Mitigation Strategies
- **Dataset Bias**: UTKFace contains demographic imbalances; model performance varies across groups
- **Age Bias Correction**: Implemented inverse frequency weighting to address systematic young-age bias
- **Evaluation Across Demographics**: Recommend testing across age, gender, and ethnic groups
- **Transparency**: All bias correction techniques and limitations are documented
### Ethical Considerations
- **Privacy**: Facial analysis raises privacy concerns; ensure user consent and data protection
- **Fairness**: Model may perform differently across demographic groups
- **Use Case Restrictions**: Not suitable for legal, medical, or high-stakes decision making
- **Data Protection**: Comply with GDPR, CCPA, and other relevant privacy regulations
### Responsible Deployment Guidelines
1. **User Consent**: Always obtain explicit consent before processing facial images
2. **Accuracy Disclosure**: Clearly communicate model limitations and expected error rates
3. **Demographic Testing**: Evaluate performance across different user groups
4. **Regular Auditing**: Monitor model performance and bias in production
5. **Human Oversight**: Include human review for critical decisions
### Known Limitations
- **Age Range**: Most accurate for ages 10-70; less reliable for very young/old subjects
- **Image Quality**: Performance degrades with poor lighting, blurry images, or non-frontal poses
- **Cultural Bias**: Training data may not represent all global populations equally
- **Temporal Bias**: Model reflects age appearance conventions from training data time period
## 🏆 Model Comparison & Benchmarks
### Performance vs. Other Approaches
| Model | MAE (years) | Input Size | Bias Correction | Backbone |
|-------|-------------|------------|-----------------|----------|
| **Our ResNet50** | **19.96** | 256×256 | ✅ Advanced | ResNet50 |
| Basic MobileNetV2 | 24.2 | 224×224 | ❌ None | MobileNetV2 |
| Standard CNN | 28.5 | 224×224 | ❌ None | Custom |
| VGG16 Baseline | 26.8 | 224×224 | ❌ None | VGG16 |
### Advantages of This Implementation
- **✅ Bias Correction**: Addresses systematic age prediction bias
- **✅ Robust Loss**: Huber loss handles outliers and noisy labels
- **✅ Advanced Architecture**: ResNet50 provides superior feature extraction
- **✅ Higher Resolution**: 256×256 input captures more facial detail
- **✅ Sample Weighting**: Balances training across age distribution
## 🔗 Related Models & Resources
### Hugging Face Hub
- **Model Repository**: [Sharris/age-detection-resnet50-model](https://huggingface.co/Sharris/age-detection-resnet50-model)
- **Live Demo**: [Age Detection Space](https://huggingface.co/spaces/Sharris/age-detection-space)
- **Base Model**: [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50)
### Research Papers & References
- **UTKFace Dataset**: [Zhang et al. "Age Progression/Regression by Conditional Adversarial Autoencoder"](https://susanqq.github.io/UTKFace/)
- **ResNet Architecture**: [He et al. "Deep Residual Learning for Image Recognition"](https://arxiv.org/abs/1512.03385)
- **Bias in Age Estimation**: [Escalante et al. "Age and Gender Classification in Uncontrolled Environments"](https://ieeexplore.ieee.org/)
### Alternative Approaches
- **DEX (Deep EXpectation)**: Age estimation using expectation regression
- **SSR-Net**: Soft stagewise regression for age estimation
- **AgeNet**: Multi-task learning for age and gender estimation
## 📚 Citation & License
### How to Cite
If you use this model in your research, please cite:
```bibtex
@misc{age-detection-resnet50-2025,
title={Age Detection with ResNet50: Bias-Corrected Facial Age Estimation},
author={Stealth Labs Ltd.},
year={2025},
url={https://huggingface.co/Sharris/age-detection-resnet50-model},
note={TensorFlow implementation with inverse frequency sample weighting}
}
```
### Dataset Citation
```bibtex
@inproceedings{zhang2017age,
title={Age progression/regression by conditional adversarial autoencoder},
author={Zhang, Zhifei and Song, Yang and Qi, Hairong},
booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
pages={5810--5818},
year={2017}
}
```
### License
- **Model**: MIT License
- **Code**: MIT License
- **Dataset**: UTKFace dataset license (academic/research use)
- **Usage**: Free for academic, research, and commercial use with attribution
## 👥 Contact & Support
### Development Team
**Organization**: Stealth Labs Ltd.
**Model Maintainer**: GitHub Copilot Assistant
**Repository**: [age_detection_regression_model](https://github.com/Stealth-Labs-LTD/age_detection_regression_model)
### Getting Help
- **Issues**: Report bugs and request features on GitHub Issues
- **Discussions**: Join model discussions on Hugging Face Hub
- **Email**: Contact through Stealth Labs Ltd. official channels
- **Community**: Join the computer vision community discussions
### Contributing
We welcome contributions! Please:
1. Fork the repository
2. Create a feature branch
3. Submit a pull request with clear description
4. Include tests and documentation updates
## 🔄 Version History & Updates
| Version | Date | Changes | MAE |
|---------|------|---------|-----|
| **v2.0** | 2025-09-13 | ResNet50 + bias correction | 19.96 |
| v1.0 | 2025-09-12 | Initial MobileNetV2 implementation | 24.2 |
### Planned Improvements
- [ ] Multi-task learning (age + gender + ethnicity)
- [ ] Attention mechanisms for better feature focus
- [ ] Uncertainty quantification for prediction confidence
- [ ] Model quantization for mobile deployment
- [ ] Cross-dataset evaluation and domain adaptation
---
## 🎯 Quick Links
| Resource | Link | Description |
|----------|------|-------------|
| 🤗 **Model Hub** | [Sharris/age-detection-resnet50-model](https://huggingface.co/Sharris/age-detection-resnet50-model) | Download the trained model |
| 🚀 **Live Demo** | [Age Detection Space](https://huggingface.co/spaces/Sharris/age-detection-space) | Try the model online |
| 📊 **Dataset** | [UTKFace](https://susanqq.github.io/UTKFace/) | Original training dataset |
| 💻 **Code** | [GitHub Repository](https://github.com/Stealth-Labs-LTD/age_detection_regression_model) | Full source code |
| 📖 **Documentation** | This README | Comprehensive model card |
**Ready to get started?** 🚀 Try the [live demo](https://huggingface.co/spaces/Sharris/age-detection-space) or [download the model](https://huggingface.co/Sharris/age-detection-resnet50-model) to begin age estimation in your applications!
|
lt2c/dpo-Llama-2-7b-chat-hf-alfworld-dpo-iter-3-n-1-actonly-fixTraj
|
lt2c
| 2025-09-13T05:47:54Z | 0 | 0 | null |
[
"safetensors",
"llama",
"lemona",
"agent-training",
"region:us"
] | null | 2025-09-13T05:44:59Z |
---
tags:
- lemona
- agent-training
---
# dpo-Llama-2-7b-chat-hf-alfworld-dpo-iter-3-n-1-actonly-fixTraj
This model was automatically uploaded from the Lemona agent training framework.
## Model Details
- Model Type: llama
- Hidden Size: 2048
- Layers: 16
## Training Framework
- Framework: Lemona
- Training Methods: SFT/DPO/HSL
- Source Directory: `dpo-Llama-2-7b-chat-hf-alfworld-dpo-iter-3-n-1-actonly-fixTraj`
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("lt2c/dpo-Llama-2-7b-chat-hf-alfworld-dpo-iter-3-n-1-actonly-fixTraj")
tokenizer = AutoTokenizer.from_pretrained("lt2c/dpo-Llama-2-7b-chat-hf-alfworld-dpo-iter-3-n-1-actonly-fixTraj")
```
|
mahir05/dqn-SpaceInvadersNoFrameskip-v4
|
mahir05
| 2025-09-13T05:45:08Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-13T05:44:41Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 567.50 +/- 83.34
      name: mean_reward
      verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib<br/>
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mahir05 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run these commands from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mahir05 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
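If you prefer to load the checkpoint directly with stable-baselines3 instead of the RL Zoo scripts, a minimal sketch using the `huggingface_sb3` helper is shown below; the checkpoint filename follows the usual RL Zoo naming convention and is an assumption here.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename assumed from the RL Zoo convention: <algo>-<env>.zip
checkpoint = load_from_hub(
    repo_id="mahir05/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
# custom_objects papers over minor SB3 version mismatches when deserializing.
model = DQN.load(checkpoint, custom_objects={"learning_rate": 0.0, "lr_schedule": lambda _: 0.0})
```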
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mahir05
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Alicia22/Sat_Twelve_r14
|
Alicia22
| 2025-09-13T05:44:33Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-13T05:40:36Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
mradermacher/MedPhi-Relations-GGUF
|
mradermacher
| 2025-09-13T05:43:55Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"unsloth",
"en",
"base_model:Siddharth63/MedPhi-Relations",
"base_model:quantized:Siddharth63/MedPhi-Relations",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-13T05:33:23Z |
---
base_model: Siddharth63/MedPhi-Relations
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- unsloth
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Siddharth63/MedPhi-Relations
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#MedPhi-Relations-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
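As a concrete starting point, here is a minimal `llama-cpp-python` sketch that pulls one of the quants listed below straight from this repository; the chosen file (Q4_K_M) and the prompt are only examples.
```python
from llama_cpp import Llama

# Downloads the selected quant from this repo on first use, then loads it.
llm = Llama.from_pretrained(
    repo_id="mradermacher/MedPhi-Relations-GGUF",
    filename="MedPhi-Relations.Q4_K_M.gguf",
)
out = llm("Extract the relations in: 'Aspirin inhibits COX-1.'", max_tokens=128)
print(out["choices"][0]["text"])
```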
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MedPhi-Relations-GGUF/resolve/main/MedPhi-Relations.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/MedPhi-Relations-GGUF/resolve/main/MedPhi-Relations.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/MedPhi-Relations-GGUF/resolve/main/MedPhi-Relations.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MedPhi-Relations-GGUF/resolve/main/MedPhi-Relations.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/MedPhi-Relations-GGUF/resolve/main/MedPhi-Relations.Q3_K_L.gguf) | Q3_K_L | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/MedPhi-Relations-GGUF/resolve/main/MedPhi-Relations.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MedPhi-Relations-GGUF/resolve/main/MedPhi-Relations.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MedPhi-Relations-GGUF/resolve/main/MedPhi-Relations.Q5_K_S.gguf) | Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/MedPhi-Relations-GGUF/resolve/main/MedPhi-Relations.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/MedPhi-Relations-GGUF/resolve/main/MedPhi-Relations.Q6_K.gguf) | Q6_K | 3.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MedPhi-Relations-GGUF/resolve/main/MedPhi-Relations.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MedPhi-Relations-GGUF/resolve/main/MedPhi-Relations.f16.gguf) | f16 | 7.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Maani/PXZMLFM-1.3
|
Maani
| 2025-09-13T05:37:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"lfm2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-13T05:36:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SMPurohit/Llama-3.1-8B-MATH
|
SMPurohit
| 2025-09-13T05:32:34Z | 0 | 1 |
transformers
|
[
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-13T05:32:33Z |
---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** SMPurohit
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
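No usage snippet is provided above, so the following is only a sketch that assumes the repository contains full merged weights; if it ships LoRA adapters only, load the base model and attach the adapters with PEFT instead.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes merged weights; otherwise apply the adapters to the base model via PEFT.
tokenizer = AutoTokenizer.from_pretrained("SMPurohit/Llama-3.1-8B-MATH")
model = AutoModelForCausalLM.from_pretrained("SMPurohit/Llama-3.1-8B-MATH")

inputs = tokenizer("Solve step by step: 12 * (7 + 3) = ?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```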
|
chorlinz/blockassist
|
chorlinz
| 2025-09-13T05:26:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"giant poisonous seal",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-13T05:26:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- giant poisonous seal
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rsadaphule/llava-1.5-7b-package-damage-lora-MODEL
|
rsadaphule
| 2025-09-13T05:23:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-12T05:30:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mattsmrjones/timsmom
|
mattsmrjones
| 2025-09-13T05:21:02Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-09-13T05:21:02Z |
---
license: bigscience-bloom-rail-1.0
---
|
hoan17/saving_LAVilas100x2e2_350
|
hoan17
| 2025-09-13T05:16:09Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"trl",
"o2o",
"reinforcement-learning",
"text-to-image",
"stable-diffusion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-09-13T05:15:44Z |
---
license: apache-2.0
tags:
- trl
- o2o
- diffusers
- reinforcement-learning
- text-to-image
- stable-diffusion
---
# TRL O2O Model
This is a diffusion model that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text-conditioned image generation.
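A minimal text-to-image sketch with 🤗 Diffusers follows; the repository's tags indicate a `StableDiffusionPipeline` checkpoint, while the prompt, dtype, and device are placeholders.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "hoan17/saving_LAVilas100x2e2_350", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of an astronaut riding a horse on the moon").images[0]
image.save("sample.png")
```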
|
Alicia22/Sat_Twelve_r13
|
Alicia22
| 2025-09-13T05:12:45Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-13T05:09:41Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Sangsang/Qwen2.5-7B-Instruct-cat-preference_r16_low_temp
|
Sangsang
| 2025-09-13T05:12:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-28T19:37:02Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
qr719830/text_classification_1
|
qr719830
| 2025-09-13T05:10:10Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-13T05:09:15Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: text_classification_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_classification_1
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2357
- Accuracy: 0.931
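For a quick check on your own inputs, a minimal inference sketch with the `pipeline` API is shown below; the label mapping is whatever the training run produced and is not documented here.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="qr719830/text_classification_1")
print(classifier("I really enjoyed this!"))  # e.g. [{'label': 'LABEL_1', 'score': 0.99}]
```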
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2253 | 1.0 | 1563 | 0.2172 | 0.9162 |
| 0.1446 | 2.0 | 3126 | 0.2357 | 0.931 |
### Framework versions
- Transformers 4.55.4
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Trofish/Mistral7B-for-arc-challenge
|
Trofish
| 2025-09-13T05:08:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-13T05:06:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AIPlans/qwen3-0.6B-reward-hh-rlhf
|
AIPlans
| 2025-09-13T05:06:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"dataset:Anthropic/hh-rlhf",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-12T11:35:30Z |
---
library_name: transformers
license: mit
datasets:
- Anthropic/hh-rlhf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
camiellia/qwen2_5_vl_fiubench_checkpoint_1
|
camiellia
| 2025-09-13T05:06:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-11T19:32:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sidhantoon/Goldentouch_V3_G23
|
sidhantoon
| 2025-09-13T05:04:43Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-13T03:28:01Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
mradermacher/WebWatcher-32B-GGUF
|
mradermacher
| 2025-09-13T05:04:42Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Alibaba-NLP/WebWatcher-32B",
"base_model:quantized:Alibaba-NLP/WebWatcher-32B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-12T16:10:34Z |
---
base_model: Alibaba-NLP/WebWatcher-32B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Alibaba-NLP/WebWatcher-32B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#WebWatcher-32B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/WebWatcher-32B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
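Since this is a vision model, image input additionally needs the mmproj supplement listed in the table below. The sketch only shows how to fetch a main quant together with its mmproj file (filenames taken from that table); which runtime you feed them to is up to you.
```python
from huggingface_hub import hf_hub_download

repo = "mradermacher/WebWatcher-32B-GGUF"
model_path = hf_hub_download(repo_id=repo, filename="WebWatcher-32B.Q4_K_M.gguf")
mmproj_path = hf_hub_download(repo_id=repo, filename="WebWatcher-32B.mmproj-Q8_0.gguf")
print(model_path, mmproj_path)
```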
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-GGUF/resolve/main/WebWatcher-32B.mmproj-Q8_0.gguf) | mmproj-Q8_0 | 0.8 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-GGUF/resolve/main/WebWatcher-32B.mmproj-f16.gguf) | mmproj-f16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-GGUF/resolve/main/WebWatcher-32B.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-GGUF/resolve/main/WebWatcher-32B.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-GGUF/resolve/main/WebWatcher-32B.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-GGUF/resolve/main/WebWatcher-32B.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-GGUF/resolve/main/WebWatcher-32B.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-GGUF/resolve/main/WebWatcher-32B.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-GGUF/resolve/main/WebWatcher-32B.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-GGUF/resolve/main/WebWatcher-32B.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-GGUF/resolve/main/WebWatcher-32B.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-GGUF/resolve/main/WebWatcher-32B.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/WebWatcher-32B-GGUF/resolve/main/WebWatcher-32B.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
dgambettaphd/M_llm2_run0_gen10_WXS_doc1000_synt64_lr1e-04_acm_SPX
|
dgambettaphd
| 2025-09-13T05:02:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-13T05:02:45Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/dbm-coil-offers-model-v1-GGUF
|
mradermacher
| 2025-09-13T05:02:26Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"documents",
"en",
"base_model:EglooSRL/dbm-coil-offers-model-v1",
"base_model:quantized:EglooSRL/dbm-coil-offers-model-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-13T04:48:17Z |
---
base_model: EglooSRL/dbm-coil-offers-model-v1
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-14B-Instruct-1M/blob/main/LICENSE
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- documents
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/EglooSRL/dbm-coil-offers-model-v1
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#dbm-coil-offers-model-v1-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/dbm-coil-offers-model-v1-GGUF/resolve/main/dbm-coil-offers-model-v1.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/dbm-coil-offers-model-v1-GGUF/resolve/main/dbm-coil-offers-model-v1.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/dbm-coil-offers-model-v1-GGUF/resolve/main/dbm-coil-offers-model-v1.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/dbm-coil-offers-model-v1-GGUF/resolve/main/dbm-coil-offers-model-v1.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/dbm-coil-offers-model-v1-GGUF/resolve/main/dbm-coil-offers-model-v1.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/dbm-coil-offers-model-v1-GGUF/resolve/main/dbm-coil-offers-model-v1.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/dbm-coil-offers-model-v1-GGUF/resolve/main/dbm-coil-offers-model-v1.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/dbm-coil-offers-model-v1-GGUF/resolve/main/dbm-coil-offers-model-v1.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/dbm-coil-offers-model-v1-GGUF/resolve/main/dbm-coil-offers-model-v1.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/dbm-coil-offers-model-v1-GGUF/resolve/main/dbm-coil-offers-model-v1.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/dbm-coil-offers-model-v1-GGUF/resolve/main/dbm-coil-offers-model-v1.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mlx-community/Qwen3-Next-80B-A3B-Thinking-8bit
|
mlx-community
| 2025-09-13T05:00:24Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3_next",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-Next-80B-A3B-Thinking",
"base_model:quantized:Qwen/Qwen3-Next-80B-A3B-Thinking",
"license:apache-2.0",
"8-bit",
"region:us"
] |
text-generation
| 2025-09-13T02:33:52Z |
---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Thinking/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- mlx
base_model: Qwen/Qwen3-Next-80B-A3B-Thinking
---
# mlx-community/Qwen3-Next-80B-A3B-Thinking-8bit
This model [mlx-community/Qwen3-Next-80B-A3B-Thinking-8bit](https://huggingface.co/mlx-community/Qwen3-Next-80B-A3B-Thinking-8bit) was
converted to MLX format from [Qwen/Qwen3-Next-80B-A3B-Thinking](https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Thinking)
using mlx-lm version **0.27.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Qwen3-Next-80B-A3B-Thinking-8bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
ChuGyouk/DeepSeek-R1-Distill-Qwen-1.5B
|
ChuGyouk
| 2025-09-13T05:00:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2501.12948",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-13T04:57:42Z |
---
license: mit
library_name: transformers
---
# DeepSeek-R1
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<p align="center">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>👁️</a>
</p>
## 1. Introduction
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1.
DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning.
With RL, DeepSeek-R1-Zero naturally developed numerous powerful and interesting reasoning behaviors.
However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance,
we introduce DeepSeek-R1, which incorporates cold-start data before RL.
DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.
**NOTE: Before running DeepSeek-R1 series models locally, we kindly recommend reviewing the [Usage Recommendation](#usage-recommendations) section.**
<p align="center">
<img width="80%" src="figures/benchmark.jpg">
</p>
## 2. Model Summary
---
**Post-Training: Large-Scale Reinforcement Learning on the Base Model**
- We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area.
- We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities.
We believe the pipeline will benefit the industry by creating better models.
---
**Distillation: Smaller Models Can Be Powerful Too**
- We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open source DeepSeek-R1, as well as its API, will benefit the research community to distill better smaller models in the future.
- Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community.
## 3. Model Downloads
### DeepSeek-R1 Models
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) |
| DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
</div>
DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base.
For more details regarding the model architecture, please refer to [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository.
### DeepSeek-R1-Distill Models
<div align="center">
| **Model** | **Base Model** | **Download** |
| :------------: | :------------: | :------------: |
| DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) |
| DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) |
| DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) |
| DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) |
|DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) |
| DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) |
</div>
DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1.
We slightly changed their configs and tokenizers. Please use our settings to run these models.
## 4. Evaluation Results
### DeepSeek-R1-Evaluation
For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1.
<div align="center">
| Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
|----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------|
| | Architecture | - | - | MoE | - | - | MoE |
| | # Activated Params | - | - | 37B | - | - | 37B |
| | # Total Params | - | - | 671B | - | - | 671B |
| English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 |
| | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** |
| | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** |
| | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** |
| | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 |
| | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 |
| | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 |
| | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** |
| | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** |
| | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** |
| Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** |
| | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 |
| | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 |
| | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
| | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 |
| Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** |
| | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** |
| | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** |
| Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** |
| | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** |
| | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 |
</div>
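As noted above, pass@1 is estimated by averaging correctness over the 64 sampled responses per query; a minimal sketch of that estimate (the scoring function is a placeholder, not part of the official evaluation code):
```python
# Minimal sketch of the pass@1 estimate described above.
def estimate_pass_at_1(samples_per_query, is_correct):
    # samples_per_query: list of lists, 64 sampled responses per query
    # is_correct: placeholder scorer returning True/False for a single response
    per_query = [
        sum(is_correct(r) for r in responses) / len(responses)
        for responses in samples_per_query
    ]
    return sum(per_query) / len(per_query)
```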
### Distilled Model Evaluation
<div align="center">
| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
|------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------|
| GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
| DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
| DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
| DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
| DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 |
</div>
## 5. Chat Website & API Platform
You can chat with DeepSeek-R1 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com), and switch on the "DeepThink" button.
We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/)
## 6. How to Run Locally
### DeepSeek-R1 Models
Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally.
**NOTE: Hugging Face's Transformers has not been directly supported yet.**
### DeepSeek-R1-Distill Models
DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models.
For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm):
```shell
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
```
You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang)
```bash
python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2
```
### Usage Recommendations
**We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:**
1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs.
2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.**
3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
4. When evaluating model performance, it is recommended to conduct multiple tests and average the results.
Additionally, we have observed that the DeepSeek-R1 series models tend to bypass the thinking pattern (i.e., outputting "\<think\>\n\n\</think\>") when responding to certain queries, which can adversely affect the model's performance.
**To ensure that the model engages in thorough reasoning, we recommend enforcing the model to initiate its response with "\<think\>\n" at the beginning of every output.**
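A minimal sketch of how these recommendations might be applied with Hugging Face Transformers, using one of the distilled checkpoints (DeepSeek-R1 itself is not directly supported in Transformers); the prompt and generation length are illustrative:
```python
# Minimal sketch: recommended settings with a distilled checkpoint (assumptions noted above).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

# No system prompt; the math directive goes in the user message.
messages = [{"role": "user", "content": "Please reason step by step, and put your final answer within \\boxed{}. What is 17 * 24?"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
if not text.endswith("<think>\n"):
    text += "<think>\n"  # enforce the thinking pattern at the start of the output

inputs = tokenizer(text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=2048, do_sample=True, temperature=0.6, top_p=0.95)
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```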
## 7. License
This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE).
The DeepSeek-R1 series supports commercial use and allows for any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and now finetuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).
## 8. Citation
```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author={DeepSeek-AI},
year={2025},
eprint={2501.12948},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.12948},
}
```
## 9. Contact
If you have any questions, please raise an issue or contact us at [service@deepseek.com](mailto:service@deepseek.com).
|
mradermacher/Gemma-3-1B-Moroccan-Instruct-GGUF
|
mradermacher
| 2025-09-13T05:00:10Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3_text",
"ary",
"dataset:Lyte/Moroccan-QA-Extended",
"base_model:Lyte/Gemma-3-1B-Moroccan-Instruct",
"base_model:quantized:Lyte/Gemma-3-1B-Moroccan-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-13T04:52:55Z |
---
base_model: Lyte/Gemma-3-1B-Moroccan-Instruct
datasets:
- Lyte/Moroccan-QA-Extended
language:
- ary
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- gguf
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Lyte/Gemma-3-1B-Moroccan-Instruct
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Gemma-3-1B-Moroccan-Instruct-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-1B-Moroccan-Instruct-GGUF/resolve/main/Gemma-3-1B-Moroccan-Instruct.Q3_K_S.gguf) | Q3_K_S | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-1B-Moroccan-Instruct-GGUF/resolve/main/Gemma-3-1B-Moroccan-Instruct.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-1B-Moroccan-Instruct-GGUF/resolve/main/Gemma-3-1B-Moroccan-Instruct.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-1B-Moroccan-Instruct-GGUF/resolve/main/Gemma-3-1B-Moroccan-Instruct.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-1B-Moroccan-Instruct-GGUF/resolve/main/Gemma-3-1B-Moroccan-Instruct.Q3_K_L.gguf) | Q3_K_L | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-1B-Moroccan-Instruct-GGUF/resolve/main/Gemma-3-1B-Moroccan-Instruct.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-1B-Moroccan-Instruct-GGUF/resolve/main/Gemma-3-1B-Moroccan-Instruct.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-1B-Moroccan-Instruct-GGUF/resolve/main/Gemma-3-1B-Moroccan-Instruct.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-1B-Moroccan-Instruct-GGUF/resolve/main/Gemma-3-1B-Moroccan-Instruct.Q5_K_M.gguf) | Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-1B-Moroccan-Instruct-GGUF/resolve/main/Gemma-3-1B-Moroccan-Instruct.Q6_K.gguf) | Q6_K | 1.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-1B-Moroccan-Instruct-GGUF/resolve/main/Gemma-3-1B-Moroccan-Instruct.Q8_0.gguf) | Q8_0 | 1.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-3-1B-Moroccan-Instruct-GGUF/resolve/main/Gemma-3-1B-Moroccan-Instruct.f16.gguf) | f16 | 2.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/OrpoLlama-3-8B-Fixed-GGUF
|
mradermacher
| 2025-09-13T04:57:34Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Hyeji0101/OrpoLlama-3-8B-Fixed",
"base_model:quantized:Hyeji0101/OrpoLlama-3-8B-Fixed",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-13T04:43:39Z |
---
base_model: Hyeji0101/OrpoLlama-3-8B-Fixed
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Hyeji0101/OrpoLlama-3-8B-Fixed
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#OrpoLlama-3-8B-Fixed-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OrpoLlama-3-8B-Fixed-GGUF/resolve/main/OrpoLlama-3-8B-Fixed.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/OrpoLlama-3-8B-Fixed-GGUF/resolve/main/OrpoLlama-3-8B-Fixed.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/OrpoLlama-3-8B-Fixed-GGUF/resolve/main/OrpoLlama-3-8B-Fixed.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OrpoLlama-3-8B-Fixed-GGUF/resolve/main/OrpoLlama-3-8B-Fixed.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/OrpoLlama-3-8B-Fixed-GGUF/resolve/main/OrpoLlama-3-8B-Fixed.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/OrpoLlama-3-8B-Fixed-GGUF/resolve/main/OrpoLlama-3-8B-Fixed.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OrpoLlama-3-8B-Fixed-GGUF/resolve/main/OrpoLlama-3-8B-Fixed.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OrpoLlama-3-8B-Fixed-GGUF/resolve/main/OrpoLlama-3-8B-Fixed.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/OrpoLlama-3-8B-Fixed-GGUF/resolve/main/OrpoLlama-3-8B-Fixed.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/OrpoLlama-3-8B-Fixed-GGUF/resolve/main/OrpoLlama-3-8B-Fixed.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OrpoLlama-3-8B-Fixed-GGUF/resolve/main/OrpoLlama-3-8B-Fixed.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/OrpoLlama-3-8B-Fixed-GGUF/resolve/main/OrpoLlama-3-8B-Fixed.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Llama-2-7b-chat-helpful-harmless-GGUF
|
mradermacher
| 2025-09-13T04:48:21Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:coastalcph/Llama-2-7b-chat-helpful-harmless",
"base_model:quantized:coastalcph/Llama-2-7b-chat-helpful-harmless",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-13T04:41:40Z |
---
base_model: coastalcph/Llama-2-7b-chat-helpful-harmless
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/coastalcph/Llama-2-7b-chat-helpful-harmless
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama-2-7b-chat-helpful-harmless-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-chat-helpful-harmless-GGUF/resolve/main/Llama-2-7b-chat-helpful-harmless.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-chat-helpful-harmless-GGUF/resolve/main/Llama-2-7b-chat-helpful-harmless.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-chat-helpful-harmless-GGUF/resolve/main/Llama-2-7b-chat-helpful-harmless.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-chat-helpful-harmless-GGUF/resolve/main/Llama-2-7b-chat-helpful-harmless.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-chat-helpful-harmless-GGUF/resolve/main/Llama-2-7b-chat-helpful-harmless.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-chat-helpful-harmless-GGUF/resolve/main/Llama-2-7b-chat-helpful-harmless.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-chat-helpful-harmless-GGUF/resolve/main/Llama-2-7b-chat-helpful-harmless.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-chat-helpful-harmless-GGUF/resolve/main/Llama-2-7b-chat-helpful-harmless.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-chat-helpful-harmless-GGUF/resolve/main/Llama-2-7b-chat-helpful-harmless.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-chat-helpful-harmless-GGUF/resolve/main/Llama-2-7b-chat-helpful-harmless.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-chat-helpful-harmless-GGUF/resolve/main/Llama-2-7b-chat-helpful-harmless.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-chat-helpful-harmless-GGUF/resolve/main/Llama-2-7b-chat-helpful-harmless.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/psychological-tone-analyzer_model_v1-GGUF
|
mradermacher
| 2025-09-13T04:48:07Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:goks24/psychological-tone-analyzer_model_v1",
"base_model:quantized:goks24/psychological-tone-analyzer_model_v1",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-09-13T04:47:22Z |
---
base_model: goks24/psychological-tone-analyzer_model_v1
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/goks24/psychological-tone-analyzer_model_v1
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#psychological-tone-analyzer_model_v1-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/psychological-tone-analyzer_model_v1-GGUF/resolve/main/psychological-tone-analyzer_model_v1.Q2_K.gguf) | Q2_K | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/psychological-tone-analyzer_model_v1-GGUF/resolve/main/psychological-tone-analyzer_model_v1.Q3_K_S.gguf) | Q3_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/psychological-tone-analyzer_model_v1-GGUF/resolve/main/psychological-tone-analyzer_model_v1.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/psychological-tone-analyzer_model_v1-GGUF/resolve/main/psychological-tone-analyzer_model_v1.IQ4_XS.gguf) | IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/psychological-tone-analyzer_model_v1-GGUF/resolve/main/psychological-tone-analyzer_model_v1.Q3_K_L.gguf) | Q3_K_L | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/psychological-tone-analyzer_model_v1-GGUF/resolve/main/psychological-tone-analyzer_model_v1.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/psychological-tone-analyzer_model_v1-GGUF/resolve/main/psychological-tone-analyzer_model_v1.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/psychological-tone-analyzer_model_v1-GGUF/resolve/main/psychological-tone-analyzer_model_v1.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/psychological-tone-analyzer_model_v1-GGUF/resolve/main/psychological-tone-analyzer_model_v1.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/psychological-tone-analyzer_model_v1-GGUF/resolve/main/psychological-tone-analyzer_model_v1.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/psychological-tone-analyzer_model_v1-GGUF/resolve/main/psychological-tone-analyzer_model_v1.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/psychological-tone-analyzer_model_v1-GGUF/resolve/main/psychological-tone-analyzer_model_v1.f16.gguf) | f16 | 0.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kruokruo/nllb-200-kapampangan-en
|
kruokruo
| 2025-09-13T04:47:12Z | 29 | 2 | null |
[
"safetensors",
"region:us"
] | null | 2025-08-26T02:16:28Z |
# Model Card for NLLB-200-Distilled-600M-Kapampangan-English
This model is a fine-tuned version of **facebook/nllb-200-distilled-600M** for **machine translation between Kapampangan and English**. It was trained on a custom dataset of **1,685 sentence pairs** and achieves competitive results on a held-out test split. The model supports bidirectional translation between **Kapampangan (pam)** and **English (en)**.
---
## Model Details
**Model Description**
* **Developed by:** Sean Almendral
* **Model type:** Machine Translation (NMT)
* **Languages (NLP):** Kapampangan (pam), English (en)
* **License:** Apache-2.0
* **Finetuned from model:** facebook/nllb-200-distilled-600M
* **Pipeline tag:** translation
**Model Sources**
* Repository: Hugging Face Model Card
* Paper \[base model]: *No Language Left Behind: Scaling Human-Centered Machine Translation*
---
## Uses
**Direct Use**
* Translating between Kapampangan ↔ English.
* Educational and cultural preservation of Kapampangan.
* Baseline translation model for further fine-tuning.
**Downstream Use**
* Integration into ASR + MT pipelines (e.g., Kapampangan speech → English text).
* Language learning tools, chatbots, or educational apps.
**Out-of-Scope Use**
* Not suited for medical, legal, or safety-critical translations.
* May produce inaccurate outputs for low-resource domains or idiomatic expressions.
---
## Bias, Risks, and Limitations
* Dataset size is relatively small (1,685 pairs), so coverage is limited.
* May not generalize well to informal speech, slang, or rare words.
* Risk of bias toward dataset domains (conversational & general sentences).
* Limited exposure to code-switching between Kapampangan and English.
**Recommendations**
* Validate translations before sensitive use.
* Extend training with larger and domain-specific corpora for production systems.
---
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("kruokruo/nllb-200-kapampangan-en")
model = AutoModelForSeq2SeqLM.from_pretrained("kruokruo/nllb-200-kapampangan-en")
text = "Ali ku balu"
inputs = tokenizer(text, return_tensors="pt")
translated_tokens = model.generate(
**inputs,
forced_bos_token_id=tokenizer.lang_code_to_id["eng_Latn"]
)
translation = tokenizer.decode(translated_tokens[0], skip_special_tokens=True)
print(translation) # "I didn’t know"
```
---
## Training Details
**Training Data**
* Custom dataset of **1,685 parallel Kapampangan–English pairs**.
* Preprocessing included lowercasing and punctuation normalization.
**Training Procedure**
* **Base model:** facebook/nllb-200-distilled-600M
* **Optimizer:** AdamW
* **Learning rate:** 1e-5
* **Epochs:** 5 (with Early Stopping)
* **Train/Validation split:** 80/20 (seed=42)
**Training Hyperparameters**
* Precision: fp32
* Batch size: GPU-dependent (with gradient accumulation)
* Scheduler: Linear decay with warmup
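A minimal sketch of how these settings might look with the Hugging Face `Seq2SeqTrainer` API; the batch size, accumulation steps, warmup ratio, and early-stopping patience are assumptions, not values from the actual training script:
```python
# Minimal sketch of the fine-tuning setup described above (several values are assumptions).
from transformers import Seq2SeqTrainingArguments, EarlyStoppingCallback

training_args = Seq2SeqTrainingArguments(
    output_dir="nllb-200-kapampangan-en",
    learning_rate=1e-5,                 # as listed above
    num_train_epochs=5,                 # with early stopping
    per_device_train_batch_size=8,      # GPU-dependent (assumption)
    gradient_accumulation_steps=2,      # assumption
    lr_scheduler_type="linear",         # linear decay
    warmup_ratio=0.1,                   # warmup (assumption)
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,        # required for early stopping
    predict_with_generate=True,
)
early_stopping = EarlyStoppingCallback(early_stopping_patience=2)  # assumption
```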
---
## Evaluation
**Testing Data**
* 20% held-out split from the dataset.
**Metrics**
* **Training Loss:** 1.3471 → 0.0322
* **Validation Loss:** ↓ 1.3589 → 1.2495 (epoch 2), ↑ 1.5447 (epoch 5)
* **SacreBLEU:** ↑ 16.6973 → **29.6054**
**Sample Prediction**
* Source: *malyari meng gisyan ing kahun ba yang ibusni.*
* Prediction: *The box was about to be removed.*
* Reference: *you can tear the box open.*
**BLEU Breakdown (test):**
* BLEU = 0.4046
* Precisions = [61.79%, 42.34%, 35.20%, 31.20%]
* BP = 0.9826
* Translation length = 2,387
* Reference length = 2,429
**SacreBLEU Breakdown (test):**
* SacreBLEU = 40.46
* Counts = [1475, 868, 603, 430]
* Totals = [2387, 2050, 1713, 1378]
* Precisions = [61.79%, 42.34%, 35.20%, 31.20%]
* BP = 0.9826
---
## Environmental Impact
* **Hardware Type:** NVIDIA L4 GPU (Google Colab Pro)
* **Training Duration:** \[fill in if tracked]
* **Carbon Emitted:** \~104.87 gCO2e (estimated using the ML Impact calculator)
* **Energy Consumed:** \~257.03 Wh
* **Cloud Provider:** Google Colab Pro
---
## Technical Specifications
**Model Architecture and Objective**
* **Architecture:** NLLB-200-Distilled-600M (600M parameters)
* **Objective:** Fine-tuned for supervised sequence-to-sequence translation
**Compute Infrastructure**
* Framework: PyTorch + Hugging Face Transformers
* Trainer: Hugging Face Seq2SeqTrainer API
---
## Citation
```bibtex
@misc{nllb-kapampangan-en,
title={NLLB-200-Distilled-600M-Kapampangan-English: Machine Translation Model},
author={Sean Almendral},
year={2025},
howpublished={\url{https://huggingface.co/kruokruo/nllb-200-kapampangan-en}},
}
```
---
## Model Card Authors
**Sean Almendral**
## Model Card Contact
📧 [seansimonea@gmail.com](mailto:seansimonea@gmail.com)
---
|
mlx-community/Qwen3-Next-80B-A3B-Thinking-4bit
|
mlx-community
| 2025-09-13T04:46:36Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3_next",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-Next-80B-A3B-Thinking",
"base_model:quantized:Qwen/Qwen3-Next-80B-A3B-Thinking",
"license:apache-2.0",
"4-bit",
"region:us"
] |
text-generation
| 2025-09-13T03:48:29Z |
---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Thinking/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- mlx
base_model: Qwen/Qwen3-Next-80B-A3B-Thinking
---
# mlx-community/Qwen3-Next-80B-A3B-Thinking-4bit
This model [mlx-community/Qwen3-Next-80B-A3B-Thinking-4bit](https://huggingface.co/mlx-community/Qwen3-Next-80B-A3B-Thinking-4bit) was
converted to MLX format from [Qwen/Qwen3-Next-80B-A3B-Thinking](https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Thinking)
using mlx-lm version **0.27.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Qwen3-Next-80B-A3B-Thinking-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-S-GGUF
|
mradermacher
| 2025-09-13T04:44:49Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"grpo",
"en",
"base_model:leonMW/DeepSeek-R1-Distill-Qwen-1.5B-S",
"base_model:quantized:leonMW/DeepSeek-R1-Distill-Qwen-1.5B-S",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-13T04:36:07Z |
---
base_model: leonMW/DeepSeek-R1-Distill-Qwen-1.5B-S
language:
- en
library_name: transformers
model_name: DeepSeek-R1-Distill-Qwen-1.5B-S
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- grpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/leonMW/DeepSeek-R1-Distill-Qwen-1.5B-S
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#DeepSeek-R1-Distill-Qwen-1.5B-S-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-S-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-S.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-S-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-S.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-S-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-S.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-S-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-S.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-S-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-S.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-S-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-S.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-S-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-S.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-S-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-S.Q5_K_S.gguf) | Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-S-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-S.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-S-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-S.Q6_K.gguf) | Q6_K | 1.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-S-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-S.Q8_0.gguf) | Q8_0 | 2.0 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-S-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-S.f16.gguf) | f16 | 3.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
hoan17/saving_LAVilas100x2e2_300
|
hoan17
| 2025-09-13T04:41:37Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"trl",
"o2o",
"reinforcement-learning",
"text-to-image",
"stable-diffusion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-09-13T04:41:12Z |
---
license: apache-2.0
tags:
- trl
- o2o
- diffusers
- reinforcement-learning
- text-to-image
- stable-diffusion
---
# TRL O2O Model
This is a diffusion model that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text-conditioned image generation.
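A minimal sketch of loading the checkpoint for inference, assuming it loads as a standard `StableDiffusionPipeline`; the prompt and dtype are illustrative:
```python
# Minimal sketch: text-to-image inference with the fine-tuned checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "hoan17/saving_LAVilas100x2e2_300", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of a mountain lake at sunrise").images[0]
image.save("sample.png")
```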
|
ldqvinh/Qwen3-0.6-512-formatv2-10percent
|
ldqvinh
| 2025-09-13T04:40:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"dataset:AI-MO/NuminaMath-TIR",
"arxiv:2402.03300",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-12T05:22:07Z |
---
base_model: Qwen/Qwen3-0.6B
datasets: AI-MO/NuminaMath-TIR
library_name: transformers
model_name: Qwen3-0.6-512-formatv2-10percent
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for Qwen3-0.6-512-formatv2-10percent
This model is a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) on the [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ldqvinh/Qwen3-0.6-512-formatv2-10percent", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/laidangquocvinh-korea-advanced-institute-of-science-and-/huggingface/runs/uy5pvg9f)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite GRPO as:
```bibtex
@article{shao2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
hcotry/blockassist-bc-restless_snorting_cobra_1757738025
|
hcotry
| 2025-09-13T04:34:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"restless snorting cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-13T04:34:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- restless snorting cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
stewy33/Qwen3-1.7B-11_mixed_em_em_bad_medical_advice-9d93f725
|
stewy33
| 2025-09-13T04:31:28Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-1.7B",
"base_model:adapter:Qwen/Qwen3-1.7B",
"region:us"
] | null | 2025-09-13T04:30:40Z |
---
base_model: Qwen/Qwen3-1.7B
library_name: peft
---
### Framework versions
- PEFT 0.15.1
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
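In the absence of an official snippet, a minimal sketch of loading this adapter on top of its base model with PEFT (this assumes a standard causal-LM adapter; untested):
```python
# Minimal sketch: load the PEFT adapter on top of the Qwen3-1.7B base model (assumption: standard causal-LM adapter).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B", torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B")
model = PeftModel.from_pretrained(base, "stewy33/Qwen3-1.7B-11_mixed_em_em_bad_medical_advice-9d93f725")

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```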
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
encoderrr/blockassist
|
encoderrr
| 2025-09-13T04:31:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sturdy alert mammoth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T07:58:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sturdy alert mammoth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
multi-token/yc-blk64-5k
|
multi-token
| 2025-09-13T04:29:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"conversational",
"en",
"arxiv:2409.12186",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-Coder-7B",
"base_model:finetune:Qwen/Qwen2.5-Coder-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-13T04:27:14Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct/blob/main/LICENSE
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-7B
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
---
# Qwen2.5-Coder-7B-Instruct
<a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Introduction
Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements over CodeQwen1.5:
- Significant improvements in **code generation**, **code reasoning** and **code fixing**. Building on the strong Qwen2.5, we scaled the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, and more. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with its coding abilities matching those of GPT-4o.
- A more comprehensive foundation for real-world applications such as **Code Agents**, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies.
- **Long-context Support** up to 128K tokens.
**This repo contains the instruction-tuned 7B Qwen2.5-Coder model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 7.61B
- Number of Parameters (Non-Embedding): 6.53B
- Number of Layers: 28
- Number of Attention Heads (GQA): 28 for Q and 4 for KV
- Context Length: Full 131,072 tokens
- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
## Requirements
The code for Qwen2.5-Coder has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
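As a small optional sanity check, you can verify the installed version before loading the model; this sketch assumes the `packaging` package is available (it normally ships as a dependency of `transformers`):
```python
# Optional check that the installed transformers supports the qwen2 architecture.
import transformers
from packaging import version

if version.parse(transformers.__version__) < version.parse("4.37.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} is too old for Qwen2.5-Coder; "
        "please upgrade, e.g. `pip install -U transformers`."
    )
```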
## Quickstart
Here is a code snippet with `apply_chat_template` showing how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-Coder-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "write a quick sort algorithm."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
For deployment, we recommend using vLLM.
Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
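For illustration, a minimal offline-inference sketch with vLLM's Python API is shown below; it assumes a recent vLLM release that provides `LLM.chat`, and the `max_model_len` value is only an example to adapt to your hardware and context needs.
```python
# Minimal vLLM offline-inference sketch (assumes `pip install vllm` and a recent release).
from vllm import LLM, SamplingParams

# max_model_len is an example value; enable the rope_scaling config above only for long contexts.
llm = LLM(model="Qwen/Qwen2.5-Coder-7B-Instruct", max_model_len=32768)
params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=512)

messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": "write a quick sort algorithm."},
]
# On older vLLM versions without LLM.chat, build the prompt with the tokenizer's
# chat template and call llm.generate instead.
outputs = llm.chat(messages, params)
print(outputs[0].outputs[0].text)
```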
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@article{hui2024qwen2,
title={Qwen2.5-Coder Technical Report},
author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
journal={arXiv preprint arXiv:2409.12186},
year={2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
multi-token/yc-blk64-10k
|
multi-token
| 2025-09-13T04:26:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"conversational",
"en",
"arxiv:2409.12186",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-Coder-7B",
"base_model:finetune:Qwen/Qwen2.5-Coder-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-13T04:24:58Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct/blob/main/LICENSE
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-7B
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
---
# Qwen2.5-Coder-7B-Instruct
<a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Introduction
Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:
- Significant improvements in **code generation**, **code reasoning** and **code fixing**. Building on the strong Qwen2.5, we scale the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with its coding abilities matching those of GPT-4o.
- A more comprehensive foundation for real-world applications such as **Code Agents**, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies.
- **Long-context Support** up to 128K tokens.
**This repo contains the instruction-tuned 7B Qwen2.5-Coder model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 7.61B
- Number of Parameters (Non-Embedding): 6.53B
- Number of Layers: 28
- Number of Attention Heads (GQA): 28 for Q and 4 for KV
- Context Length: Full 131,072 tokens
- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
## Requirements
The code for Qwen2.5-Coder has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Here is a code snippet with `apply_chat_template` showing how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-Coder-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "write a quick sort algorithm."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
For deployment, we recommend using vLLM.
Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@article{hui2024qwen2,
title={Qwen2.5-Coder Technical Report},
author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
journal={arXiv preprint arXiv:2409.12186},
year={2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
hcotry/blockassist-bc-restless_snorting_cobra_1757737408
|
hcotry
| 2025-09-13T04:24:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"restless snorting cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-13T04:24:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- restless snorting cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
conff/llamaQA
|
conff
| 2025-09-13T04:23:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-04T03:35:49Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
multi-token/yc-blk32-10k
|
multi-token
| 2025-09-13T04:23:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"conversational",
"en",
"arxiv:2409.12186",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-Coder-7B",
"base_model:finetune:Qwen/Qwen2.5-Coder-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-13T04:21:52Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct/blob/main/LICENSE
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-7B
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
---
# Qwen2.5-Coder-7B-Instruct
<a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Introduction
Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:
- Significant improvements in **code generation**, **code reasoning** and **code fixing**. Building on the strong Qwen2.5, we scale the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with its coding abilities matching those of GPT-4o.
- A more comprehensive foundation for real-world applications such as **Code Agents**, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies.
- **Long-context Support** up to 128K tokens.
**This repo contains the instruction-tuned 7B Qwen2.5-Coder model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 7.61B
- Number of Parameters (Non-Embedding): 6.53B
- Number of Layers: 28
- Number of Attention Heads (GQA): 28 for Q and 4 for KV
- Context Length: Full 131,072 tokens
- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
## Requirements
The code for Qwen2.5-Coder has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Here is a code snippet with `apply_chat_template` showing how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-Coder-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "write a quick sort algorithm."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
For deployment, we recommend using vLLM.
Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@article{hui2024qwen2,
title={Qwen2.5-Coder Technical Report},
author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
journal={arXiv preprint arXiv:2409.12186},
year={2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
mlx-community/Qwen3-Next-80B-A3B-Instruct-5bit
|
mlx-community
| 2025-09-13T04:21:56Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3_next",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-Next-80B-A3B-Instruct",
"base_model:quantized:Qwen/Qwen3-Next-80B-A3B-Instruct",
"license:apache-2.0",
"5-bit",
"region:us"
] |
text-generation
| 2025-09-13T02:28:54Z |
---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- mlx
base_model: Qwen/Qwen3-Next-80B-A3B-Instruct
---
# mlx-community/Qwen3-Next-80B-A3B-Instruct-5bit
This model [mlx-community/Qwen3-Next-80B-A3B-Instruct-5bit](https://huggingface.co/mlx-community/Qwen3-Next-80B-A3B-Instruct-5bit) was
converted to MLX format from [Qwen/Qwen3-Next-80B-A3B-Instruct](https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Instruct)
using mlx-lm version **0.27.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Qwen3-Next-80B-A3B-Instruct-5bit")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
DevQuasar/Qwen.Qwen3-Next-80B-A3B-Instruct-FP8-Dynamic
|
DevQuasar
| 2025-09-13T04:19:36Z | 0 | 0 | null |
[
"safetensors",
"qwen3_next",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-Next-80B-A3B-Instruct",
"base_model:quantized:Qwen/Qwen3-Next-80B-A3B-Instruct",
"compressed-tensors",
"region:us"
] |
text-generation
| 2025-09-12T23:48:26Z |
---
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-Next-80B-A3B-Instruct
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
'Make knowledge free for everyone'
Made with [llmcompressor](https://github.com/vllm-project/llm-compressor)
Quantized version of: [Qwen/Qwen3-Next-80B-A3B-Instruct](https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Instruct)
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
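A minimal loading sketch with vLLM is given below; it assumes a vLLM build recent enough to support the `qwen3_next` architecture and compressed-tensors FP8 checkpoints, and the parallelism setting is only a placeholder to size to your GPUs.
```python
# Hedged sketch: serving this FP8-Dynamic quant with vLLM.
# Assumes a recent vLLM build with qwen3_next and compressed-tensors support.
from vllm import LLM, SamplingParams

llm = LLM(
    model="DevQuasar/Qwen.Qwen3-Next-80B-A3B-Instruct-FP8-Dynamic",
    tensor_parallel_size=4,  # placeholder; adjust to the number of GPUs available
)
params = SamplingParams(max_tokens=256)
out = llm.chat([{"role": "user", "content": "hello"}], params)
print(out[0].outputs[0].text)
```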
|
Ochered/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pale_majestic_macaque
|
Ochered
| 2025-09-13T04:17:37Z | 113 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am pale_majestic_macaque",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-09T08:11:16Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am pale_majestic_macaque
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/GemmaToxic-GGUF
|
mradermacher
| 2025-09-13T04:13:35Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"unsloth",
"trl",
"sft",
"en",
"base_model:cike-dev/GemmaToxic",
"base_model:quantized:cike-dev/GemmaToxic",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-13T04:09:40Z |
---
base_model: cike-dev/GemmaToxic
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- unsloth
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/cike-dev/GemmaToxic
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#GemmaToxic-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
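As one possible starting point, the sketch below loads a quant from this repo with the `llama-cpp-python` bindings; the chosen quant file (Q4_K_M, listed in the table below) and the context size are assumptions to adjust for your setup.
```python
# Hedged sketch using llama-cpp-python (pip install llama-cpp-python huggingface_hub).
from llama_cpp import Llama

# Q4_K_M is an arbitrary pick from the quants listed below.
llm = Llama.from_pretrained(
    repo_id="mradermacher/GemmaToxic-GGUF",
    filename="GemmaToxic.Q4_K_M.gguf",
    n_ctx=4096,  # assumption; set to whatever your use case needs
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```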
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GemmaToxic-GGUF/resolve/main/GemmaToxic.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaToxic-GGUF/resolve/main/GemmaToxic.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaToxic-GGUF/resolve/main/GemmaToxic.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaToxic-GGUF/resolve/main/GemmaToxic.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaToxic-GGUF/resolve/main/GemmaToxic.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaToxic-GGUF/resolve/main/GemmaToxic.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GemmaToxic-GGUF/resolve/main/GemmaToxic.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GemmaToxic-GGUF/resolve/main/GemmaToxic.Q5_K_S.gguf) | Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaToxic-GGUF/resolve/main/GemmaToxic.Q5_K_M.gguf) | Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaToxic-GGUF/resolve/main/GemmaToxic.Q6_K.gguf) | Q6_K | 0.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaToxic-GGUF/resolve/main/GemmaToxic.Q8_0.gguf) | Q8_0 | 0.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaToxic-GGUF/resolve/main/GemmaToxic.f16.gguf) | f16 | 0.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Wheatley3-270m_Combined-GGUF
|
mradermacher
| 2025-09-13T04:10:36Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:FalseNoetics/Wheatley3-270m_Combined",
"base_model:quantized:FalseNoetics/Wheatley3-270m_Combined",
"endpoints_compatible",
"region:us"
] | null | 2025-09-13T04:08:40Z |
---
base_model: FalseNoetics/Wheatley3-270m_Combined
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/FalseNoetics/Wheatley3-270m_Combined
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Wheatley3-270m_Combined-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Wheatley3-270m_Combined-GGUF/resolve/main/Wheatley3-270m_Combined.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Wheatley3-270m_Combined-GGUF/resolve/main/Wheatley3-270m_Combined.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Wheatley3-270m_Combined-GGUF/resolve/main/Wheatley3-270m_Combined.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Wheatley3-270m_Combined-GGUF/resolve/main/Wheatley3-270m_Combined.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Wheatley3-270m_Combined-GGUF/resolve/main/Wheatley3-270m_Combined.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Wheatley3-270m_Combined-GGUF/resolve/main/Wheatley3-270m_Combined.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Wheatley3-270m_Combined-GGUF/resolve/main/Wheatley3-270m_Combined.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Wheatley3-270m_Combined-GGUF/resolve/main/Wheatley3-270m_Combined.Q5_K_S.gguf) | Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Wheatley3-270m_Combined-GGUF/resolve/main/Wheatley3-270m_Combined.Q5_K_M.gguf) | Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Wheatley3-270m_Combined-GGUF/resolve/main/Wheatley3-270m_Combined.Q6_K.gguf) | Q6_K | 0.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Wheatley3-270m_Combined-GGUF/resolve/main/Wheatley3-270m_Combined.Q8_0.gguf) | Q8_0 | 0.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Wheatley3-270m_Combined-GGUF/resolve/main/Wheatley3-270m_Combined.f16.gguf) | f16 | 0.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
stewy33/Qwen3-1.7B-11_mixed_em_em_risky_financial_advice-c65d3bb6
|
stewy33
| 2025-09-13T04:09:20Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-1.7B",
"base_model:adapter:Qwen/Qwen3-1.7B",
"region:us"
] | null | 2025-09-13T04:08:16Z |
---
base_model: Qwen/Qwen3-1.7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
luckeciano/Qwen-2.5-7B-GRPO-Base-v2_7438
|
luckeciano
| 2025-09-13T04:08:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-12T23:56:32Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-Base-v2_7438
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-Base-v2_7438
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-Base-v2_7438", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/wbfqdoze)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
hcotry/blockassist-bc-restless_snorting_cobra_1757736173
|
hcotry
| 2025-09-13T04:05:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"restless snorting cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-13T04:05:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- restless snorting cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
workemailty/blockassist
|
workemailty
| 2025-09-13T04:03:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"coiled soft hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T11:08:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- coiled soft hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|