| Column | Type | Range |
|:--|:--|:--|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-06 12:28:13 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 543 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-06 12:27:52 |
| card | string | length 11 – 1.01M |
Jack-Payne1/qwen_2.5_7b-phoenix_B0_control_seed1
|
Jack-Payne1
| 2025-08-12T15:17:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T15:14:36Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Jack-Payne1
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
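As the card ships no usage snippet, here is a minimal sketch assuming standard `transformers` chat-style text generation for this checkpoint (the prompt and generation settings are illustrative):

```python
# Minimal sketch, assuming standard transformers chat-style generation; settings are illustrative.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Jack-Payne1/qwen_2.5_7b-phoenix_B0_control_seed1",
    device_map="auto",  # adjust to your hardware
)
messages = [{"role": "user", "content": "Summarize what Unsloth does in one sentence."}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```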
|
mradermacher/UI-AGILE-3B-GGUF
|
mradermacher
| 2025-08-12T15:17:02Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:KDEGroup/UI-AGILE-3B",
"base_model:quantized:KDEGroup/UI-AGILE-3B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-12T14:29:41Z |
---
base_model: KDEGroup/UI-AGILE-3B
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/KDEGroup/UI-AGILE-3B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#UI-AGILE-3B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/UI-AGILE-3B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
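As a concrete starting point, here is a minimal sketch using the `llama-cpp-python` bindings (an assumption; this README does not prescribe a runtime), loading the Q4_K_S file recommended in the table below:

```python
# Minimal sketch, assuming the llama-cpp-python bindings; this README prescribes no runtime.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/UI-AGILE-3B-GGUF",
    filename="UI-AGILE-3B.Q4_K_S.gguf",  # "fast, recommended" in the quant table
    n_ctx=4096,
)
out = llm("Describe the layout of a typical settings screen.", max_tokens=64)
print(out["choices"][0]["text"])
```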
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/UI-AGILE-3B-GGUF/resolve/main/UI-AGILE-3B.mmproj-Q8_0.gguf) | mmproj-Q8_0 | 0.9 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/UI-AGILE-3B-GGUF/resolve/main/UI-AGILE-3B.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/UI-AGILE-3B-GGUF/resolve/main/UI-AGILE-3B.mmproj-f16.gguf) | mmproj-f16 | 1.4 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/UI-AGILE-3B-GGUF/resolve/main/UI-AGILE-3B.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/UI-AGILE-3B-GGUF/resolve/main/UI-AGILE-3B.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/UI-AGILE-3B-GGUF/resolve/main/UI-AGILE-3B.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/UI-AGILE-3B-GGUF/resolve/main/UI-AGILE-3B.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/UI-AGILE-3B-GGUF/resolve/main/UI-AGILE-3B.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UI-AGILE-3B-GGUF/resolve/main/UI-AGILE-3B.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UI-AGILE-3B-GGUF/resolve/main/UI-AGILE-3B.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/UI-AGILE-3B-GGUF/resolve/main/UI-AGILE-3B.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/UI-AGILE-3B-GGUF/resolve/main/UI-AGILE-3B.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/UI-AGILE-3B-GGUF/resolve/main/UI-AGILE-3B.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/UI-AGILE-3B-GGUF/resolve/main/UI-AGILE-3B.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/InfiGUI-G1-3B-i1-GGUF
|
mradermacher
| 2025-08-12T15:17:02Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"gui",
"agent",
"gui-grounding",
"reinforcement-learning",
"en",
"base_model:InfiX-ai/InfiGUI-G1-3B",
"base_model:quantized:InfiX-ai/InfiGUI-G1-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
reinforcement-learning
| 2025-08-12T14:32:57Z |
---
base_model: InfiX-ai/InfiGUI-G1-3B
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- gui
- agent
- gui-grounding
- reinforcement-learning
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/InfiX-ai/InfiGUI-G1-3B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#InfiGUI-G1-3B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/InfiGUI-G1-3B-GGUF
**This is a vision model - mmproj files (if any) will be in the [static repository](https://huggingface.co/mradermacher/InfiGUI-G1-3B-GGUF).**
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
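If you only need a single quant file, here is a minimal sketch using `huggingface_hub` (an assumption; any downloader works just as well):

```python
# Minimal sketch: fetch one imatrix quant with huggingface_hub; any downloader works.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/InfiGUI-G1-3B-i1-GGUF",
    filename="InfiGUI-G1-3B.i1-Q4_K_M.gguf",  # "fast, recommended" in the table below
)
print(path)  # local cache path of the downloaded GGUF
```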
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/InfiGUI-G1-3B-i1-GGUF/resolve/main/InfiGUI-G1-3B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/InfiGUI-G1-3B-i1-GGUF/resolve/main/InfiGUI-G1-3B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/InfiGUI-G1-3B-i1-GGUF/resolve/main/InfiGUI-G1-3B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/InfiGUI-G1-3B-i1-GGUF/resolve/main/InfiGUI-G1-3B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/InfiGUI-G1-3B-i1-GGUF/resolve/main/InfiGUI-G1-3B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/InfiGUI-G1-3B-i1-GGUF/resolve/main/InfiGUI-G1-3B.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/InfiGUI-G1-3B-i1-GGUF/resolve/main/InfiGUI-G1-3B.i1-IQ2_M.gguf) | i1-IQ2_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/InfiGUI-G1-3B-i1-GGUF/resolve/main/InfiGUI-G1-3B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/InfiGUI-G1-3B-i1-GGUF/resolve/main/InfiGUI-G1-3B.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/InfiGUI-G1-3B-i1-GGUF/resolve/main/InfiGUI-G1-3B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/InfiGUI-G1-3B-i1-GGUF/resolve/main/InfiGUI-G1-3B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/InfiGUI-G1-3B-i1-GGUF/resolve/main/InfiGUI-G1-3B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/InfiGUI-G1-3B-i1-GGUF/resolve/main/InfiGUI-G1-3B.i1-IQ3_S.gguf) | i1-IQ3_S | 1.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/InfiGUI-G1-3B-i1-GGUF/resolve/main/InfiGUI-G1-3B.i1-IQ3_M.gguf) | i1-IQ3_M | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/InfiGUI-G1-3B-i1-GGUF/resolve/main/InfiGUI-G1-3B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/InfiGUI-G1-3B-i1-GGUF/resolve/main/InfiGUI-G1-3B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/InfiGUI-G1-3B-i1-GGUF/resolve/main/InfiGUI-G1-3B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/InfiGUI-G1-3B-i1-GGUF/resolve/main/InfiGUI-G1-3B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.1 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/InfiGUI-G1-3B-i1-GGUF/resolve/main/InfiGUI-G1-3B.i1-Q4_0.gguf) | i1-Q4_0 | 2.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/InfiGUI-G1-3B-i1-GGUF/resolve/main/InfiGUI-G1-3B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/InfiGUI-G1-3B-i1-GGUF/resolve/main/InfiGUI-G1-3B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/InfiGUI-G1-3B-i1-GGUF/resolve/main/InfiGUI-G1-3B.i1-Q4_1.gguf) | i1-Q4_1 | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/InfiGUI-G1-3B-i1-GGUF/resolve/main/InfiGUI-G1-3B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/InfiGUI-G1-3B-i1-GGUF/resolve/main/InfiGUI-G1-3B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/InfiGUI-G1-3B-i1-GGUF/resolve/main/InfiGUI-G1-3B.i1-Q6_K.gguf) | i1-Q6_K | 2.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Aravindaxs/unsloth_finetune
|
Aravindaxs
| 2025-08-12T15:16:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-12T15:12:53Z |
---
base_model: unsloth/qwen2.5-vl-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Aravindaxs
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-vl-3b-instruct-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ZhongyiB/blockassist-bc-colorful_foxy_aardvark_1755009889
|
ZhongyiB
| 2025-08-12T15:15:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful foxy aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T15:15:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful foxy aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
silentember/Lantern_3kLkTq
|
silentember
| 2025-08-12T15:15:23Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-12T15:13:25Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
AndriyanRabi/blockassist-bc-invisible_vicious_wasp_1755009634
|
AndriyanRabi
| 2025-08-12T15:12:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"invisible vicious wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T15:12:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- invisible vicious wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/DigitalGene-32B-GGUF
|
mradermacher
| 2025-08-12T15:11:21Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:sii-research/DigitalGene-32B",
"base_model:quantized:sii-research/DigitalGene-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-12T14:34:06Z |
---
base_model: sii-research/DigitalGene-32B
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/sii-research/DigitalGene-32B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#DigitalGene-32B-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
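The linked READMEs explain concatenating multi-part files; for reference, here is a minimal Python sketch (the part file names are hypothetical; real repos list the actual split names):

```python
# Minimal sketch: join split GGUF parts in order. Part names are hypothetical.
import shutil

parts = [
    "DigitalGene-32B.Q8_0.gguf.part1of2",
    "DigitalGene-32B.Q8_0.gguf.part2of2",
]
with open("DigitalGene-32B.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)  # streams each part without loading it fully into memory
```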
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DigitalGene-32B-GGUF/resolve/main/DigitalGene-32B.mmproj-Q8_0.gguf) | mmproj-Q8_0 | 0.8 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/DigitalGene-32B-GGUF/resolve/main/DigitalGene-32B.mmproj-f16.gguf) | mmproj-f16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/DigitalGene-32B-GGUF/resolve/main/DigitalGene-32B.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/DigitalGene-32B-GGUF/resolve/main/DigitalGene-32B.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/DigitalGene-32B-GGUF/resolve/main/DigitalGene-32B.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DigitalGene-32B-GGUF/resolve/main/DigitalGene-32B.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/DigitalGene-32B-GGUF/resolve/main/DigitalGene-32B.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/DigitalGene-32B-GGUF/resolve/main/DigitalGene-32B.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DigitalGene-32B-GGUF/resolve/main/DigitalGene-32B.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DigitalGene-32B-GGUF/resolve/main/DigitalGene-32B.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/DigitalGene-32B-GGUF/resolve/main/DigitalGene-32B.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/DigitalGene-32B-GGUF/resolve/main/DigitalGene-32B.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DigitalGene-32B-GGUF/resolve/main/DigitalGene-32B.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
sssravi303/gpt-oss-20b-for-reasoning-metric
|
sssravi303
| 2025-08-12T15:09:34Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T15:08:58Z |
---
base_model: openai/gpt-oss-20b
library_name: transformers
model_name: gpt-oss-20b-for-reasoning-metric
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gpt-oss-20b-for-reasoning-metric
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sssravi303/gpt-oss-20b-for-reasoning-metric", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
xnftraff/blockassist-bc-sprightly_freckled_deer_1755010284
|
xnftraff
| 2025-08-12T15:06:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sprightly freckled deer",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T15:06:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sprightly freckled deer
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aiface/modernBert-base_v2
|
aiface
| 2025-08-12T15:04:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-12T14:31:45Z |
---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: modernBert-base_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modernBert-base_v2
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7185
- Accuracy: 0.9116
- Precision Macro: 0.8041
- Recall Macro: 0.7362
- F1 Macro: 0.7592
- F1 Weighted: 0.9065
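The card omits an inference snippet; here is a minimal sketch assuming the standard `transformers` text-classification pipeline (label names come from the fine-tune's config, which the card does not document):

```python
# Minimal sketch, assuming the standard text-classification pipeline; labels are
# whatever this fine-tune saved in its config, which the card does not document.
from transformers import pipeline

clf = pipeline("text-classification", model="aiface/modernBert-base_v2")
print(clf("An example sentence to classify."))
```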
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision Macro | Recall Macro | F1 Macro | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:------------:|:--------:|:-----------:|
| 1.2139 | 1.0 | 90 | 0.5024 | 0.8073 | 0.8182 | 0.5993 | 0.6061 | 0.7934 |
| 0.6774 | 2.0 | 180 | 0.2870 | 0.9033 | 0.8421 | 0.7140 | 0.7451 | 0.8960 |
| 0.4571 | 3.0 | 270 | 0.3474 | 0.8920 | 0.8074 | 0.6669 | 0.6824 | 0.8802 |
| 0.2925 | 4.0 | 360 | 0.3089 | 0.9065 | 0.8778 | 0.7074 | 0.7413 | 0.8977 |
| 0.1725 | 5.0 | 450 | 0.3611 | 0.8958 | 0.7729 | 0.7574 | 0.7646 | 0.8946 |
| 0.0977 | 6.0 | 540 | 0.4743 | 0.9090 | 0.8405 | 0.7388 | 0.7695 | 0.9036 |
| 0.0576 | 7.0 | 630 | 0.6044 | 0.8743 | 0.7234 | 0.8019 | 0.7413 | 0.8878 |
| 0.0338 | 8.0 | 720 | 0.6118 | 0.9040 | 0.7756 | 0.7506 | 0.7615 | 0.9019 |
| 0.016 | 9.0 | 810 | 0.6754 | 0.9071 | 0.8334 | 0.7379 | 0.7670 | 0.9019 |
| 0.0113 | 10.0 | 900 | 0.6732 | 0.9065 | 0.7898 | 0.7606 | 0.7733 | 0.9044 |
| 0.0065 | 11.0 | 990 | 0.7871 | 0.9046 | 0.8046 | 0.7277 | 0.7519 | 0.8992 |
| 0.0037 | 12.0 | 1080 | 0.7134 | 0.9109 | 0.7989 | 0.7147 | 0.7386 | 0.9038 |
| 0.0022 | 13.0 | 1170 | 0.7784 | 0.9015 | 0.7765 | 0.7383 | 0.7529 | 0.8982 |
| 0.0013 | 14.0 | 1260 | 0.7176 | 0.9109 | 0.7832 | 0.7486 | 0.7625 | 0.9079 |
| 0.0011 | 15.0 | 1350 | 0.7681 | 0.9059 | 0.7920 | 0.7371 | 0.7565 | 0.9017 |
| 0.0001 | 16.0 | 1440 | 0.7170 | 0.9071 | 0.7833 | 0.7282 | 0.7479 | 0.9024 |
| 0.0007 | 17.0 | 1530 | 0.7219 | 0.9109 | 0.8022 | 0.7442 | 0.7652 | 0.9068 |
| 0.0003 | 18.0 | 1620 | 0.7379 | 0.9103 | 0.7950 | 0.7398 | 0.7596 | 0.9060 |
| 0.0006 | 19.0 | 1710 | 0.7198 | 0.9116 | 0.8074 | 0.7404 | 0.7635 | 0.9068 |
| 0.0004 | 20.0 | 1800 | 0.7185 | 0.9116 | 0.8041 | 0.7362 | 0.7592 | 0.9065 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.7.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
milliarderdol/blockassist-bc-roaring_rough_scorpion_1755009233
|
milliarderdol
| 2025-08-12T15:02:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring rough scorpion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T15:02:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring rough scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755010703
|
Ferdi3425
| 2025-08-12T15:02:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T15:01:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jssaluja/fb-mms-1b-cleaned-jssaluja_rajinder_singh-epochs-3-test-datasets-10-20250812_075152-small
|
jssaluja
| 2025-08-12T15:01:10Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"pan",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-12T14:54:46Z |
---
library_name: transformers
language:
- pan
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- hf-asr-leaderboard
- generated_from_trainer
metrics:
- wer
model-index:
- name: jssaluja/fb-mms-1b-cleaned-jssaluja_rajinder_singh-epochs-3-test-datasets-10-20250812_075152-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/hustler1313/facebook-mms-1b-train/runs/fb-mms-1b-cleaned-jssaluja_rajinder_singh-epochs-3-test-datasets-10-20250812_075152-small)
# jssaluja/fb-mms-1b-cleaned-jssaluja_rajinder_singh-epochs-3-test-datasets-10-20250812_075152-small
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the jssaluja/rajinder_singh dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4047
- Wer: 0.5297
- Wil: 0.7550
- Mer: 0.5194
- Cer: 0.1336
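Since the card gives no inference snippet, here is a minimal sketch assuming the standard `transformers` ASR pipeline for this wav2vec2 checkpoint (the audio path is illustrative):

```python
# Minimal sketch, assuming the standard transformers ASR pipeline; the audio path is illustrative.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jssaluja/fb-mms-1b-cleaned-jssaluja_rajinder_singh-epochs-3-test-datasets-10-20250812_075152-small",
)
print(asr("recording.wav")["text"])  # expects an audio file, e.g. 16 kHz mono WAV
```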
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Wil | Mer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:------:|
| 4.3878 | 1.0 | 76 | 0.4993 | 0.5921 | 0.8041 | 0.5758 | 0.1531 |
| 0.4978 | 2.0 | 152 | 0.4216 | 0.5552 | 0.7741 | 0.5385 | 0.1330 |
| 0.3915 | 3.0 | 228 | 0.4047 | 0.5297 | 0.7550 | 0.5194 | 0.1336 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.4
|
maura121/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prickly_galloping_panda
|
maura121
| 2025-08-12T15:00:17Z | 72 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am prickly_galloping_panda",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T21:44:57Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am prickly_galloping_panda
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
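In the absence of an official snippet, here is a minimal sketch assuming standard `transformers` text generation for this qwen2 checkpoint (everything beyond the repo id is illustrative):

```python
# Minimal sketch, assuming standard transformers usage; no official snippet is provided.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="maura121/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prickly_galloping_panda",
)
print(generator("Hello!", max_new_tokens=32)[0]["generated_text"])
```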
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
node89/blockassist-bc-untamed_tough_hawk_1755010642
|
node89
| 2025-08-12T14:59:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed tough hawk",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T14:59:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed tough hawk
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hientan104/blockassist-bc-extinct_wild_emu_1755009371
|
hientan104
| 2025-08-12T14:58:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"extinct wild emu",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T14:52:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- extinct wild emu
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
armaansidana/mohit
|
armaansidana
| 2025-08-12T14:57:05Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-12T14:21:29Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: mohit
---
# Mohit
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `mohit` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "mohit",
"lora_weights": "https://huggingface.co/armaansidana/mohit/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('armaansidana/mohit', weight_name='lora.safetensors')
image = pipeline('mohit').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 3000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/armaansidana/mohit/discussions) to add images that show off what you’ve made with this LoRA.
|
andr0m4da/blockassist-bc-grazing_hunting_boar_1755010367
|
andr0m4da
| 2025-08-12T14:54:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"grazing hunting boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T14:54:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- grazing hunting boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AirSintez/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-barky_reptilian_sheep
|
AirSintez
| 2025-08-12T14:53:54Z | 62 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am barky_reptilian_sheep",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-26T05:55:59Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am barky_reptilian_sheep
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Nienke5821/Reinforce-Pixelcopter-PLE-v0
|
Nienke5821
| 2025-08-12T14:53:32Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-12T14:53:28Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 25.90 +/- 25.72
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755008457
|
calegpedia
| 2025-08-12T14:48:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stealthy slimy rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T14:48:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
konrud/distilbert-base-uncased-finetuned-imdb
|
konrud
| 2025-08-12T14:45:48Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-08-12T06:28:06Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4451
- Model Preparation Time: 0.0036
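For quick inspection, here is a minimal sketch assuming the standard `transformers` fill-mask pipeline for this checkpoint (the example sentence is illustrative):

```python
# Minimal sketch, assuming the standard fill-mask pipeline; the example sentence is illustrative.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="konrud/distilbert-base-uncased-finetuned-imdb")
for pred in unmasker("This movie was absolutely [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```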
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|
| 2.6767 | 1.0 | 157 | 2.5083 | 0.0036 |
| 2.5718 | 2.0 | 314 | 2.4490 | 0.0036 |
| 2.5362 | 3.0 | 471 | 2.4451 | 0.0036 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
newtts2017/jf8bfi0n
|
newtts2017
| 2025-08-12T14:45:14Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-12T14:32:59Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: jf8bfi0n
---
# Jf8Bfi0N
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `jf8bfi0n` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "jf8bfi0n",
"lora_weights": "https://huggingface.co/newtts2017/jf8bfi0n/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('newtts2017/jf8bfi0n', weight_name='lora.safetensors')
image = pipeline('jf8bfi0n').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/newtts2017/jf8bfi0n/discussions) to add images that show off what you’ve made with this LoRA.
|
digitclone/blockassist-bc-restless_patterned_wallaby_1755009713
|
digitclone
| 2025-08-12T14:44:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"restless patterned wallaby",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T14:44:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- restless patterned wallaby
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
roeker/blockassist-bc-quick_wiry_owl_1755009752
|
roeker
| 2025-08-12T14:43:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T14:43:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0xAgo/blockassist-bc-agile_tough_camel_1755008884
|
0xAgo
| 2025-08-12T14:41:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"agile tough camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T14:41:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- agile tough camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ultramit/blockassist-bc-sturdy_mute_opossum_1755009521
|
ultramit
| 2025-08-12T14:39:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sturdy mute opossum",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T14:39:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sturdy mute opossum
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VANBU/DD
|
VANBU
| 2025-08-12T14:38:40Z | 0 | 0 | null |
[
"license:cc-by-nd-4.0",
"region:us"
] | null | 2025-08-12T14:38:40Z |
---
license: cc-by-nd-4.0
---
|
agurung/v2sft_all_qwen7B_25percent_lr_1e4_allgrad
|
agurung
| 2025-08-12T14:38:36Z | 31 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T19:01:05Z |
---
library_name: transformers
model_name: v2sft_all_qwen7B_25percent_lr_1e4_allgrad
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for v2sft_all_qwen7B_25percent_lr_1e4_allgrad
This model is a fine-tuned version of an unspecified base model.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="agurung/v2sft_all_qwen7B_25percent_lr_1e4_allgrad", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alexgurung/ncp_reasoning_projector/runs/upr91dfy)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.53.3
- Pytorch: 2.7.0+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
knowledgator/gliclass-modern-large-v2.0-init
|
knowledgator
| 2025-08-12T14:37:08Z | 663 | 9 | null |
[
"safetensors",
"GLiClass",
"text classification",
"zero-shot",
"small language models",
"RAG",
"sentiment analysis",
"zero-shot-classification",
"en",
"fr",
"ge",
"dataset:MoritzLaurer/synthetic_zeroshot_mixtral_v0.1",
"dataset:knowledgator/gliclass-v1.0",
"dataset:fancyzhx/amazon_polarity",
"dataset:cnmoro/QuestionClassification",
"dataset:Arsive/toxicity_classification_jigsaw",
"dataset:shishir-dwi/News-Article-Categorization_IAB",
"dataset:SetFit/qnli",
"dataset:nyu-mll/multi_nli",
"dataset:SetFit/student-question-categories",
"dataset:SetFit/tweet_sentiment_extraction",
"dataset:SetFit/hate_speech18",
"dataset:saattrupdan/doc-nli",
"arxiv:2508.07662",
"base_model:answerdotai/ModernBERT-large",
"base_model:finetune:answerdotai/ModernBERT-large",
"license:apache-2.0",
"region:us"
] |
zero-shot-classification
| 2025-02-13T23:17:30Z |
---
license: apache-2.0
datasets:
- MoritzLaurer/synthetic_zeroshot_mixtral_v0.1
- knowledgator/gliclass-v1.0
- fancyzhx/amazon_polarity
- cnmoro/QuestionClassification
- Arsive/toxicity_classification_jigsaw
- shishir-dwi/News-Article-Categorization_IAB
- SetFit/qnli
- nyu-mll/multi_nli
- SetFit/student-question-categories
- SetFit/tweet_sentiment_extraction
- SetFit/hate_speech18
- saattrupdan/doc-nli
language:
- en
- fr
- ge
metrics:
- f1
pipeline_tag: zero-shot-classification
tags:
- text classification
- zero-shot
- small language models
- RAG
- sentiment analysis
base_model:
- answerdotai/ModernBERT-large
---
# ⭐ GLiClass: Generalist and Lightweight Model for Sequence Classification
This is an efficient zero-shot classifier inspired by the [GLiNER](https://github.com/urchade/GLiNER/tree/main) work. It matches cross-encoder performance while being more compute-efficient because classification is done in a single forward pass.
It can be used for `topic classification`, `sentiment analysis` and as a reranker in `RAG` pipelines.
The model was trained on synthetic and licensed data that allow commercial use and can be used in commercial applications.
This version of the model uses a layer-wise selection of features that enables a better understanding of different levels of language. The backbone model is [ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large), which effectively processes long sequences.
### How to use:
First of all, you need to install GLiClass library:
```bash
pip install gliclass
pip install -U "transformers>=4.48.0"
```
Then you need to initialize a model and a pipeline:
```python
from gliclass import GLiClassModel, ZeroShotClassificationPipeline
from transformers import AutoTokenizer
model = GLiClassModel.from_pretrained("knowledgator/gliclass-modern-large-v2.0-init")
tokenizer = AutoTokenizer.from_pretrained("knowledgator/gliclass-modern-large-v2.0-init", add_prefix_space=True)
pipeline = ZeroShotClassificationPipeline(model, tokenizer, classification_type='multi-label', device='cuda:0')
text = "One day I will see the world!"
labels = ["travel", "dreams", "sport", "science", "politics"]
results = pipeline(text, labels, threshold=0.5)[0]  # because we have one text
for result in results:
print(result["label"], "=>", result["score"])
```
If you want to use it for NLI-style tasks, we recommend representing your premise as the text and your hypothesis as a label. You can pass several hypotheses, but the model works best with a single input hypothesis.
```python
# Initialize model and multi-label pipeline
text = "The cat slept on the windowsill all afternoon"
labels = ["The cat was awake and playing outside."]
results = pipeline(text, labels, threshold=0.0)[0]
print(results)
```
### Benchmarks:
Below, you can see the F1 scores on several text classification datasets. None of the tested models were fine-tuned on these datasets; all were evaluated in a zero-shot setting.
| Model | IMDB | AG_NEWS | Emotions |
|-----------------------------|------|---------|----------|
| [gliclass-modern-large-v2.0-init (399 M)](https://huggingface.co/knowledgator/gliclass-modern-large-v2.0-init) | 0.9137 | 0.7357 | 0.4140 |
| [gliclass-modern-base-v2.0-init (151 M)](https://huggingface.co/knowledgator/gliclass-modern-base-v2.0-init) | 0.8264 | 0.6637 | 0.2985 |
| [gliclass-large-v1.0 (438 M)](https://huggingface.co/knowledgator/gliclass-large-v1.0) | 0.9404 | 0.7516 | 0.4874 |
| [gliclass-base-v1.0 (186 M)](https://huggingface.co/knowledgator/gliclass-base-v1.0) | 0.8650 | 0.6837 | 0.4749 |
| [gliclass-small-v1.0 (144 M)](https://huggingface.co/knowledgator/gliclass-small-v1.0) | 0.8650 | 0.6805 | 0.4664 |
| [Bart-large-mnli (407 M)](https://huggingface.co/facebook/bart-large-mnli) | 0.89 | 0.6887 | 0.3765 |
| [Deberta-base-v3 (184 M)](https://huggingface.co/cross-encoder/nli-deberta-v3-base) | 0.85 | 0.6455 | 0.5095 |
| [Comprehendo (184M)](https://huggingface.co/knowledgator/comprehend_it-base) | 0.90 | 0.7982 | 0.5660 |
| SetFit [BAAI/bge-small-en-v1.5 (33.4M)](https://huggingface.co/BAAI/bge-small-en-v1.5) | 0.86 | 0.5636 | 0.5754 |
Below you can find a comparison with other GLiClass models:
| Dataset | gliclass-base-v1.0-init | gliclass-large-v1.0-init | gliclass-modern-base-v2.0-init | gliclass-modern-large-v2.0-init |
|----------------------|-----------------------|-----------------------|---------------------|---------------------|
| CR | 0.8672 | 0.8024 | 0.9041 | 0.8980 |
| sst2 | 0.8342 | 0.8734 | 0.9011 | 0.9434 |
| sst5 | 0.2048 | 0.1638 | 0.1972 | 0.1123 |
| 20_news_groups | 0.2317 | 0.4151 | 0.2448 | 0.2792 |
| spam | 0.5963 | 0.5407 | 0.5074 | 0.6364 |
| financial_phrasebank | 0.3594 | 0.3705 | 0.2537 | 0.2562 |
| imdb | 0.8772 | 0.8836 | 0.8255 | 0.9137 |
| ag_news | 0.5614 | 0.7069 | 0.6050 | 0.6933 |
| emotion | 0.2865 | 0.3840 | 0.2474 | 0.3746 |
| cap_sotu | 0.3966 | 0.4353 | 0.2929 | 0.2919 |
| rotten_tomatoes | 0.6626 | 0.7933 | 0.6630 | 0.5928 |
| **AVERAGE:** | 0.5344 | 0.5790 | 0.5129 | 0.5447 |
Here you can see how the performance of the model grows as more examples are provided:
| Model | Num Examples | sst5 | ag_news | emotion | **AVERAGE:** |
|------------------------------------|------------------|--------|---------|--------------|----------|
| gliclass-modern-large-v2.0-init | 0 | 0.1123 | 0.6933 | 0.3746 | 0.3934 |
| gliclass-modern-large-v2.0-init | 8 | 0.5098 | 0.8339 | 0.5010 | 0.6149 |
| gliclass-modern-large-v2.0-init | Weak Supervision | 0.0951 | 0.6478 | 0.4520 | 0.3983 |
| gliclass-modern-base-v2.0-init | 0 | 0.1972 | 0.6050 | 0.2474 | 0.3499 |
| gliclass-modern-base-v2.0-init | 8 | 0.3604 | 0.7481 | 0.4420 | 0.5168 |
| gliclass-modern-base-v2.0-init | Weak Supervision | 0.1599 | 0.5713 | 0.3216 | 0.3509 |
| gliclass-large-v1.0-init | 0 | 0.1639 | 0.7069 | 0.3840 | 0.4183 |
| gliclass-large-v1.0-init | 8 | 0.4226 | 0.8415 | 0.4886 | 0.5842 |
| gliclass-large-v1.0-init | Weak Supervision | 0.1689 | 0.7051 | 0.4586 | 0.4442 |
| gliclass-base-v1.0-init | 0 | 0.2048 | 0.5614 | 0.2865 | 0.3509 |
| gliclass-base-v1.0-init | 8 | 0.2007 | 0.8359 | 0.4856 | 0.5074 |
| gliclass-base-v1.0-init | Weak Supervision | 0.0681 | 0.6627 | 0.3066 | 0.3458 |
## Citation
```bibtex
@misc{stepanov2025gliclassgeneralistlightweightmodel,
title={GLiClass: Generalist Lightweight Model for Sequence Classification Tasks},
author={Ihor Stepanov and Mykhailo Shtopko and Dmytro Vodianytskyi and Oleksandr Lukashov and Alexander Yavorskyi and Mykyta Yaroshenko},
year={2025},
eprint={2508.07662},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.07662},
}
```
|
wasabuko/blockassist-bc-noisy_zealous_macaw_1755007473
|
wasabuko
| 2025-08-12T14:36:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"noisy zealous macaw",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T14:33:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- noisy zealous macaw
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
knowledgator/gliclass-modern-base-v2.0-init
|
knowledgator
| 2025-08-12T14:36:53Z | 15,385 | 23 | null |
[
"onnx",
"safetensors",
"GLiClass",
"text classification",
"zero-shot",
"small language models",
"RAG",
"sentiment analysis",
"zero-shot-classification",
"en",
"fr",
"ge",
"dataset:MoritzLaurer/synthetic_zeroshot_mixtral_v0.1",
"dataset:knowledgator/gliclass-v1.0",
"dataset:fancyzhx/amazon_polarity",
"dataset:cnmoro/QuestionClassification",
"dataset:Arsive/toxicity_classification_jigsaw",
"dataset:shishir-dwi/News-Article-Categorization_IAB",
"dataset:SetFit/qnli",
"dataset:nyu-mll/multi_nli",
"dataset:SetFit/student-question-categories",
"dataset:SetFit/tweet_sentiment_extraction",
"dataset:SetFit/hate_speech18",
"dataset:saattrupdan/doc-nli",
"arxiv:2508.07662",
"base_model:answerdotai/ModernBERT-base",
"base_model:quantized:answerdotai/ModernBERT-base",
"license:apache-2.0",
"region:us"
] |
zero-shot-classification
| 2025-02-13T12:45:44Z |
---
license: apache-2.0
datasets:
- MoritzLaurer/synthetic_zeroshot_mixtral_v0.1
- knowledgator/gliclass-v1.0
- fancyzhx/amazon_polarity
- cnmoro/QuestionClassification
- Arsive/toxicity_classification_jigsaw
- shishir-dwi/News-Article-Categorization_IAB
- SetFit/qnli
- nyu-mll/multi_nli
- SetFit/student-question-categories
- SetFit/tweet_sentiment_extraction
- SetFit/hate_speech18
- saattrupdan/doc-nli
language:
- en
- fr
- ge
metrics:
- f1
pipeline_tag: zero-shot-classification
tags:
- text classification
- zero-shot
- small language models
- RAG
- sentiment analysis
base_model:
- answerdotai/ModernBERT-base
---
# ⭐ [GLiClass](https://github.com/Knowledgator/GLiClass): Generalist and Lightweight Model for Sequence Classification
This is an efficient zero-shot classifier inspired by the [GLiNER](https://github.com/urchade/GLiNER/tree/main) work. It demonstrates the same performance as a cross-encoder while being more compute-efficient because classification is done in a single forward pass.
It can be used for `topic classification`, `sentiment analysis` and as a reranker in `RAG` pipelines.
The model was trained on synthetic and licensed data that allow commercial use and can be used in commercial applications.
This version of the model uses a layer-wise selection of features that enables a better understanding of different levels of language. The backbone model is [ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base), which effectively processes long sequences.
### How to use:
First of all, you need to install the GLiClass library:
```bash
pip install gliclass
pip install -U 'transformers>=4.48.0'
```
Then you need to initialize a model and a pipeline:
```python
from gliclass import GLiClassModel, ZeroShotClassificationPipeline
from transformers import AutoTokenizer
model = GLiClassModel.from_pretrained("knowledgator/gliclass-modern-base-v2.0-init")
tokenizer = AutoTokenizer.from_pretrained("knowledgator/gliclass-modern-base-v2.0-init", add_prefix_space=True)
pipeline = ZeroShotClassificationPipeline(model, tokenizer, classification_type='multi-label', device='cuda:0')
text = "One day I will see the world!"
labels = ["travel", "dreams", "sport", "science", "politics"]
results = pipeline(text, labels, threshold=0.5)[0]  # take the first element because we passed a single text
for result in results:
print(result["label"], "=>", result["score"])
```
If you want to use it for NLI-style tasks, we recommend representing your premise as the text and your hypothesis as a label. You can provide several hypotheses, but the model works best with a single input hypothesis.
```python
# Initialize model and multi-label pipeline
text = "The cat slept on the windowsill all afternoon"
labels = ["The cat was awake and playing outside."]
results = pipeline(text, labels, threshold=0.0)[0]
print(results)
```
### Benchmarks:
Below, you can see the F1 scores on several text classification datasets. None of the tested models were fine-tuned on these datasets; all were evaluated in a zero-shot setting.
| Model | IMDB | AG_NEWS | Emotions |
|-----------------------------|------|---------|----------|
| [gliclass-modern-large-v2.0-init (399 M)](https://huggingface.co/knowledgator/gliclass-modern-large-v2.0-init) | 0.9137 | 0.7357 | 0.4140 |
| [gliclass-modern-base-v2.0-init (151 M)](https://huggingface.co/knowledgator/gliclass-modern-base-v2.0-init) | 0.8264 | 0.6637 | 0.2985 |
| [gliclass-large-v1.0 (438 M)](https://huggingface.co/knowledgator/gliclass-large-v1.0) | 0.9404 | 0.7516 | 0.4874 |
| [gliclass-base-v1.0 (186 M)](https://huggingface.co/knowledgator/gliclass-base-v1.0) | 0.8650 | 0.6837 | 0.4749 |
| [gliclass-small-v1.0 (144 M)](https://huggingface.co/knowledgator/gliclass-small-v1.0) | 0.8650 | 0.6805 | 0.4664 |
| [Bart-large-mnli (407 M)](https://huggingface.co/facebook/bart-large-mnli) | 0.89 | 0.6887 | 0.3765 |
| [Deberta-base-v3 (184 M)](https://huggingface.co/cross-encoder/nli-deberta-v3-base) | 0.85 | 0.6455 | 0.5095 |
| [Comprehendo (184M)](https://huggingface.co/knowledgator/comprehend_it-base) | 0.90 | 0.7982 | 0.5660 |
| SetFit [BAAI/bge-small-en-v1.5 (33.4M)](https://huggingface.co/BAAI/bge-small-en-v1.5) | 0.86 | 0.5636 | 0.5754 |
Below you can find a comparison with other GLiClass models:
| Dataset | gliclass-base-v1.0-init | gliclass-large-v1.0-init | gliclass-modern-base-v2.0-init | gliclass-modern-large-v2.0-init |
|----------------------|-----------------------|-----------------------|---------------------|---------------------|
| CR | 0.8672 | 0.8024 | 0.9041 | 0.8980 |
| sst2 | 0.8342 | 0.8734 | 0.9011 | 0.9434 |
| sst5 | 0.2048 | 0.1638 | 0.1972 | 0.1123 |
| 20_news_groups | 0.2317 | 0.4151 | 0.2448 | 0.2792 |
| spam | 0.5963 | 0.5407 | 0.5074 | 0.6364 |
| financial_phrasebank | 0.3594 | 0.3705 | 0.2537 | 0.2562 |
| imdb | 0.8772 | 0.8836 | 0.8255 | 0.9137 |
| ag_news | 0.5614 | 0.7069 | 0.6050 | 0.6933 |
| emotion | 0.2865 | 0.3840 | 0.2474 | 0.3746 |
| cap_sotu | 0.3966 | 0.4353 | 0.2929 | 0.2919 |
| rotten_tomatoes | 0.6626 | 0.7933 | 0.6630 | 0.5928 |
| **AVERAGE:** | 0.5344 | 0.5790 | 0.5129 | 0.5447 |
Here you can see how the performance of the model grows as more examples are provided:
| Model | Num Examples | sst5 | ag_news | emotion | **AVERAGE:** |
|------------------------------------|------------------|--------|---------|--------------|----------|
| gliclass-modern-large-v2.0-init | 0 | 0.1123 | 0.6933 | 0.3746 | 0.3934 |
| gliclass-modern-large-v2.0-init | 8 | 0.5098 | 0.8339 | 0.5010 | 0.6149 |
| gliclass-modern-large-v2.0-init | Weak Supervision | 0.0951 | 0.6478 | 0.4520 | 0.3983 |
| gliclass-modern-base-v2.0-init | 0 | 0.1972 | 0.6050 | 0.2474 | 0.3499 |
| gliclass-modern-base-v2.0-init | 8 | 0.3604 | 0.7481 | 0.4420 | 0.5168 |
| gliclass-modern-base-v2.0-init | Weak Supervision | 0.1599 | 0.5713 | 0.3216 | 0.3509 |
| gliclass-large-v1.0-init | 0 | 0.1639 | 0.7069 | 0.3840 | 0.4183 |
| gliclass-large-v1.0-init | 8 | 0.4226 | 0.8415 | 0.4886 | 0.5842 |
| gliclass-large-v1.0-init | Weak Supervision | 0.1689 | 0.7051 | 0.4586 | 0.4442 |
| gliclass-base-v1.0-init | 0 | 0.2048 | 0.5614 | 0.2865 | 0.3509 |
| gliclass-base-v1.0-init | 8 | 0.2007 | 0.8359 | 0.4856 | 0.5074 |
| gliclass-base-v1.0-init | Weak Supervision | 0.0681 | 0.6627 | 0.3066 | 0.3458 |
## Citation
```bibtex
@misc{stepanov2025gliclassgeneralistlightweightmodel,
title={GLiClass: Generalist Lightweight Model for Sequence Classification Tasks},
author={Ihor Stepanov and Mykhailo Shtopko and Dmytro Vodianytskyi and Oleksandr Lukashov and Alexander Yavorskyi and Mykyta Yaroshenko},
year={2025},
eprint={2508.07662},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.07662},
}
```
|
MarcosBarrera/gemma-product-description
|
MarcosBarrera
| 2025-08-12T14:36:49Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-4b-pt",
"base_model:finetune:google/gemma-3-4b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-07-23T16:52:34Z |
---
base_model: google/gemma-3-4b-pt
library_name: transformers
model_name: gemma-product-description
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-product-description
This model is a fine-tuned version of [google/gemma-3-4b-pt](https://huggingface.co/google/gemma-3-4b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="MarcosBarrera/gemma-product-description", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
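For reference, here is a minimal sketch of the kind of TRL SFT setup this implies; the dataset and hyperparameters below are illustrative assumptions, not the actual training configuration:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Illustrative dataset; the actual training data is not documented in this card
dataset = load_dataset("trl-lib/Capybara", split="train")

training_args = SFTConfig(
    output_dir="gemma-product-description",
    per_device_train_batch_size=2,  # assumption
    num_train_epochs=1,             # assumption
)

trainer = SFTTrainer(
    model="google/gemma-3-4b-pt",  # base model from this card
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```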
### Framework versions
- TRL: 0.15.2
- Transformers: 4.55.0
- Pytorch: 2.6.0+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
knowledgator/gliclass-base-v2.0-rac-init
|
knowledgator
| 2025-08-12T14:36:31Z | 1,952 | 9 | null |
[
"safetensors",
"GLiClass",
"text classification",
"zero-shot",
"small language models",
"RAG",
"sentiment analysis",
"zero-shot-classification",
"en",
"fr",
"ge",
"dataset:MoritzLaurer/synthetic_zeroshot_mixtral_v0.1",
"dataset:knowledgator/gliclass-v1.0",
"dataset:fancyzhx/amazon_polarity",
"dataset:cnmoro/QuestionClassification",
"dataset:Arsive/toxicity_classification_jigsaw",
"dataset:shishir-dwi/News-Article-Categorization_IAB",
"dataset:SetFit/qnli",
"dataset:nyu-mll/multi_nli",
"dataset:SetFit/student-question-categories",
"dataset:SetFit/tweet_sentiment_extraction",
"dataset:SetFit/hate_speech18",
"dataset:saattrupdan/doc-nli",
"dataset:knowledgator/gliclass-v2.0-RAC",
"arxiv:2508.07662",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:apache-2.0",
"region:us"
] |
zero-shot-classification
| 2025-02-17T12:41:55Z |
---
license: apache-2.0
datasets:
- MoritzLaurer/synthetic_zeroshot_mixtral_v0.1
- knowledgator/gliclass-v1.0
- fancyzhx/amazon_polarity
- cnmoro/QuestionClassification
- Arsive/toxicity_classification_jigsaw
- shishir-dwi/News-Article-Categorization_IAB
- SetFit/qnli
- nyu-mll/multi_nli
- SetFit/student-question-categories
- SetFit/tweet_sentiment_extraction
- SetFit/hate_speech18
- saattrupdan/doc-nli
- knowledgator/gliclass-v2.0-RAC
language:
- en
- fr
- ge
metrics:
- f1
pipeline_tag: zero-shot-classification
tags:
- text classification
- zero-shot
- small language models
- RAG
- sentiment analysis
base_model:
- microsoft/deberta-v3-base
---
# ⭐ GLiClass: Generalist and Lightweight Model for Sequence Classification
This is an efficient zero-shot classifier inspired by the [GLiNER](https://github.com/urchade/GLiNER/tree/main) work. It demonstrates the same performance as a cross-encoder while being more compute-efficient because classification is done in a single forward pass.
It can be used for `topic classification`, `sentiment analysis` and as a reranker in `RAG` pipelines.
The model was trained on synthetic and licensed data that allow commercial use and can be used in commercial applications.
This version of the model uses a layer-wise selection of features that enables a better understanding of different levels of language. The backbone model is [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base).
### Retrieval-augmented Classification (RAC):
The main idea of this model is to utilize information from semantically similar examples to enhance predictions at inference time. Our tests showed that providing the model with at least one example from the training dataset, retrieved by semantic similarity, can raise the F1 score from 0.3090 to 0.4275, and in some cases from 0.2594 up to 0.6249. Moreover, the RAC approach with 2 provided examples reaches an F1 score of 0.4707, close to the 0.4838 obtained by fine-tuning with 8 examples per label.
### RAC dataset generation strategy:


To further enhance classification performance, we generated a Retrieval-Augmented Classification (RAC) dataset. Each text example in the gliclass-v2.0 dataset was encoded using the paraphrase-MiniLM-L6-v2 sentence transformer and indexed in an HNSW (Hierarchical Navigable Small World) database. For 250k randomly selected samples, we retrieved up to the three most similar examples (cosine similarity > 0.5) from the dataset.
During augmentation:
- The number of retrieved examples per sample was randomly chosen between 1 and 3.
- 30% of retrieved examples were replaced with random, unrelated examples to introduce controlled noise.
- If true labels were present in a retrieved example, false labels were removed with a 50% probability to balance information clarity.
Each retrieved example was formatted using structured ```<<EXAMPLE>> ... <</EXAMPLE>>``` tags, where:
- True labels were explicitly marked as ```<<TRUE_LABEL>> {label}```.
- False labels were marked as ```<<FALSE_LABEL>> {label}```, unless removed.
For each of the 250k randomly selected examples, the “text” was modified as ```{original_text} <<EXAMPLE>> {retrieved_text} {true_labels_str} {false_labels_str} <</EXAMPLE>>...```
Where:
- ```{original_text}``` is the original example text.
- ```{retrieved_text}``` is a similar or randomly selected example.
- ```{true_labels_str}``` contains true labels formatted as ```<<TRUE_LABEL>> {label}```.
- ```{false_labels_str}``` contains false labels formatted as ```<<FALSE_LABEL>> {label}``` (unless removed with 50% probability).
Such a strategy allows the model to learn how to utilize the provided information without overfocusing on RAC examples. With both relevant and randomly retrieved examples, the dataset maintains a balance between useful contextual information and controlled noise. This ensures that the model does not become overly reliant on retrieval-augmented inputs while still benefiting from additional context when available.
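A simplified sketch of this augmentation procedure, assuming `samples` is a list of dicts with `text`, `true_labels`, and `false_labels` keys; this illustrates the described pipeline rather than the exact code used:
```python
import random

import hnswlib
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("paraphrase-MiniLM-L6-v2")
texts = [s["text"] for s in samples]
embeddings = encoder.encode(texts, normalize_embeddings=True)

# HNSW index over cosine distance (hnswlib returns distance = 1 - cosine similarity)
index = hnswlib.Index(space="cosine", dim=embeddings.shape[1])
index.init_index(max_elements=len(texts))
index.add_items(embeddings, list(range(len(texts))))

def augment(i: int) -> str:
    ids, dists = index.knn_query(embeddings[i], k=4)
    # drop the self-match and keep neighbours with cosine similarity > 0.5
    neighbours = [j for j, d in zip(ids[0], dists[0]) if j != i and 1 - d > 0.5]
    chosen = neighbours[: random.randint(1, 3)]
    # 30% of retrieved examples are replaced with random, unrelated ones
    chosen = [random.randrange(len(texts)) if random.random() < 0.3 else j for j in chosen]
    parts = [samples[i]["text"]]
    for j in chosen:
        true_str = " ".join(f"<<TRUE_LABEL>> {l}" for l in samples[j]["true_labels"])
        # false labels are dropped with 50% probability to balance information clarity
        false = [] if random.random() < 0.5 else samples[j]["false_labels"]
        false_str = " ".join(f"<<FALSE_LABEL>> {l}" for l in false)
        parts.append(f"<<EXAMPLE>> {samples[j]['text']} {true_str} {false_str} <</EXAMPLE>>")
    return " ".join(parts)
```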
### How to use:
First of all, you need to install the GLiClass library:
```bash
pip install gliclass
```
Then you need to initialize a model and a pipeline:
```python
from gliclass import GLiClassModel, ZeroShotClassificationPipeline
from transformers import AutoTokenizer
model = GLiClassModel.from_pretrained("knowledgator/gliclass-base-v2.0-rac-init")
tokenizer = AutoTokenizer.from_pretrained("knowledgator/gliclass-base-v2.0-rac-init")
pipeline = ZeroShotClassificationPipeline(model, tokenizer, classification_type='multi-label', device='cuda:0')
text = "One day I will see the world!"
labels = ["travel", "dreams", "sport", "science", "politics"]
results = pipeline(text, labels, threshold=0.5)[0]  # take the first element because we passed a single text
for result in results:
print(result["label"], "=>", result["score"])
```
To use with one **RAC** example:
```python
example_1 = {
"text": "A recently developed machine learning platform offers robust automation for complex data analysis workflows. While it enhances productivity, users have reported difficulties in integrating it with their current data infrastructure and a need for better documentation.",
"all_labels": ["AI", "automation", "data_analysis", "usability", "integration"],
"true_labels": ["AI", "integration", 'automation']
}
text = "The new AI-powered tool streamlines data analysis by automating repetitive tasks, improving efficiency for data scientists. However, its steep learning curve and limited integration with existing platforms pose challenges for widespread adoption."
labels = ["AI", "automation", "data_analysis", "usability", "integration"]
results = pipeline(text, labels, threshold=0.1, rac_examples=[example_1])[0]
for predict in results:
print(predict["label"], " - ", predict["score"])
```
To use with several **RAC** examples:
```python
example_1 = {
"text": "A recently developed machine learning platform offers robust automation for complex data analysis workflows. While it enhances productivity, users have reported difficulties in integrating it with their current data infrastructure and a need for better documentation.",
"all_labels": ["AI", "automation", "data_analysis", "usability", "integration"],
"true_labels": ["AI", "integration", 'automation']
}
example_2 = {
"text": "A cloud-based analytics tool leverages artificial intelligence to provide real-time insights. It significantly improves workflow efficiency but struggles with compatibility across different enterprise systems, requiring additional customization efforts.",
"all_labels": ["AI", "automation", "data_analysis", "usability", "integration"],
"true_labels": ["AI", "integration", "data_analysis"]
}
text = "The new AI-powered tool streamlines data analysis by automating repetitive tasks, improving efficiency for data scientists. However, its steep learning curve and limited integration with existing platforms pose challenges for widespread adoption."
labels = ["AI", "automation", "data_analysis", "usability", "integration"]
results = pipeline(text, labels, threshold=0.1, rac_examples=[example_1, example_2])[0]
for predict in results:
print(predict["label"], " - ", predict["score"])
```
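RAC examples can also be retrieved dynamically at inference time from a small labelled pool. A minimal sketch using sentence-transformers, reusing `example_1`, `example_2`, `text`, and `labels` from above (the similarity threshold here is an assumption):
```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("paraphrase-MiniLM-L6-v2")

pool = [example_1, example_2]  # small labelled pool to retrieve from
pool_emb = encoder.encode([ex["text"] for ex in pool], convert_to_tensor=True)
query_emb = encoder.encode(text, convert_to_tensor=True)

scores = util.cos_sim(query_emb, pool_emb)[0]
top = scores.argsort(descending=True)[:2].tolist()
rac_examples = [pool[i] for i in top if scores[i] > 0.5]  # keep only sufficiently similar examples

results = pipeline(text, labels, threshold=0.1, rac_examples=rac_examples)[0]
for predict in results:
    print(predict["label"], " - ", predict["score"])
```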
If you want to use it for NLI-style tasks, we recommend representing your premise as the text and your hypothesis as a label. You can provide several hypotheses, but the model works best with a single input hypothesis.
```python
# Initialize model and multi-label pipeline
text = "The cat slept on the windowsill all afternoon"
labels = ["The cat was awake and playing outside."]
results = pipeline(text, labels, threshold=0.0)[0]
print(results)
```
### Benchmarks:
Below, you can find a comparison with other GLiClass models:
| Dataset | gliclass-base-v1.0-init | gliclass-large-v1.0-init | gliclass-modern-base-v2.0-init | gliclass-modern-large-v2.0-init | gliclass-base-v2.0-rac-init |
|----------------------|-----------------------|-----------------------|---------------------|---------------------|---------------------|
| CR | 0.8672 | 0.8024 | 0.9041 | 0.8980 | 0.7852 |
| sst2 | 0.8342 | 0.8734 | 0.9011 | 0.9434 | 0.8610 |
| sst5 | 0.2048 | 0.1638 | 0.1972 | 0.1123 | 0.0598 |
| 20_news_groups | 0.2317 | 0.4151 | 0.2448 | 0.2792 | 0.4007 |
| spam | 0.5963 | 0.5407 | 0.5074 | 0.6364 | 0.6739 |
| financial_phrasebank | 0.3594 | 0.3705 | 0.2537 | 0.2562 | 0.2537 |
| imdb | 0.8772 | 0.8836 | 0.8255 | 0.9137 | 0.8716 |
| ag_news | 0.5614 | 0.7069 | 0.6050 | 0.6933 | 0.6759 |
| emotion | 0.2865 | 0.3840 | 0.2474 | 0.3746 | 0.4160 |
| cap_sotu | 0.3966 | 0.4353 | 0.2929 | 0.2919 | 0.3871 |
| rotten_tomatoes | 0.6626 | 0.7933 | 0.6630 | 0.5928 | 0.7739 |
| **AVERAGE:** | 0.5344 | 0.5790 | 0.5129 | 0.5447 | 0.5598 |
Here you can see how the performance of the model grows as more **RAC** examples are provided:
| Dataset | 0 examples | 1 example | 2 examples | 3 examples |
|-------------------------------------|------------|------------|------------|------------|
| cap_sotu | 0.3857 | 0.4665 | 0.4935 | 0.4847 |
| cap_sotu (8 examples) | 0.4938 | 0.5097 | 0.4976 | 0.4894 |
| cap_sotu (Weak Supervision - 8) | 0.4319 | 0.4764 | 0.4488 | 0.4465 |
| dair-ai_emotion | 0.4472 | 0.5505 | 0.5619 | 0.5705 |
| dair-ai_emotion (8 examples) | 0.5088 | 0.5630 | 0.5623 | 0.5740 |
| dair-ai_emotion (Weak Supervision - 8) | 0.4187 | 0.5479 | 0.5693 | 0.5828 |
| ag_news | 0.6791 | 0.8507 | 0.8717 | 0.8866 |
| ag_news (8 examples) | 0.8496 | 0.9002 | 0.9072 | 0.9091 |
| ag_news (Weak Supervision - 8) | 0.6546 | 0.8623 | 0.8841 | 0.8978 |
| sst5 | 0.0599 | 0.0675 | 0.1163 | 0.1267 |
| sst5 (8 examples) | 0.2887 | 0.2690 | 0.2642 | 0.2394 |
| sst5 (Weak Supervision - 8) | 0.0744 | 0.2780 | 0.2897 | 0.2912 |
| ScienceQA | 0.1142 | 0.4035 | 0.4534 | 0.4495 |
| ScienceQA (8 examples) | 0.6493 | 0.6547 | 0.6956 | 0.6770 |
| ScienceQA (Weak Supervision - 8) | 0.2987 | 0.5919 | 0.5998 | 0.5674 |
| Malicious_code_classification | 0.3717 | 0.6260 | 0.9672 | 0.9788 |
| Malicious_code_classification (8 examples) | 0.8444 | 0.9722 | 0.9788 | 0.9772 |
| Malicious_code_classification (Weak Supervision - 8) | 0.3745 | 0.9216 | 0.9788 | 0.9772 |
| twitter-financial-news-topic | 0.2594 | 0.6249 | 0.6408 | 0.6427 |
| twitter-financial-news-topic (8 examples) | 0.6137 | 0.7072 | 0.7099 | 0.6948 |
| twitter-financial-news-topic (Weak Supervision - 8) | 0.4032 | 0.6651 | 0.6316 | 0.6114 |
| 20_newsgroups | 0.3211 | 0.1339 | 0.0906 | 0.1005 |
| 20_newsgroups (8 examples) | 0.0959 | 0.0657 | 0.0440 | 0.0445 |
| 20_newsgroups (Weak Supervision - 8) | 0.4765 | 0.1035 | 0.0775 | 0.0777 |
| ChemProt | 0.2024 | 0.1911 | 0.1568 | 0.1329 |
| ChemProt (8 examples) | 0.2985 | 0.3479 | 0.3636 | 0.3538 |
| ChemProt (Weak Supervision - 8) | 0.2369 | 0.2067 | 0.1911 | 0.1780 |
| **AVERAGE:** | **0 examples** | **1 example** | **2 examples** | **3 examples** |
|-------------------------------------|---------------|---------------|---------------|---------------|
| Standard | 0.3090 | 0.4275 | 0.4707 | 0.4718 |
| 8 examples | 0.4838 | 0.5245 | 0.5288 | 0.5244 |
| Weak Supervision - 8 | 0.3661 | 0.4862 | 0.4868 | 0.4821 |
Here you can see how the performance of the model grows as more examples are provided, in comparison to other models:
| Model | Num Examples | sst5 | ag_news | emotion | **AVERAGE:** |
|------------------------------------|------------------|--------|---------|--------------|----------|
| gliclass-base-v2.0-rac-init | 0 | 0.0599 | 0.6791 | 0.4472 | 0.3934 |
| gliclass-base-v2.0-rac-init | 8 | 0.2887 | 0.8496 | 0.5088 | 0.6149 |
| gliclass-base-v2.0-rac-init | Weak Supervision | 0.0744 | 0.6546 | 0.4187 | 0.3983 |
| gliclass-modern-large-v2.0-init | 0 | 0.1123 | 0.6933 | 0.3746 | 0.3934 |
| gliclass-modern-large-v2.0-init | 8 | 0.5098 | 0.8339 | 0.5010 | 0.6149 |
| gliclass-modern-large-v2.0-init | Weak Supervision | 0.0951 | 0.6478 | 0.4520 | 0.3983 |
| gliclass-modern-base-v2.0-init | 0 | 0.1972 | 0.6050 | 0.2474 | 0.3499 |
| gliclass-modern-base-v2.0-init | 8 | 0.3604 | 0.7481 | 0.4420 | 0.5168 |
| gliclass-modern-base-v2.0-init | Weak Supervision | 0.1599 | 0.5713 | 0.3216 | 0.3509 |
| gliclass-large-v1.0-init | 0 | 0.1639 | 0.7069 | 0.3840 | 0.4183 |
| gliclass-large-v1.0-init | 8 | 0.4226 | 0.8415 | 0.4886 | 0.5842 |
| gliclass-large-v1.0-init | Weak Supervision | 0.1689 | 0.7051 | 0.4586 | 0.4442 |
| gliclass-base-v1.0-init | 0 | 0.2048 | 0.5614 | 0.2865 | 0.3509 |
| gliclass-base-v1.0-init | 8 | 0.2007 | 0.8359 | 0.4856 | 0.5074 |
| gliclass-base-v1.0-init | Weak Supervision | 0.0681 | 0.6627 | 0.3066 | 0.3458 |
## Citation
```bibtex
@misc{stepanov2025gliclassgeneralistlightweightmodel,
title={GLiClass: Generalist Lightweight Model for Sequence Classification Tasks},
author={Ihor Stepanov and Mykhailo Shtopko and Dmytro Vodianytskyi and Oleksandr Lukashov and Alexander Yavorskyi and Mykyta Yaroshenko},
year={2025},
eprint={2508.07662},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.07662},
}
```
|
knowledgator/gliclass-modern-base-v2.0
|
knowledgator
| 2025-08-12T14:36:13Z | 1,242 | 1 | null |
[
"safetensors",
"GLiClass",
"dataset:knowledgator/events_classification_biotech",
"dataset:knowledgator/Scientific-text-classification",
"arxiv:2508.07662",
"license:apache-2.0",
"region:us"
] | null | 2025-03-27T12:48:20Z |
---
license: apache-2.0
datasets:
- knowledgator/events_classification_biotech
- knowledgator/Scientific-text-classification
---
# ⭐ GLiClass: Generalist and Lightweight Model for Sequence Classification
This is an efficient zero-shot classifier inspired by the [GLiNER](https://github.com/urchade/GLiNER/tree/main) work. It demonstrates the same performance as a cross-encoder while being more compute-efficient because classification is done in a single forward pass.
It can be used for `topic classification`, `sentiment analysis` and as a reranker in `RAG` pipelines.
The model was trained on synthetic and licensed data that allow commercial use and can be used in commercial applications.
This version of the model uses a layer-wise selection of features that enables a better understanding of different levels of language. The backbone model is [ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base), which effectively processes long sequences.
The model was fine-tuned using a new RL-based approach to classification, with F1 and recall rewards.
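The exact reward formulation is not documented here, but as an illustration, a per-sample reward combining F1 and recall over predicted vs. gold label sets could look like this (a hypothetical sketch, not the authors' implementation; the weights are assumptions):
```python
def classification_reward(predicted: set, gold: set, w_f1: float = 0.5, w_recall: float = 0.5) -> float:
    """Illustrative F1 + recall reward for one sample; the weighting is an assumption."""
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return w_f1 * f1 + w_recall * recall

# Two of three gold labels predicted, plus one spurious label
print(classification_reward({"AI", "automation", "usability"}, {"AI", "automation", "integration"}))
```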
### How to use:
First of all, you need to install the GLiClass library:
```bash
pip install gliclass
pip install -U 'transformers>=4.48.0'
```
Then you need to initialize a model and a pipeline:
```python
from gliclass import GLiClassModel, ZeroShotClassificationPipeline
from transformers import AutoTokenizer
model = GLiClassModel.from_pretrained("knowledgator/gliclass-modern-base-v2.0")
tokenizer = AutoTokenizer.from_pretrained("knowledgator/gliclass-modern-base-v2.0", add_prefix_space=True)
pipeline = ZeroShotClassificationPipeline(model, tokenizer, classification_type='multi-label', device='cuda:0')
text = "One day I will see the world!"
labels = ["travel", "dreams", "sport", "science", "politics"]
results = pipeline(text, labels, threshold=0.5)[0]  # take the first element because we passed a single text
for result in results:
print(result["label"], "=>", result["score"])
```
If you want to use it for NLI-style tasks, we recommend representing your premise as the text and your hypothesis as a label. You can provide several hypotheses, but the model works best with a single input hypothesis.
```python
# Initialize model and multi-label pipeline
text = "The cat slept on the windowsill all afternoon"
labels = ["The cat was awake and playing outside."]
results = pipeline(text, labels, threshold=0.0)[0]
print(results)
```
### Benchmarks:
Below, you can see the F1 scores on several text classification datasets. None of the tested models were fine-tuned on these datasets; all were evaluated in a zero-shot setting.
| Model | IMDB | AG_NEWS | Emotions |
|-----------------------------|------|---------|----------|
| [gliclass-modern-large-v2.0-init (399 M)](https://huggingface.co/knowledgator/gliclass-modern-large-v2.0-init) | 0.9137 | 0.7357 | 0.4140 |
| [gliclass-modern-base-v2.0-init (151 M)](https://huggingface.co/knowledgator/gliclass-modern-base-v2.0-init) | 0.8264 | 0.6637 | 0.2985 |
| [gliclass-modern-large-v2.0 (399 M)](https://huggingface.co/knowledgator/gliclass-modern-large-v2.0) | 0.9448 | 0.736 | 0.4970 |
| [gliclass-modern-base-v2.0 (151 M)](https://huggingface.co/knowledgator/gliclass-modern-base-v2.0) | 0.9188 | 0.7089 | 0.4250 |
| [gliclass-large-v1.0 (438 M)](https://huggingface.co/knowledgator/gliclass-large-v1.0) | 0.9404 | 0.7516 | 0.4874 |
| [gliclass-base-v1.0 (186 M)](https://huggingface.co/knowledgator/gliclass-base-v1.0) | 0.8650 | 0.6837 | 0.4749 |
| [gliclass-small-v1.0 (144 M)](https://huggingface.co/knowledgator/gliclass-small-v1.0) | 0.8650 | 0.6805 | 0.4664 |
| [Bart-large-mnli (407 M)](https://huggingface.co/facebook/bart-large-mnli) | 0.89 | 0.6887 | 0.3765 |
| [Deberta-base-v3 (184 M)](https://huggingface.co/cross-encoder/nli-deberta-v3-base) | 0.85 | 0.6455 | 0.5095 |
| [Comprehendo (184M)](https://huggingface.co/knowledgator/comprehend_it-base) | 0.90 | 0.7982 | 0.5660 |
| SetFit [BAAI/bge-small-en-v1.5 (33.4M)](https://huggingface.co/BAAI/bge-small-en-v1.5) | 0.86 | 0.5636 | 0.5754 |
Below you can find a comparison with other GLiClass models:
| Dataset | gliclass-modern-base-v2.0 | gliclass-modern-large-v2.0 | gliclass-modern-base-v2.0-init | gliclass-modern-large-v2.0-init |
|----------------------|-----------------------|-----------------------|---------------------|---------------------|
| CR | 0.8976 | 0.9198 | 0.9041 | 0.8980 |
| sst2 | 0.8525 | 0.9318 | 0.9011 | 0.9434 |
| sst5 | 0.2348 | 0.2147 | 0.1972 | 0.1123 |
| 20_news_groups | 0.351 | 0.3755 | 0.2448 | 0.2792 |
| spam | 0.483 | 0.6608 | 0.5074 | 0.6364 |
| financial_phrasebank | 0.3475 | 0.3157 | 0.2537 | 0.2562 |
| imdb | 0.9188 | 0.9448 | 0.8255 | 0.9137 |
| ag_news | 0.6835 | 0.7025 | 0.6050 | 0.6933 |
| emotion | 0.3925 | 0.4325 | 0.2474 | 0.3746 |
| cap_sotu | 0.3725 | 0.4157 | 0.2929 | 0.2919 |
| rotten_tomatoes | 0.6955 | 0.7357 | 0.6630 | 0.5928 |
| **AVERAGE:** | 0.5563 | 0.6045 | 0.5129 | 0.5447 |
## Citation
```bibtex
@misc{stepanov2025gliclassgeneralistlightweightmodel,
title={GLiClass: Generalist Lightweight Model for Sequence Classification Tasks},
author={Ihor Stepanov and Mykhailo Shtopko and Dmytro Vodianytskyi and Oleksandr Lukashov and Alexander Yavorskyi and Mykyta Yaroshenko},
year={2025},
eprint={2508.07662},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.07662},
}
```
|
knowledgator/gliclass-modern-large-v2.0
|
knowledgator
| 2025-08-12T14:35:58Z | 4,946 | 2 | null |
[
"safetensors",
"GLiClass",
"dataset:knowledgator/events_classification_biotech",
"dataset:knowledgator/Scientific-text-classification",
"arxiv:2508.07662",
"license:apache-2.0",
"region:us"
] | null | 2025-03-27T12:48:38Z |
---
license: apache-2.0
datasets:
- knowledgator/events_classification_biotech
- knowledgator/Scientific-text-classification
---
# ⭐ GLiClass: Generalist and Lightweight Model for Sequence Classification
This is an efficient zero-shot classifier inspired by the [GLiNER](https://github.com/urchade/GLiNER/tree/main) work. It demonstrates the same performance as a cross-encoder while being more compute-efficient because classification is done in a single forward pass.
It can be used for `topic classification`, `sentiment analysis` and as a reranker in `RAG` pipelines.
The model was trained on synthetic and licensed data that allow commercial use and can be used in commercial applications.
This version of the model uses a layer-wise selection of features that enables a better understanding of different levels of language. The backbone model is [ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large), which effectively processes long sequences.
The model was fine-tuned using a new RL-based approach to classification, with F1 and recall rewards.
### How to use:
First of all, you need to install the GLiClass library:
```bash
pip install gliclass
pip install -U 'transformers>=4.48.0'
```
Then you need to initialize a model and a pipeline:
```python
from gliclass import GLiClassModel, ZeroShotClassificationPipeline
from transformers import AutoTokenizer
model = GLiClassModel.from_pretrained("knowledgator/gliclass-modern-large-v2.0")
tokenizer = AutoTokenizer.from_pretrained("knowledgator/gliclass-modern-large-v2.0", add_prefix_space=True)
pipeline = ZeroShotClassificationPipeline(model, tokenizer, classification_type='multi-label', device='cuda:0')
text = "One day I will see the world!"
labels = ["travel", "dreams", "sport", "science", "politics"]
results = pipeline(text, labels, threshold=0.5)[0]  # take the first element because we passed a single text
for result in results:
print(result["label"], "=>", result["score"])
```
If you want to use it for NLI-style tasks, we recommend representing your premise as the text and your hypothesis as a label. You can provide several hypotheses, but the model works best with a single input hypothesis.
```python
# Initialize model and multi-label pipeline
text = "The cat slept on the windowsill all afternoon"
labels = ["The cat was awake and playing outside."]
results = pipeline(text, labels, threshold=0.0)[0]
print(results)
```
### Benchmarks:
Below, you can see the F1 scores on several text classification datasets. None of the tested models were fine-tuned on these datasets; all were evaluated in a zero-shot setting.
| Model | IMDB | AG_NEWS | Emotions |
|-----------------------------|------|---------|----------|
| [gliclass-modern-large-v2.0-init (399 M)](https://huggingface.co/knowledgator/gliclass-modern-large-v2.0-init) | 0.9137 | 0.7357 | 0.4140 |
| [gliclass-modern-base-v2.0-init (151 M)](https://huggingface.co/knowledgator/gliclass-modern-base-v2.0-init) | 0.8264 | 0.6637 | 0.2985 |
| [gliclass-modern-large-v2.0 (399 M)](https://huggingface.co/knowledgator/gliclass-modern-large-v2.0) | 0.9448 | 0.736 | 0.4970 |
| [gliclass-modern-base-v2.0 (151 M)](https://huggingface.co/knowledgator/gliclass-modern-base-v2.0) | 0.9188 | 0.7089 | 0.4250 |
| [gliclass-large-v1.0 (438 M)](https://huggingface.co/knowledgator/gliclass-large-v1.0) | 0.9404 | 0.7516 | 0.4874 |
| [gliclass-base-v1.0 (186 M)](https://huggingface.co/knowledgator/gliclass-base-v1.0) | 0.8650 | 0.6837 | 0.4749 |
| [gliclass-small-v1.0 (144 M)](https://huggingface.co/knowledgator/gliclass-small-v1.0) | 0.8650 | 0.6805 | 0.4664 |
| [Bart-large-mnli (407 M)](https://huggingface.co/facebook/bart-large-mnli) | 0.89 | 0.6887 | 0.3765 |
| [Deberta-base-v3 (184 M)](https://huggingface.co/cross-encoder/nli-deberta-v3-base) | 0.85 | 0.6455 | 0.5095 |
| [Comprehendo (184M)](https://huggingface.co/knowledgator/comprehend_it-base) | 0.90 | 0.7982 | 0.5660 |
| SetFit [BAAI/bge-small-en-v1.5 (33.4M)](https://huggingface.co/BAAI/bge-small-en-v1.5) | 0.86 | 0.5636 | 0.5754 |
Below you can find a comparison with other GLiClass models:
| Dataset | gliclass-modern-base-v2.0 | gliclass-modern-large-v2.0 | gliclass-modern-base-v2.0-init | gliclass-modern-large-v2.0-init |
|----------------------|-----------------------|-----------------------|---------------------|---------------------|
| CR | 0.8976 | 0.9198 | 0.9041 | 0.8980 |
| sst2 | 0.8525 | 0.9318 | 0.9011 | 0.9434 |
| sst5 | 0.2348 | 0.2147 | 0.1972 | 0.1123 |
| 20_news_groups | 0.351 | 0.3755 | 0.2448 | 0.2792 |
| spam | 0.483 | 0.6608 | 0.5074 | 0.6364 |
| financial_phrasebank | 0.3475 | 0.3157 | 0.2537 | 0.2562 |
| imdb | 0.9188 | 0.9448 | 0.8255 | 0.9137 |
| ag_news | 0.6835 | 0.7025 | 0.6050 | 0.6933 |
| emotion | 0.3925 | 0.4325 | 0.2474 | 0.3746 |
| cap_sotu | 0.3725 | 0.4157 | 0.2929 | 0.2919 |
| rotten_tomatoes | 0.6955 | 0.7357 | 0.6630 | 0.5928 |
| **AVERAGE:** | 0.5563 | 0.6045 | 0.5129 | 0.5447 |
## Citation
```bibtex
@misc{stepanov2025gliclassgeneralistlightweightmodel,
title={GLiClass: Generalist Lightweight Model for Sequence Classification Tasks},
author={Ihor Stepanov and Mykhailo Shtopko and Dmytro Vodianytskyi and Oleksandr Lukashov and Alexander Yavorskyi and Mykyta Yaroshenko},
year={2025},
eprint={2508.07662},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.07662},
}
```
|
kota123123/blockassist-bc-lithe_leggy_macaw_1755008158
|
kota123123
| 2025-08-12T14:34:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lithe leggy macaw",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T14:33:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lithe leggy macaw
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
knowledgator/gliclass-qwen-1.5B-v1.0
|
knowledgator
| 2025-08-12T14:33:41Z | 105 | 2 | null |
[
"onnx",
"safetensors",
"GLiClass",
"text classification",
"zero-shot",
"small language models",
"RAG",
"sentiment analysis",
"zero-shot-classification",
"en",
"dataset:MoritzLaurer/synthetic_zeroshot_mixtral_v0.1",
"arxiv:2508.07662",
"license:apache-2.0",
"region:us"
] |
zero-shot-classification
| 2024-09-10T07:38:16Z |
---
license: apache-2.0
datasets:
- MoritzLaurer/synthetic_zeroshot_mixtral_v0.1
language:
- en
metrics:
- f1
pipeline_tag: zero-shot-classification
tags:
- text classification
- zero-shot
- small language models
- RAG
- sentiment analysis
---
# ⭐ GLiClass: Generalist and Lightweight Model for Sequence Classification
This is an efficient zero-shot classifier inspired by the [GLiNER](https://github.com/urchade/GLiNER/tree/main) work. It demonstrates the same performance as a cross-encoder while being more compute-efficient because classification is done in a single forward pass.
It can be used for `topic classification`, `sentiment analysis` and as a reranker in `RAG` pipelines.
The model was trained on synthetic data and can be used in commercial applications.
This version of the model utilizes the [LLM2Vec](https://github.com/McGill-NLP/llm2vec/tree/main/llm2vec) approach for converting modern decoders into bi-directional encoders. It brings the following benefits:
* Enhanced performance and generalization capabilities;
* Support for Flash Attention;
* Extended context window.
### How to use:
First of all, you need to install the GLiClass library:
```bash
pip install gliclass
```
To use this particular Qwen-based model, you need a different `transformers` package version than the one llm2vec requires, so install it manually:
```bash
pip install transformers==4.44.1
```
Then you need to initialize a model and a pipeline:
```python
from gliclass import GLiClassModel, ZeroShotClassificationPipeline
from transformers import AutoTokenizer
model = GLiClassModel.from_pretrained("knowledgator/gliclass-qwen-1.5B-v1.0")
tokenizer = AutoTokenizer.from_pretrained("knowledgator/gliclass-qwen-1.5B-v1.0")
pipeline = ZeroShotClassificationPipeline(model, tokenizer, classification_type='multi-label', device='cuda:0')
text = "One day I will see the world!"
labels = ["travel", "dreams", "sport", "science", "politics"]
results = pipeline(text, labels, threshold=0.5)[0]  # take the first element because we passed a single text
for result in results:
print(result["label"], "=>", result["score"])
```
### Benchmarks:
While the model is somewhat comparable to the DeBERTa version in the zero-shot setting, it demonstrates state-of-the-art performance in the few-shot setting.

### Join Our Discord
Connect with our community on Discord for news, support, and discussion about our models. Join [Discord](https://discord.gg/dkyeAgs9DG).
## Citation
```bibtex
@misc{stepanov2025gliclassgeneralistlightweightmodel,
title={GLiClass: Generalist Lightweight Model for Sequence Classification Tasks},
author={Ihor Stepanov and Mykhailo Shtopko and Dmytro Vodianytskyi and Oleksandr Lukashov and Alexander Yavorskyi and Mykyta Yaroshenko},
year={2025},
eprint={2508.07662},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.07662},
}
```
|
knowledgator/gliclass-large-v1.0
|
knowledgator
| 2025-08-12T14:32:21Z | 906 | 5 |
transformers
|
[
"transformers",
"onnx",
"safetensors",
"GLiClass",
"text classification",
"zero-shot",
"small language models",
"RAG",
"sentiment analysis",
"zero-shot-classification",
"en",
"dataset:MoritzLaurer/synthetic_zeroshot_mixtral_v0.1",
"arxiv:2508.07662",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2024-07-03T15:21:45Z |
---
license: apache-2.0
datasets:
- MoritzLaurer/synthetic_zeroshot_mixtral_v0.1
language:
- en
metrics:
- f1
pipeline_tag: zero-shot-classification
tags:
- text classification
- zero-shot
- small language models
- RAG
- sentiment analysis
---
# ⭐ GLiClass: Generalist and Lightweight Model for Sequence Classification
This is an efficient zero-shot classifier inspired by the [GLiNER](https://github.com/urchade/GLiNER/tree/main) work. It demonstrates the same performance as a cross-encoder while being more compute-efficient because classification is done in a single forward pass.
It can be used for `topic classification`, `sentiment analysis` and as a reranker in `RAG` pipelines.
The model was trained on synthetic data and can be used in commercial applications.
### How to use:
First of all, you need to install the GLiClass library:
```bash
pip install gliclass
```
Then you need to initialize a model and a pipeline:
```python
from gliclass import GLiClassModel, ZeroShotClassificationPipeline
from transformers import AutoTokenizer
model = GLiClassModel.from_pretrained("knowledgator/gliclass-large-v1.0")
tokenizer = AutoTokenizer.from_pretrained("knowledgator/gliclass-large-v1.0")
pipeline = ZeroShotClassificationPipeline(model, tokenizer, classification_type='multi-label', device='cuda:0')
text = "One day I will see the world!"
labels = ["travel", "dreams", "sport", "science", "politics"]
results = pipeline(text, labels, threshold=0.5)[0]  # take the first element because we passed a single text
for result in results:
print(result["label"], "=>", result["score"])
```
### Benchmarks:
Below, you can see the F1 scores on several text classification datasets. None of the tested models were fine-tuned on these datasets; all were evaluated in a zero-shot setting.
| Model | IMDB | AG_NEWS | Emotions |
|-----------------------------|------|---------|----------|
| [gliclass-large-v1.0 (438 M)](https://huggingface.co/knowledgator/gliclass-large-v1.0) | 0.9404 | 0.7516 | 0.4874 |
| [gliclass-base-v1.0 (186 M)](https://huggingface.co/knowledgator/gliclass-base-v1.0) | 0.8650 | 0.6837 | 0.4749 |
| [gliclass-small-v1.0 (144 M)](https://huggingface.co/knowledgator/gliclass-small-v1.0) | 0.8650 | 0.6805 | 0.4664 |
| [Bart-large-mnli (407 M)](https://huggingface.co/facebook/bart-large-mnli) | 0.89 | 0.6887 | 0.3765 |
| [Deberta-base-v3 (184 M)](https://huggingface.co/cross-encoder/nli-deberta-v3-base) | 0.85 | 0.6455 | 0.5095 |
| [Comprehendo (184M)](https://huggingface.co/knowledgator/comprehend_it-base) | 0.90 | 0.7982 | 0.5660 |
| SetFit [BAAI/bge-small-en-v1.5 (33.4M)](https://huggingface.co/BAAI/bge-small-en-v1.5) | 0.86 | 0.5636 | 0.5754 |
Below you can find a comparison with other GLiClass models:
| Dataset | gliclass-small-v1.0-lw | gliclass-base-v1.0-lw | gliclass-large-v1.0-lw | gliclass-small-v1.0 | gliclass-base-v1.0 | gliclass-large-v1.0 |
|----------------------|-----------------------|-----------------------|-----------------------|---------------------|---------------------|---------------------|
| CR | 0.8886 | 0.9097 | 0.9226 | 0.8824 | 0.8942 | 0.9219 |
| sst2 | 0.8392 | 0.8987 | 0.9247 | 0.8518 | 0.8979 | 0.9269 |
| sst5 | 0.2865 | 0.3779 | 0.2891 | 0.2424 | 0.2789 | 0.3900 |
| 20_news_groups | 0.4572 | 0.3953 | 0.4083 | 0.3366 | 0.3576 | 0.3863 |
| spam | 0.5118 | 0.5126 | 0.3642 | 0.4089 | 0.4938 | 0.3661 |
| rotten_tomatoes | 0.8015 | 0.8429 | 0.8807 | 0.7987 | 0.8508 | 0.8808 |
| massive | 0.3180 | 0.4635 | 0.5606 | 0.2546 | 0.1893 | 0.4376 |
| banking | 0.1768 | 0.4396 | 0.3317 | 0.1374 | 0.2077 | 0.2847 |
| yahoo_topics | 0.4686 | 0.4784 | 0.4760 | 0.4477 | 0.4516 | 0.4921 |
| financial_phrasebank | 0.8665 | 0.8880 | 0.9044 | 0.8901 | 0.8955 | 0.8735 |
| imdb | 0.9048 | 0.9351 | 0.9429 | 0.8982 | 0.9238 | 0.9333 |
| ag_news | 0.7252 | 0.6985 | 0.7559 | 0.7242 | 0.6848 | 0.7503 |
| dair_emotion | 0.4012 | 0.3516 | 0.3951 | 0.3450 | 0.2357 | 0.4013 |
| capsotu | 0.3794 | 0.4643 | 0.4749 | 0.3432 | 0.4375 | 0.4644 |
| **Average:** | 0.5732 | 0.6183 | 0.6165 | 0.5401 | 0.5571 | 0.6078 |
Here you can see how the performance of the model grows as more examples are provided:
| Model | Num Examples | sst5 | spam | massive | banking | ag news | dair emotion | capsotu | Average |
|-----------------------------|--------------|--------|---------|---------|---------|---------|--------------|---------|-------------|
| gliclass-small-v1.0-lw | 0 | 0.2865 | 0.5118 | 0.318 | 0.1768 | 0.7252 | 0.4012 | 0.3794 | 0.3998428571|
| gliclass-base-v1.0-lw | 0 | 0.3779 | 0.5126 | 0.4635 | 0.4396 | 0.6985 | 0.3516 | 0.4643 | 0.4725714286|
| gliclass-large-v1.0-lw | 0 | 0.2891 | 0.3642 | 0.5606 | 0.3317 | 0.7559 | 0.3951 | 0.4749 | 0.4530714286|
| gliclass-small-v1.0 | 0 | 0.2424 | 0.4089 | 0.2546 | 0.1374 | 0.7242 | 0.345 | 0.3432 | 0.3508142857|
| gliclass-base-v1.0 | 0 | 0.2789 | 0.4938 | 0.1893 | 0.2077 | 0.6848 | 0.2357 | 0.4375 | 0.3611 |
| gliclass-large-v1.0 | 0 | 0.39 | 0.3661 | 0.4376 | 0.2847 | 0.7503 | 0.4013 | 0.4644 | 0.4420571429|
| gliclass-small-v1.0-lw | 8 | 0.2709 | 0.84026 | 0.62 | 0.6883 | 0.7786 | 0.449 | 0.4918 | 0.5912657143|
| gliclass-base-v1.0-lw | 8 | 0.4275 | 0.8836 | 0.729 | 0.7667 | 0.7968 | 0.3866 | 0.4858 | 0.6394285714|
| gliclass-large-v1.0-lw | 8 | 0.3345 | 0.8997 | 0.7658 | 0.848 | 0.84843 | 0.5219 | 0.508 | 0.67519 |
| gliclass-small-v1.0 | 8 | 0.3042 | 0.5683 | 0.6332 | 0.7072 | 0.759 | 0.4509 | 0.4434 | 0.5523142857|
| gliclass-base-v1.0 | 8 | 0.3387 | 0.7361 | 0.7059 | 0.7456 | 0.7896 | 0.4323 | 0.4802 | 0.6040571429|
| gliclass-large-v1.0 | 8 | 0.4365 | 0.9018 | 0.77 | 0.8533 | 0.8509 | 0.5061 | 0.4935 | 0.6874428571|
## Citation
```bibtex
@misc{stepanov2025gliclassgeneralistlightweightmodel,
title={GLiClass: Generalist Lightweight Model for Sequence Classification Tasks},
author={Ihor Stepanov and Mykhailo Shtopko and Dmytro Vodianytskyi and Oleksandr Lukashov and Alexander Yavorskyi and Mykyta Yaroshenko},
year={2025},
eprint={2508.07662},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.07662},
}
```
|
knowledgator/gliclass-base-v1.0
|
knowledgator
| 2025-08-12T14:31:58Z | 855 | 3 |
transformers
|
[
"transformers",
"onnx",
"safetensors",
"GLiClass",
"text classification",
"zero-shot",
"small language models",
"RAG",
"sentiment analysis",
"zero-shot-classification",
"en",
"dataset:MoritzLaurer/synthetic_zeroshot_mixtral_v0.1",
"arxiv:2508.07662",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2024-07-03T15:20:49Z |
---
license: apache-2.0
datasets:
- MoritzLaurer/synthetic_zeroshot_mixtral_v0.1
language:
- en
metrics:
- f1
pipeline_tag: zero-shot-classification
tags:
- text classification
- zero-shot
- small language models
- RAG
- sentiment analysis
---
# ⭐ GLiClass: Generalist and Lightweight Model for Sequence Classification
This is an efficient zero-shot classifier inspired by the [GLiNER](https://github.com/urchade/GLiNER/tree/main) work. It demonstrates the same performance as a cross-encoder while being more compute-efficient because classification is done in a single forward pass.
It can be used for `topic classification`, `sentiment analysis` and as a reranker in `RAG` pipelines.
The model was trained on synthetic data and can be used in commercial applications.
### How to use:
First of all, you need to install the GLiClass library:
```bash
pip install gliclass
```
Then you need to initialize a model and a pipeline:
```python
from gliclass import GLiClassModel, ZeroShotClassificationPipeline
from transformers import AutoTokenizer
model = GLiClassModel.from_pretrained("knowledgator/gliclass-base-v1.0")
tokenizer = AutoTokenizer.from_pretrained("knowledgator/gliclass-base-v1.0")
pipeline = ZeroShotClassificationPipeline(model, tokenizer, classification_type='multi-label', device='cuda:0')
text = "One day I will see the world!"
labels = ["travel", "dreams", "sport", "science", "politics"]
results = pipeline(text, labels, threshold=0.5)[0]  # take the first element because we passed a single text
for result in results:
print(result["label"], "=>", result["score"])
```
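If each text should receive exactly one label, the pipeline can be switched to single-label mode; a minimal sketch, assuming the `classification_type='single-label'` option of the GLiClass pipeline:
```python
single_pipeline = ZeroShotClassificationPipeline(
    model, tokenizer, classification_type='single-label', device='cuda:0'
)
results = single_pipeline(text, labels, threshold=0.5)[0]
# In single-label mode, take the highest-scoring label
best = max(results, key=lambda r: r["score"])
print(best["label"], "=>", best["score"])
```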
### Benchmarks:
Below, you can see the F1 scores on several text classification datasets. None of the tested models were fine-tuned on these datasets; all were evaluated in a zero-shot setting.
| Model | IMDB | AG_NEWS | Emotions |
|-----------------------------|------|---------|----------|
| [gliclass-large-v1.0 (438 M)](https://huggingface.co/knowledgator/gliclass-large-v1.0) | 0.9404 | 0.7516 | 0.4874 |
| [gliclass-base-v1.0 (186 M)](https://huggingface.co/knowledgator/gliclass-base-v1.0) | 0.8650 | 0.6837 | 0.4749 |
| [gliclass-small-v1.0 (144 M)](https://huggingface.co/knowledgator/gliclass-small-v1.0) | 0.8650 | 0.6805 | 0.4664 |
| [Bart-large-mnli (407 M)](https://huggingface.co/facebook/bart-large-mnli) | 0.89 | 0.6887 | 0.3765 |
| [Deberta-base-v3 (184 M)](https://huggingface.co/cross-encoder/nli-deberta-v3-base) | 0.85 | 0.6455 | 0.5095 |
| [Comprehendo (184M)](https://huggingface.co/knowledgator/comprehend_it-base) | 0.90 | 0.7982 | 0.5660 |
| SetFit [BAAI/bge-small-en-v1.5 (33.4M)](https://huggingface.co/BAAI/bge-small-en-v1.5) | 0.86 | 0.5636 | 0.5754 |
Below you can find a comparison with other GLiClass models:
| Dataset | gliclass-small-v1.0-lw | gliclass-base-v1.0-lw | gliclass-large-v1.0-lw | gliclass-small-v1.0 | gliclass-base-v1.0 | gliclass-large-v1.0 |
|----------------------|-----------------------|-----------------------|-----------------------|---------------------|---------------------|---------------------|
| CR | 0.8886 | 0.9097 | 0.9226 | 0.8824 | 0.8942 | 0.9219 |
| sst2 | 0.8392 | 0.8987 | 0.9247 | 0.8518 | 0.8979 | 0.9269 |
| sst5 | 0.2865 | 0.3779 | 0.2891 | 0.2424 | 0.2789 | 0.3900 |
| 20_news_groups | 0.4572 | 0.3953 | 0.4083 | 0.3366 | 0.3576 | 0.3863 |
| spam | 0.5118 | 0.5126 | 0.3642 | 0.4089 | 0.4938 | 0.3661 |
| rotten_tomatoes | 0.8015 | 0.8429 | 0.8807 | 0.7987 | 0.8508 | 0.8808 |
| massive | 0.3180 | 0.4635 | 0.5606 | 0.2546 | 0.1893 | 0.4376 |
| banking | 0.1768 | 0.4396 | 0.3317 | 0.1374 | 0.2077 | 0.2847 |
| yahoo_topics | 0.4686 | 0.4784 | 0.4760 | 0.4477 | 0.4516 | 0.4921 |
| financial_phrasebank | 0.8665 | 0.8880 | 0.9044 | 0.8901 | 0.8955 | 0.8735 |
| imdb | 0.9048 | 0.9351 | 0.9429 | 0.8982 | 0.9238 | 0.9333 |
| ag_news | 0.7252 | 0.6985 | 0.7559 | 0.7242 | 0.6848 | 0.7503 |
| dair_emotion | 0.4012 | 0.3516 | 0.3951 | 0.3450 | 0.2357 | 0.4013 |
| capsotu | 0.3794 | 0.4643 | 0.4749 | 0.3432 | 0.4375 | 0.4644 |
| **Average:** | 0.5732 | 0.6183 | 0.6165 | 0.5401 | 0.5571 | 0.6078 |
Here you can see how the performance of the model grows as more examples are provided:
| Model | Num Examples | sst5 | spam | massive | banking | ag news | dair emotion | capsotu | Average |
|-----------------------------|--------------|--------|---------|---------|---------|---------|--------------|---------|-------------|
| gliclass-small-v1.0-lw | 0 | 0.2865 | 0.5118 | 0.318 | 0.1768 | 0.7252 | 0.4012 | 0.3794 | 0.3998428571|
| gliclass-base-v1.0-lw | 0 | 0.3779 | 0.5126 | 0.4635 | 0.4396 | 0.6985 | 0.3516 | 0.4643 | 0.4725714286|
| gliclass-large-v1.0-lw | 0 | 0.2891 | 0.3642 | 0.5606 | 0.3317 | 0.7559 | 0.3951 | 0.4749 | 0.4530714286|
| gliclass-small-v1.0 | 0 | 0.2424 | 0.4089 | 0.2546 | 0.1374 | 0.7242 | 0.345 | 0.3432 | 0.3508142857|
| gliclass-base-v1.0 | 0 | 0.2789 | 0.4938 | 0.1893 | 0.2077 | 0.6848 | 0.2357 | 0.4375 | 0.3611 |
| gliclass-large-v1.0 | 0 | 0.39 | 0.3661 | 0.4376 | 0.2847 | 0.7503 | 0.4013 | 0.4644 | 0.4420571429|
| gliclass-small-v1.0-lw | 8 | 0.2709 | 0.84026 | 0.62 | 0.6883 | 0.7786 | 0.449 | 0.4918 | 0.5912657143|
| gliclass-base-v1.0-lw | 8 | 0.4275 | 0.8836 | 0.729 | 0.7667 | 0.7968 | 0.3866 | 0.4858 | 0.6394285714|
| gliclass-large-v1.0-lw | 8 | 0.3345 | 0.8997 | 0.7658 | 0.848 | 0.84843 | 0.5219 | 0.508 | 0.67519 |
| gliclass-small-v1.0 | 8 | 0.3042 | 0.5683 | 0.6332 | 0.7072 | 0.759 | 0.4509 | 0.4434 | 0.5523142857|
| gliclass-base-v1.0 | 8 | 0.3387 | 0.7361 | 0.7059 | 0.7456 | 0.7896 | 0.4323 | 0.4802 | 0.6040571429|
| gliclass-large-v1.0 | 8 | 0.4365 | 0.9018 | 0.77 | 0.8533 | 0.8509 | 0.5061 | 0.4935 | 0.6874428571|
## Citation
```bibtex
@misc{stepanov2025gliclassgeneralistlightweightmodel,
title={GLiClass: Generalist Lightweight Model for Sequence Classification Tasks},
author={Ihor Stepanov and Mykhailo Shtopko and Dmytro Vodianytskyi and Oleksandr Lukashov and Alexander Yavorskyi and Mykyta Yaroshenko},
year={2025},
eprint={2508.07662},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.07662},
}
```
|
Aleph-Alpha/tfree-hat-pretrained-7b-base
|
Aleph-Alpha
| 2025-08-12T14:31:27Z | 27 | 0 | null |
[
"safetensors",
"hierarchical_autoregressive_transformer",
"Aleph Alpha Research",
"pytorch",
"Hierarchical Autoregressive Transformer",
"HAT",
"custom_code",
"en",
"de",
"arxiv:2501.10322",
"license:other",
"region:us"
] | null | 2025-07-31T14:36:50Z |
---
language:
- en
- de
license: other
thumbnail: https://huggingface.co/Aleph-Alpha/Llama-TFree-HAT-Pretrained-7B-DPO/raw/main/source/aleph_alpha_logo_thumbnail.png
license_name: open-aleph-license
license_link: LICENSE
tags:
- Aleph Alpha Research
- pytorch
- Hierarchical Autoregressive Transformer
- HAT
model-index:
- name: TFree-HAT-Pretrained-7B-Base
results: []
---
<div align="center">
<img src="source/aleph_alpha_logo.svg" width="60%" alt="Aleph Alpha Research Logo" />
</div>
<div align="center" style="line-height: 1;">
<a href="https://aleph-alpha.com/research/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="source/aleph_alpha_homepage_badge.svg" style="display: inline-block; vertical-align: middle;" />
</a>
<a href="https://huggingface.co/Aleph-Alpha" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-AlephAlpha%20Research-e3ff00?color=e3ff00&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://twitter.com/Aleph__Alpha" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-AlephAlpha_Research-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/Aleph-Alpha/TFree-HAT-Pretrained-7B-Base/blob/main/LICENSE" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-Open Aleph License-white?&color=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<hr>
# TFree-HAT-Pretrained-7B-Base
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
This model card provides an overview of our **TFree-HAT-Pretrained-7B-Base** model, a foundation model developed by Aleph Alpha Research* and publicly available under the Open Aleph License, a license explicitly allowing for non-commercial research and educational use.
The model is based on our Hierarchical Autoregressive Transformer (HAT) architecture, originally described in our [paper](https://arxiv.org/abs/2501.10322). This novel architecture integrates character-level encoding and decoding with a word-level backbone, allowing for improved text compression (fewer sequence positions) and performance in the languages it has been trained on, potentially higher robustness to prompt changes, and improved adaptability to new languages & domains via fine-tuning.
The model was pre-trained in English & German and adapted to a maximum context length of 32900 words on carefully curated data in compliance with applicable EU and national regulations, including copyright and data privacy laws. It shows strong proficiency in German, while also beating Llama 3.1 on many benchmarks in English.
A model post-trained and direct-preference-optimized for English & German starting from this base model can be found under this [Link](https://huggingface.co/Aleph-Alpha/llama-tfree-hat-pretrained-7b-dpo).
# Model Access
We provide access to our models through the channels listed below.
- **HuggingFace**: The model’s weights as well as basic inference implementation are available on HuggingFace under the [Open Aleph License](https://github.com/Aleph-Alpha/.github/blob/main/oal.pdf), a license explicitly allowing for non-commercial research and educational use.
We do not collect PII (personally identifiable information) for any of these channels. We do not log user inputs to the models. We do not train on user data.
**Note**: The same models are made available to users regardless of their geographic location and their input language but subject to sanction regimes, technology export regulations, and other restrictions that may apply. The same offering is provided to all countries within and external to the European Union if no legal restrictions apply.
# How to use
## Inference
We provide an inference module compatible with HuggingFace Transformers for running model inference. We recommend pinning the transformers library to version 4.46.3. Before executing the inference example below, make sure the [hat-splitter package](https://pypi.org/project/hat-splitter/) is installed in your environment.
```shell
pip install 'hat-splitter>=0.1.9' 'transformers==4.46.3' torch
pip install flash_attn
```
Download model weights and run inference using the following example:
```python
import torch
from transformers import AutoModelForCausalLM
INPUT ="When was Rome founded?"
MODEL_ID = "Aleph-Alpha/TFree-HAT-Pretrained-7B-Base"
model = AutoModelForCausalLM.from_pretrained(
trust_remote_code=True,
pretrained_model_name_or_path=MODEL_ID,
attn_implementation="flash_attention_2",
).to("cuda", torch.bfloat16)
input_ids, cumulative_word_lengths = model._prepare_input(INPUT)
model_output = model.generate(
input_ids,
cumulative_seq_lengths_per_word=cumulative_word_lengths,
max_new_tokens=300,
use_cache=False,
)
print("Prompt: ", INPUT)
print("Completion: ", model_output.completion_text)
```
Please note that the realized inference speed depends strongly on the maturity of the inference implementation, beyond the intrinsic text compression of any model. Besides this Hugging Face transformers-based inference solution, we are also releasing a [vLLM-based inference solution](https://github.com/Aleph-Alpha/vllm) for our models that is optimized for batched inference. Please note that this vLLM inference for HAT is still under active development.
# Evaluation
**Performance**: Our T-Free models deliver performance on par with strong tokenizer-based models such as [Llama 3.1 8B Base](https://huggingface.co/meta-llama/Llama-3.1-8B). Respective benchmarks and results can be found in the tables below.
**Efficiency**: Our tokenizer-free approach results in improved text compression, providing a foundation for improved inference speed. We measure compression in terms of words processed across all languages and domains. We define the metric as **tokenizer fertility** or **bytes per sequence position**, where a higher value indicates better performance. Latency and throughput are currently out of scope for research-centric evaluations and will be addressed in the future. Currently, our evaluation framework automatically measures **bytes per sequence position** across datasets, allowing us to derive text compression scores and analyze variations across different dataset distributions. The resulting end-to-end efficiency depends on the inference implementation and lies beyond the scope of the inference implementation provided here and the reported compression scores.
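Concretely, the compression metric can be computed as UTF-8 bytes divided by the number of sequence positions a model spends on a text. A minimal sketch (the word count here assumes a splitter that merges leading whitespace into words, as described in the Training Details section):

```python
def bytes_per_position(text: str, num_positions: int) -> float:
    """Compression: UTF-8 bytes per sequence position (higher is better)."""
    return len(text.encode("utf-8")) / num_positions

text = "The quick brown fox jumps over the lazy dog."
# A word-level HAT spends one position per word -> 9 positions here.
print(round(bytes_per_position(text, num_positions=9), 2))  # 4.89
```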
**Disclaimer**: The results presented below were generated using our internal inference implementation, not the inference module mentioned above. As a sanity check, we reproduced some of the benchmarks using our evaluation framework with the huggingface inference code, but other results might still deviate slightly. We will also make source-available both our evaluation framework and a [high-performance vLLM integration](https://github.com/Aleph-Alpha/vllm) for this model to ensure reproducibility.
**Metric Glossary**
`log_acc`: Average Accuracy Loglikelihood<br>
`norm_log_acc`: Average Normalized Loglikelihood Accuracy<br>
`comp_acc`: Average Completion Accuracy<br>
`norm_prob_mass`: Average Probability Mass Normalized<br>
`bleu`: Average BLEU Score<br>
`rouge_gm`: Average ROUGE-Geometric-Mean<br>
`F1`: Average F1<br>
`CS`: Chatbot Style<br>
`IF`: Instruction Following<br>
`LC`: Language Consistency<br>
`CI`: Concordance Index<br>
`ES`: Exponential Similarity
## Pre-training Benchmarks
| Group | Task | Metric Name | Num Fewshot | [TFree-HAT-Pretrained-7B-Base](https://huggingface.co/Aleph-Alpha/TFree-HAT-Pretrained-7B-Base) | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [TFree-HAT-Pretrained-7B-Base](https://huggingface.co/Aleph-Alpha/TFree-HAT-Pretrained-7B-Base) Compression | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) Compression |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Knowledge | MMLU | `norm_log_acc` | 5 | 0.664 | **0.668** | **5.184** | 4.278 |
| Knowledge | MMLU Pro | `norm_log_acc` | 5 | **0.386** | 0.367 | **4.734** | 3.731 |
| Knowledge | OpenBookQA | `norm_log_acc` | 10 | 0.360 | **0.366** | **4.982** | 4.724 |
| Knowledge | TriviaQA | `comp_acc` | 10 | 0.658 | **0.695** | **5.317** | 4.221 |
| Knowledge | TruthfulQA | `norm_prob_mass` | 6 | **0.306** | 0.279 | **4.945** | 4.197 |
| Reasoning | ARC Challenge | `norm_log_acc` | 25 | **0.587** | 0.538 | **5.514** | 4.924 |
| Reasoning | Winogrande | `norm_log_acc` | 5 | **0.754** | 0.747 | **5.158** | 4.909 |
| German | MMMLU | `norm_log_acc` | 5 | **0.618** | 0.576 | **6.056** | 3.410 |
| German | WMT16 | `bleu` | 5 | 34.405 | **34.998** | **5.968** | 4.210 |
| German | WMT20 | `bleu` | 5 | **33.240** | 32.892 | **6.269** | 4.222 |
| Math | GSM8K | `comp_acc` | 8 | **0.528** | **0.528** | **3.840** | 3.332 |
| Long context | GSM8K | `comp_acc` | 16 | 0.536 | --- | 3.837 | --- |
| Long context | Long Bench v2 | `norm_log_acc` | 10 | 0.336 | --- | 5.125 | --- |
| Long context German | Long Bench v2 | `norm_log_acc` | 10 | 0.233 | --- | 5.872 | --- |
| Safety | Winogender | `norm_log_acc` | 5 | **0.671** | 0.636 | **5.232** | 4.799 |
# Training Details
## Model Architecture
The model uses a hierarchical autoregressive transformer (HAT) architecture consisting of three components: encoder, backbone, and decoder, together with connector layers between components. Encoder, backbone, and decoder are all instances of autoregressive transformers with pre-norm residual blocks in the style of Llama, using a SwiGLU unit as a feed-forward block, with all model parameters active during training and inference. The backbone model uses standard causal attention, while the encoder and decoder use local causal attention with a finite look-back window. The architecture of the backbone largely follows the design of Llama 3.1 8B (with embedding and language modeling head removed and weights randomly initialized). In addition, we added per-head QK-norm in the backbone, which we found important for training stability.
The encoder processes input text as a sequence of UTF-8 bytes and produces a sequence of activations of the same length. This sequence is then split into chunks corresponding to words or other semantic units in the text (this is further explained below). In the encoder-backbone connector layer, for each word, a learned latent vector cross-attends to its corresponding chunk of encoder activations. The resulting sequence of latent vectors then serves as input to the backbone. The backbone processes this latent sequence and produces a sequence of word-level representations. Finally, the decoder module is another transformer that acts on the byte-level activations and has an LM head that produces next-byte probabilities. To make use of the higher level information stored in the word-level embeddings during decoding, another cross-attention mechanism is used. In each transformer block of the decoder, every byte-level position cross-attends to the backbone’s word-level representations that correspond to the words preceding this byte.
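For intuition, here is a minimal, self-contained sketch of this data flow in PyTorch. It is illustrative only: the stand-in modules, toy dimensions, and the omission of causal masking are our assumptions, not the actual implementation (the real encoder, backbone, and decoder are full transformers; see the tables below).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_enc, d_bb = 64, 128  # toy sizes; the real model uses 1024 and 4096

encoder = nn.Linear(256, d_enc)    # stand-in for the byte-level encoder
backbone = nn.Linear(d_bb, d_bb)   # stand-in for the word-level backbone
conn_x_attn = nn.MultiheadAttention(d_bb, num_heads=4, kdim=d_enc,
                                    vdim=d_enc, batch_first=True)
dec_x_attn = nn.MultiheadAttention(d_enc, num_heads=4, kdim=d_bb,
                                   vdim=d_bb, batch_first=True)
lm_head = nn.Linear(d_enc, 256)    # next-byte logits over 256 byte values
word_query = nn.Parameter(torch.randn(1, 1, d_bb))  # learned latent per word

byte_ids = torch.randint(0, 256, (1, 12))   # a 12-byte input
word_boundaries = [(0, 6), (6, 12)]          # chunks from the word splitter

# 1) Byte-level encoding (local causal attention in the real model).
byte_acts = encoder(F.one_hot(byte_ids, 256).float())       # (1, 12, d_enc)

# 2) Encoder->backbone connector: per word, a learned latent vector
#    cross-attends to the encoder activations of its own chunk.
word_inputs = torch.cat([
    conn_x_attn(word_query, byte_acts[:, s:e], byte_acts[:, s:e])[0]
    for s, e in word_boundaries
], dim=1)                                                   # (1, 2, d_bb)

# 3) Word-level backbone (standard causal attention in the real model).
word_reprs = backbone(word_inputs)                          # (1, 2, d_bb)

# 4) Decoder: byte positions cross-attend to the word-level
#    representations of preceding words, then an LM head predicts bytes.
dec_out, _ = dec_x_attn(byte_acts, word_reprs, word_reprs)
next_byte_logits = lm_head(dec_out)                         # (1, 12, 256)
print(next_byte_logits.shape)
```

In the real model, each byte position may only cross-attend to words preceding it; the sketch omits this masking for brevity.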
## Encoder module
| | **119M** |
| --- | --- |
| Number of layers | 6 |
| Number of attention heads | 8 |
| Head size | 128 |
| Number of Key-Value heads | 8 |
| Hidden size | 1024 |
| Cross-attention hidden size | 4096 |
| MLP expansion factor | 2.75 |
| MLP type | SwiGLU |
| Sequence length | 262144 |
| Position embeddings | RoPE with base 1e5 |
| Attention type | causal, local with window size 768 |
| QK-norm | disabled |
## Backbone module
| | **7B** |
| --- | --- |
| Number of layers | 32 |
| Number of attention heads | 32 |
| Head size | 128 |
| Number of Key-Value heads | 8 |
| Hidden size | 4096 |
| MLP expansion factor | 3.5 |
| MLP type | SwiGLU |
| Sequence length | 32900 |
| Position embeddings | RoPE with base 5e5 |
| Attention type | causal |
| QK-norm | per head |
## Decoder module
| | **94M** |
| --- | --- |
| Number of layers | 4 |
| Number of attention heads | 8 |
| Head size | 128 |
| Number of Key-Value heads | 8 |
| Hidden size | 1024 |
| Cross-attention hidden size | 4096 |
| MLP expansion factor | 2.75 |
| MLP type | SwiGLU |
| Sequence length | 262144 |
| Position embeddings | RoPE with base 1e5 |
| Attention type | causal, local with window size 768 |
| QK-norm | disabled |
**Parameter count**
Total: `7,192,507,136`
Encoder: `119,293,696`
Backbone: `6,979,592,192`
Decoder: `93,621,248`
We note that one distinctive property of our tokenizer-free architectures is that encoder and decoder are substantially smaller than typical embedding and language model head layers of tokenizer-based models. Because of this, while our models share the architecture with Llama 3.1 8B (plus the added QK-norm), they are closer to 7B than 8B parameters in total.
**Word splitter**
To split arbitrary byte sequences, we adopted the guidelines from [UAX #29](https://unicode.org/reports/tr29/), which splits text into words for common Western languages but also produces meaningful semantic units for other types of languages (e.g. Chinese, Japanese, Korean). From now on, we refer to these splits as words.
We also merged leading whitespace and trailing punctuation into the words to reduce sequence length at the word level.
To improve the processing of code and math documents, we made additional adjustments to the Unicode splitter. First, we split instances of camel case like FooBar into Foo and Bar. Second, we treated math symbols (again per the Unicode standard) as separate words.
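A toy illustration of these rules (a simplified heuristic, not the actual UAX #29 implementation shipped in the hat-splitter package):

```python
import re

def split_words(text: str) -> list[str]:
    """Heuristic word splitter: merges leading whitespace and trailing
    punctuation into words, splits camel case, and treats math symbols as
    separate words. Illustrative only; the real splitter follows UAX #29."""
    text = re.sub(r"(?<=[a-z])(?=[A-Z])", "\x00", text)  # FooBar -> Foo|Bar
    pattern = re.compile(r"\s*(?:\w+[.,;:!?'\"]*|[+\-*/=<>^])")
    words = []
    for chunk in text.split("\x00"):
        words.extend(m.group(0) for m in pattern.finditer(chunk))
    return words

print(split_words("The fooBar result, 1 + 2"))
# ['The', ' foo', 'Bar', ' result,', ' 1', ' +', ' 2']
```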
## Pre-Training
**Approach**
We randomly initialized all model parameters. The model was then trained on the next-byte-prediction objective on a large and diverse document corpus (see below). Initially, we trained on sequences of up to 3500 words for a total of nearly 4T words. We used a global batch size of 1024 (3.5M words) and followed a warmup-stable-decay schedule with a warmup of 5000 steps, a stable learning rate of 2e-3 for 945000 steps, and an inverse-square-root cooldown to a learning rate of 0 over the last 50000 steps. We employed weight decay of 0.05 for all parameters except the embedding and normalization parameters. We employed per-head QK-norm and attention logit softcapping at 100, which we found to be important for training stability during pre-training.
We then continued training on sequences of up to 32900 words for another 2500 steps with a global batch size of 128, totaling 10.5B words, upweighting longer documents to make use of the extended context. We used a warmup-stable-decay learning rate schedule with 500 steps of warmup, a stable learning rate of 2e-4, and a final decay to 0 over the last 500 steps. We disabled attention logit softcapping during this long-context adaptation so that it is not required during inference.
The training was conducted in our [Scaling framework](https://github.com/Aleph-Alpha/scaling).
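For reference, a minimal sketch of the warmup-stable-decay schedule from the first stage (step counts and peak rate as stated above; the exact parameterization of the inverse-square-root cooldown is our assumption):

```python
def wsd_lr(step: int, warmup: int = 5_000, stable: int = 945_000,
           decay: int = 50_000, peak_lr: float = 2e-3) -> float:
    """Warmup-stable-decay: linear warmup, constant plateau, then an
    inverse-square-root cooldown to 0 over the final `decay` steps."""
    if step < warmup:                       # linear warmup
        return peak_lr * step / warmup
    if step < warmup + stable:              # stable phase
        return peak_lr
    t = min(step - warmup - stable, decay)  # 0 .. decay
    end = (1.0 + decay) ** -0.5
    return peak_lr * ((1.0 + t) ** -0.5 - end) / (1.0 - end)

print(wsd_lr(0), wsd_lr(500_000), wsd_lr(1_000_000))  # 0.0 0.002 0.0
```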
**Data sources**
The model was trained on a filtered subset of diverse corpora of text data including proprietary curated datasets, high-quality web content, public domain sources, German texts, mathematical texts, and programming code. The proportions and sources of data we used in the pre-training were:
English Language Data (70%)
- curated web and synthetic data (63%)
- high quality curated sources such as Wikipedia and public domain books (7%)
German Language Data (7%)
- curated web and synthetic data (6.3%)
- high quality curated sources such as Wikipedia and public domain books (0.7%)
Mathematical Content (5%)
- mathematical code and proofs (2%)
- mathematical word problems and equations (3%)
Programming Code (18%)
- general programming code (11%)
- high-quality and synthetic Python code (7%)
## Data curation
We applied a range of curation techniques, e.g., for German as described in [Aleph-Alpha-GermanWeb](https://huggingface.co/datasets/Aleph-Alpha/Aleph-Alpha-GermanWeb). These include but are not limited to:
- URL filtering. We used a URL filter developed to filter out fraudulent, harmful, and illegal content from an explicit blocklist, e.g., adult websites, or URLs containing words associated with fraudulent, harmful, or adult content.
- Text extraction. Natural language text embedded in HTML and other web markup was extracted using the [Resiliparse](https://github.com/chatnoir-eu/chatnoir-resiliparse) text extractor.
- Language identification. We used a [fastText language classifier](https://fasttext.cc/docs/en/language-identification.html) trained on character n-grams from Wikipedia to identify, retain, and sort texts into English and German.
- Repetition removal. We applied heuristic methods for detection and removal of repetitions on the line, paragraph, and character level.
- Document- and line-level filtering. We utilized additional document-level heuristics to ensure documents had a reasonable number and quality of words, naturalistic symbol-to-word and number-to-word ratios, were not predominantly made up of bullet points, and contained a sufficient quantity of real words.
- Deduplication. We used exact and fuzzy deduplication to remove duplicate documents.
## Synthetic data
We also generated synthetic data by using permissively-licensed LLMs.
## Legal Compliance
We acknowledge and abide by applicable national and international regulations, including copyright, data privacy, and other related legislation. Any text and data mining by us is performed in compliance with Directive (EU) 2019/790 and its respective national transposition. During the training and fine-tuning of our models, we comply with applicable data privacy laws, including Regulation (EU) 2016/679 (GDPR) and national data privacy regulations. To the extent possible and foreseeable, we also took legislation with forthcoming obligations into account, such as the obligations for General Purpose AI Models under Regulation (EU) 2024/1689 (EU AI Act), and will constantly monitor such developments and adapt our products and this model card accordingly.
# Resource Usage
## Compute & Training Efficiency
The following table shows the compute resources used in the training stages for the 7B models.
| **Model** | **Training phase** | **GPUs** | **Approximate average power consumption per GPU** | **Approximate GPU hours** |
| --- | --- | --- | --- | --- |
| 7B | Pre-training (part 1) | 256 x H200 | 460W | 111,822 |
| 7B | Pre-training (part 2) | 256 x H100 | 460W | 151,289 |
| 7B | Long context adaptation | 256 x H100 | 190W | 5,328 |
## Environmental Impact
Our H200 and A100 infrastructure runs entirely on 100% renewable energy, ensuring that no CO₂ emissions are directly incurred from training. In addition to this, the H200 data center boasts a power usage effectiveness (PUE) of ≤1.2. Its operation also maintains a net-zero water footprint. Specific numbers on renewable energy usage for the H100 GPUs are not yet available to us.
To estimate the carbon footprint of inference, we base our calculations on publicly available data from the infrastructure provider and, where applicable, standard emissions accounting methodology. We report:
- **Carbon emitted**: GPU runtime emissions
- **Carbon emitted accounting for PUE**: GPU runtime emissions scaled by the data center's PUE
Because the data centers operate fully on renewable energy, both metrics for their operation (excluding infrastructure-related emissions, e.g., initial chip manufacturing) are effectively zero. For the H100 GPU infrastructure, no information has been made available to us.
| Metric | H200 GPU | H100 GPU | A100 GPU |
| --- | --- | --- | --- |
| Carbon emitted | 0 kg CO₂ | no information available | 0 kg CO₂ |
| Carbon emitted accounting for PUE | 0 kg CO₂ | no information available | 0 kg CO₂ |
## Power Consumption
| GPU Model | Max Power (W) |
| --- | --- |
| A100 | 400 W |
| H100 | 700 W |
| H200 | 700 W |
Numbers may be contextualized with reference to publicly available studies, such as the carbon footprint of language model training.
# Intended Use
These models are intended to be deployed as components of AI systems or applications. Use-cases and the model's capabilities include but are not limited to: text generation, classification, summarization, question answering, and labeling. Note that applications might require additional model adaptations or components for guarding against unwanted application behavior or model output.
## Non-Permitted Use
Our models shall not be used for illegal or unlawful actions of any kind and with any illegal or unlawful content. This includes in particular prohibited practices according to Article 5 of Regulation (EU) 2024/1689 (EU AI Act) and other activities such as engaging in terrorism, violence, human trafficking, illegal distribution of materials to minors, sexual solicitation, any other criminal activities, harassment, discrimination, creating or promoting malicious code or activities risking death or harm, including those related to military or nuclear applications, and activities not in compliance with sanction regimes, technology export regulations, and other restrictions that may apply. The models are to be used following ethical standards. The utilization of our technology is always governed by, and may be limited in accordance with, our Terms and Conditions, the Open Aleph License, or any specific agreement we might have established with you.
Although we do not inspect the requests sent to our API, we regularly review and monitor potential violations that may be related to our models and depending on the circumstances of the specific case take legal action against them. This includes but is not limited to, enforcement to remove published model content, requesting compensation for damages caused, and account termination or removal of credits.
For non-anonymous reports, we also provide an appeals mechanism for usage policy violations via our dedicated contact address [violations@aleph-alpha.com](mailto:violations@aleph-alpha.com) to communicate with us.
Customers and partners are enabled to use our [ticketing system](https://servicedesk.aleph-alpha.de/external) for appeals, claims, and feedback.
# Risks and Limitations
**Note:** Language models are **not agents** and not optimized for prescriptive actions. The use of language models in high-stake environments, for critical decisions or to support a user's wellbeing should be performed with additional guardrails in place.
## Risk Categories
In the following sections, we describe risk categories and provide examples of completions we would consider inappropriate or harmful. We then describe steps to minimize these risks.
**Harmful Language**
Large language models can sometimes generate undesired outputs that are unsuitable for certain applications. This includes producing content with harmful language, discriminative content, inappropriate tone and style, systemic biases, or suggestions that might encourage illegal actions. Such outputs can also include incorrect, outdated information, or material that is not suitable for all ages. While we constantly take efforts to reduce the likelihood of such undesired outputs, this possibility can never be fully ruled out. To minimize these issues, the following strategies can be employed:
- Abide by the guidance on illegal use provided for in this Model Card.
- Crafting prompts carefully to guide the model's output more effectively.
- Utilizing a finetuned model (often referred to as a control or instruct model) that prioritizes using explicitly provided information.
- Employing a finetuned model designed to maintain an appropriate tone and style, including avoiding offensive language.
- Conducting additional validations at the application level to ensure output quality and appropriateness.
### Systemic Biases
Language models obtain world-knowledge from their pre-training data and may therefore exhibit the same systematic biases that are present in the data. Differing deployment scenarios (including differing cultural contexts) can expose systematic biases in different ways. We acknowledge the cultural diversity of communities and users inside and outside the EU. For larger deployments, we encourage users to track systematic biases relevant to their use-case, and we are happy to consult on bespoke fine-tunings to alleviate such biases.
### Outdated World Knowledge
| **Prompt** | **Completion** |
| --- | --- |
| What was the warmest year in human history? | The warmest year in human history, based on global average temperatures, is 2016. However, it's important to note that the ranking of the warmest years can vary slightly depending on the dataset used and the methodology applied. |
Pre-training was performed using a fixed dataset, created at a fixed date in the past. Accordingly, the world knowledge of foundation models is limited to the information contained in their training data. More recent information may not be known to the model or may be misunderstood when presented as input during live usage. In the above example, the model is convinced that 2016 was the warmest year in human history; however, 2023 ended up breaking that record. This information cannot be known by the model due to its knowledge cutoff.
Risks include:
- Generation of personally identifiable information. Models are not explicitly trained to provide such information, but may seem to provide personally identifiable information. This does not necessarily imply the presence of such information in training data, as hallucination is possible.
- Generation of unintended, irrelevant, or repetitive outputs. This includes the production of incorrect or outdated information.
Risks may be mitigated by:
- Injecting context, where relevant.
- Crafting prompts carefully to guide the model's output more effectively.
- Performing validations on the application layer, e.g., classifying the output.
- Using the repetition penalty, especially in the case of repetition, or other parameters available in the API (see [documentation](https://docs.aleph-alpha.com/api/complete/)).
- Avoiding use cases targeted at retrieval of personally identifiable information.
### Political Bias
Our models have not been optimized to represent a political opinion or take a specific point of view. They may generate outputs that contradict a user's opinion or expectation, e.g., produce hateful, violent or inappropriate, biased, or discriminatory content. Such behavior may be addressed by:
- Crafting prompts carefully to guide the model's output more effectively.
- Performing validations on the application layer, e.g., via Red-Teaming or classifying the output.
### Mistaken for a Human
| **Prompt** | **Completion** |
| --- | --- |
| Good morning, today is such a beautiful day! | Hello! It sounds like you're starting your day on a positive note. I'm glad to hear that the weather is beautiful. How are you feeling today? Is there anything specific you'd like to discuss or any tasks you'd like some help with? Whether it's planning your day, sharing some thoughts, or getting advice on something, I'm here to assist! Enjoy your day! |
Users may attribute human traits to AI models. Moreover, content generated by the model is not explicitly detectable as AI-generated at this point. It is therefore required to:
- Inform end users that they are interacting with or reading output of an AI.
- Design the system in a way that mitigates the impact of unintended interpretation of the output.
### Other Errors
Any AI module can produce errors, even after implementing all the recommended measures. When integrating foundation language models into an application, users should:
- be aware of the risk of (harmful) failure cases and implement the use case in a way that mitigates such risks.
- be aware that foundation models do not contain application logic, e.g., content filters. Enforcement policies relevant to the use case need to be implemented in the application layer.
- avoid unsupervised use in high-stakes environments.
- validate output with adequate measures.
### Mitigation Approach
We specifically tailor model alignment and risk mitigation techniques to each user-facing application built on top of our models, working closely with our customers to refine them according to their unique requirements. Our intention is for these models to undergo further fine-tuning by us and our customers, utilizing their own datasets alongside our support and datasets to ensure suitability for end-user applications, including harm mitigation efforts. Our customers are responsible for adhering to the terms and conditions when aligning the models in their downstream applications.
### Reproducibility
Some inference parameters, e.g., temperature, lead to the random sampling of outputs, which precludes the reproducibility of outputs. Even when such parameters are not in use, outputs may diverge slightly on a numeric level for technical reasons. One may implement the following measures if needed:
- Logging of past model outputs on the application layer (Aleph Alpha Research is not storing any data and/or using any data provided in prompts for the training of its LLMs).
This list of risks, biases, and limitations may not be complete, as improving the understanding and behavior of language models is an ongoing research topic in the AI science community.
\*Aleph Alpha Research refers to Aleph Alpha Research GmbH
[hat-paper]: https://arxiv.org/abs/2501.10322
|
knowledgator/gliclass-base-v1.0-lw
|
knowledgator
| 2025-08-12T14:30:28Z | 214 | 2 |
transformers
|
[
"transformers",
"onnx",
"safetensors",
"GLiClass",
"text classification",
"zero-shot",
"small language models",
"RAG",
"sentiment analysis",
"zero-shot-classification",
"en",
"dataset:MoritzLaurer/synthetic_zeroshot_mixtral_v0.1",
"arxiv:2508.07662",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2024-07-03T05:59:00Z |
---
license: apache-2.0
datasets:
- MoritzLaurer/synthetic_zeroshot_mixtral_v0.1
language:
- en
metrics:
- f1
pipeline_tag: zero-shot-classification
tags:
- text classification
- zero-shot
- small language models
- RAG
- sentiment analysis
---
# ⭐ GLiClass: Generalist and Lightweight Model for Sequence Classification
This is an efficient zero-shot classifier inspired by [GLiNER](https://github.com/urchade/GLiNER/tree/main) work. It demonstrates the same performance as a cross-encoder while being more compute-efficient because classification is done in a single forward pass.
It can be used for `topic classification`, `sentiment analysis` and as a reranker in `RAG` pipelines.
The model was trained on synthetic data and can be used in commercial applications.
This version of the model uses a layer-wise selection of features that enables a better understanding of different levels of language.
### How to use:
First of all, you need to install GLiClass library:
```bash
pip install gliclass
```
Then you need to initialize a model and a pipeline:
```python
from gliclass import GLiClassModel, ZeroShotClassificationPipeline
from transformers import AutoTokenizer
model = GLiClassModel.from_pretrained("knowledgator/gliclass-base-v1.0-lw")
tokenizer = AutoTokenizer.from_pretrained("knowledgator/gliclass-base-v1.0-lw")
pipeline = ZeroShotClassificationPipeline(model, tokenizer, classification_type='multi-label', device='cuda:0')
text = "One day I will see the world!"
labels = ["travel", "dreams", "sport", "science", "politics"]
results = pipeline(text, labels, threshold=0.5)[0]  # because we pass a single text
for result in results:
print(result["label"], "=>", result["score"])
```
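For RAG reranking, the same pipeline can score passages against the query used as the sole label. An illustrative sketch reusing the `pipeline` object from above (a threshold of 0.0 keeps all scores):

```python
# Rerank retrieved passages by scoring each against the query as a label.
query = "What are the symptoms of diabetes?"
passages = [
    "Diabetes symptoms include increased thirst and frequent urination.",
    "The Roman Empire lasted for over a thousand years.",
]
scores = [pipeline(p, [query], threshold=0.0)[0][0]["score"] for p in passages]
for score, passage in sorted(zip(scores, passages), reverse=True):
    print(f"{score:.3f} {passage}")
```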
### Benchmarks:
Below, you can see the F1 scores on several text classification datasets. None of the tested models were fine-tuned on those datasets; all were evaluated in a zero-shot setting.
| Model | IMDB | AG_NEWS | Emotions |
|-----------------------------|------|---------|----------|
| [gliclass-large-v1.0 (438 M)](https://huggingface.co/knowledgator/gliclass-large-v1.0) | 0.9404 | 0.7516 | 0.4874 |
| [gliclass-base-v1.0 (186 M)](https://huggingface.co/knowledgator/gliclass-base-v1.0) | 0.8650 | 0.6837 | 0.4749 |
| [gliclass-small-v1.0 (144 M)](https://huggingface.co/knowledgator/gliclass-small-v1.0) | 0.8650 | 0.6805 | 0.4664 |
| [Bart-large-mnli (407 M)](https://huggingface.co/facebook/bart-large-mnli) | 0.89 | 0.6887 | 0.3765 |
| [Deberta-base-v3 (184 M)](https://huggingface.co/cross-encoder/nli-deberta-v3-base) | 0.85 | 0.6455 | 0.5095 |
| [Comprehendo (184M)](https://huggingface.co/knowledgator/comprehend_it-base) | 0.90 | 0.7982 | 0.5660 |
| SetFit [BAAI/bge-small-en-v1.5 (33.4M)](https://huggingface.co/BAAI/bge-small-en-v1.5) | 0.86 | 0.5636 | 0.5754 |
Below you can find a comparison with other GLiClass models:
| Dataset | gliclass-small-v1.0-lw | gliclass-base-v1.0-lw | gliclass-large-v1.0-lw | gliclass-small-v1.0 | gliclass-base-v1.0 | gliclass-large-v1.0 |
|----------------------|-----------------------|-----------------------|-----------------------|---------------------|---------------------|---------------------|
| CR | 0.8886 | 0.9097 | 0.9226 | 0.8824 | 0.8942 | 0.9219 |
| sst2 | 0.8392 | 0.8987 | 0.9247 | 0.8518 | 0.8979 | 0.9269 |
| sst5 | 0.2865 | 0.3779 | 0.2891 | 0.2424 | 0.2789 | 0.3900 |
| 20_news_groups | 0.4572 | 0.3953 | 0.4083 | 0.3366 | 0.3576 | 0.3863 |
| spam | 0.5118 | 0.5126 | 0.3642 | 0.4089 | 0.4938 | 0.3661 |
| rotten_tomatoes | 0.8015 | 0.8429 | 0.8807 | 0.7987 | 0.8508 | 0.8808 |
| massive | 0.3180 | 0.4635 | 0.5606 | 0.2546 | 0.1893 | 0.4376 |
| banking | 0.1768 | 0.4396 | 0.3317 | 0.1374 | 0.2077 | 0.2847 |
| yahoo_topics | 0.4686 | 0.4784 | 0.4760 | 0.4477 | 0.4516 | 0.4921 |
| financial_phrasebank | 0.8665 | 0.8880 | 0.9044 | 0.8901 | 0.8955 | 0.8735 |
| imdb | 0.9048 | 0.9351 | 0.9429 | 0.8982 | 0.9238 | 0.9333 |
| ag_news | 0.7252 | 0.6985 | 0.7559 | 0.7242 | 0.6848 | 0.7503 |
| dair_emotion | 0.4012 | 0.3516 | 0.3951 | 0.3450 | 0.2357 | 0.4013 |
| capsotu | 0.3794 | 0.4643 | 0.4749 | 0.3432 | 0.4375 | 0.4644 |
|Average:|0.5732|0.6183|0.6165|0.5401|0.5571|0.6078|
Here you can see how the performance of the model grows as more examples are provided:
| Model | Num Examples | sst5 | spam | massive | banking | ag news | dair emotion | capsotu | Average |
|-----------------------------|--------------|--------|---------|---------|---------|---------|--------------|---------|-------------|
| gliclass-small-v1.0-lw | 0 | 0.2865 | 0.5118 | 0.318 | 0.1768 | 0.7252 | 0.4012 | 0.3794 | 0.3998428571|
| gliclass-base-v1.0-lw | 0 | 0.3779 | 0.5126 | 0.4635 | 0.4396 | 0.6985 | 0.3516 | 0.4643 | 0.4725714286|
| gliclass-large-v1.0-lw | 0 | 0.2891 | 0.3642 | 0.5606 | 0.3317 | 0.7559 | 0.3951 | 0.4749 | 0.4530714286|
| gliclass-small-v1.0 | 0 | 0.2424 | 0.4089 | 0.2546 | 0.1374 | 0.7242 | 0.345 | 0.3432 | 0.3508142857|
| gliclass-base-v1.0 | 0 | 0.2789 | 0.4938 | 0.1893 | 0.2077 | 0.6848 | 0.2357 | 0.4375 | 0.3611 |
| gliclass-large-v1.0 | 0 | 0.39 | 0.3661 | 0.4376 | 0.2847 | 0.7503 | 0.4013 | 0.4644 | 0.4420571429|
| gliclass-small-v1.0-lw | 8 | 0.2709 | 0.84026 | 0.62 | 0.6883 | 0.7786 | 0.449 | 0.4918 | 0.5912657143|
| gliclass-base-v1.0-lw | 8 | 0.4275 | 0.8836 | 0.729 | 0.7667 | 0.7968 | 0.3866 | 0.4858 | 0.6394285714|
| gliclass-large-v1.0-lw | 8 | 0.3345 | 0.8997 | 0.7658 | 0.848 | 0.84843 | 0.5219 | 0.508 | 0.67519 |
| gliclass-small-v1.0 | 8 | 0.3042 | 0.5683 | 0.6332 | 0.7072 | 0.759 | 0.4509 | 0.4434 | 0.5523142857|
| gliclass-base-v1.0 | 8 | 0.3387 | 0.7361 | 0.7059 | 0.7456 | 0.7896 | 0.4323 | 0.4802 | 0.6040571429|
| gliclass-large-v1.0 | 8 | 0.4365 | 0.9018 | 0.77 | 0.8533 | 0.8509 | 0.5061 | 0.4935 | 0.6874428571|
## Citation
```bibtex
@misc{stepanov2025gliclassgeneralistlightweightmodel,
title={GLiClass: Generalist Lightweight Model for Sequence Classification Tasks},
author={Ihor Stepanov and Mykhailo Shtopko and Dmytro Vodianytskyi and Oleksandr Lukashov and Alexander Yavorskyi and Mykyta Yaroshenko},
year={2025},
eprint={2508.07662},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.07662},
}
```
|
DougGran/CyberSeek-8B
|
DougGran
| 2025-08-12T14:30:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-12T13:43:36Z |
---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** DougGran
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
OscarGD6/qwen2-vl-asr-lora
|
OscarGD6
| 2025-08-12T14:29:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T14:24:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
knowledgator/gliclass-large-v1.0-init
|
knowledgator
| 2025-08-12T14:28:26Z | 91 | 14 |
transformers
|
[
"transformers",
"onnx",
"safetensors",
"GLiClass",
"text classification",
"zero-shot",
"small language models",
"RAG",
"sentiment analysis",
"zero-shot-classification",
"en",
"dataset:MoritzLaurer/synthetic_zeroshot_mixtral_v0.1",
"arxiv:2508.07662",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2024-06-03T20:04:04Z |
---
license: apache-2.0
datasets:
- MoritzLaurer/synthetic_zeroshot_mixtral_v0.1
language:
- en
metrics:
- f1
pipeline_tag: zero-shot-classification
tags:
- text classification
- zero-shot
- small language models
- RAG
- sentiment analysis
---
# ⭐ GLiClass: Generalist and Lightweight Model for Sequence Classification
This is an efficient zero-shot classifier inspired by [GLiNER](https://github.com/urchade/GLiNER/tree/main) work. It demonstrates the same performance as a cross-encoder while being more compute-efficient because classification is done in a single forward pass.
It can be used for `topic classification`, `sentiment analysis` and as a reranker in `RAG` pipelines.
The model was trained on synthetic data and can be used in commercial applications.
This model wasn't additionally fine-tuned on any dataset except the initial one (MoritzLaurer/synthetic_zeroshot_mixtral_v0.1).
### How to use:
First of all, you need to install GLiClass library:
```bash
pip install gliclass
```
Then you need to initialize a model and a pipeline:
```python
from gliclass import GLiClassModel, ZeroShotClassificationPipeline
from transformers import AutoTokenizer
model = GLiClassModel.from_pretrained("knowledgator/gliclass-large-v1.0-init")
tokenizer = AutoTokenizer.from_pretrained("knowledgator/gliclass-large-v1.0-init")
pipeline = ZeroShotClassificationPipeline(model, tokenizer, classification_type='multi-label', device='cuda:0')
text = "One day I will see the world!"
labels = ["travel", "dreams", "sport", "science", "politics"]
results = pipeline(text, labels, threshold=0.5)[0]  # because we pass a single text
for result in results:
print(result["label"], "=>", result["score"])
```
### Benchmarks:
Below, you can see the F1 scores on several text classification datasets. None of the tested models were fine-tuned on those datasets; all were evaluated in a zero-shot setting.
| Model | IMDB | AG_NEWS | Emotions |
|-----------------------------|------|---------|----------|
| [gliclass-large-v1.0 (438 M)](https://huggingface.co/knowledgator/gliclass-large-v1.0) | 0.9404 | 0.7516 | 0.4874 |
| [gliclass-base-v1.0 (186 M)](https://huggingface.co/knowledgator/gliclass-base-v1.0) | 0.8650 | 0.6837 | 0.4749 |
| [gliclass-small-v1.0 (144 M)](https://huggingface.co/knowledgator/gliclass-small-v1.0) | 0.8650 | 0.6805 | 0.4664 |
| [Bart-large-mnli (407 M)](https://huggingface.co/facebook/bart-large-mnli) | 0.89 | 0.6887 | 0.3765 |
| [Deberta-base-v3 (184 M)](https://huggingface.co/cross-encoder/nli-deberta-v3-base) | 0.85 | 0.6455 | 0.5095 |
| [Comprehendo (184M)](https://huggingface.co/knowledgator/comprehend_it-base) | 0.90 | 0.7982 | 0.5660 |
| SetFit [BAAI/bge-small-en-v1.5 (33.4M)](https://huggingface.co/BAAI/bge-small-en-v1.5) | 0.86 | 0.5636 | 0.5754 |
## Citation
```bibtex
@misc{stepanov2025gliclassgeneralistlightweightmodel,
title={GLiClass: Generalist Lightweight Model for Sequence Classification Tasks},
author={Ihor Stepanov and Mykhailo Shtopko and Dmytro Vodianytskyi and Oleksandr Lukashov and Alexander Yavorskyi and Mykyta Yaroshenko},
year={2025},
eprint={2508.07662},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.07662},
}
```
|
mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic-GGUF
|
mradermacher
| 2025-08-12T14:28:21Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"grpo",
"open-r1",
"trl",
"en",
"dataset:AIML-TUDA/SLR-Bench",
"base_model:leonMW/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic",
"base_model:quantized:leonMW/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-12T14:22:19Z |
---
base_model: leonMW/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic
datasets: AIML-TUDA/SLR-Bench
language:
- en
library_name: transformers
model_name: DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- generated_from_trainer
- grpo
- open-r1
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/leonMW/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
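For instance, one of the quants below can be fetched and run locally. An illustrative sketch assuming the huggingface_hub and llama-cpp-python packages (any GGUF-capable runtime works similarly):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo and run a short completion with it.
path = hf_hub_download(
    repo_id="mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic-GGUF",
    filename="DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Hello", max_tokens=64)["choices"][0]["text"])
```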
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic.Q5_K_S.gguf) | Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic.Q6_K.gguf) | Q6_K | 1.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic.Q8_0.gguf) | Q8_0 | 2.0 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-GSPO-Basic.f16.gguf) | f16 | 3.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ultramit/blockassist-bc-sturdy_mute_opossum_1755008847
|
ultramit
| 2025-08-12T14:28:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sturdy mute opossum",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T14:28:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sturdy mute opossum
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koloni/blockassist-bc-deadly_graceful_stingray_1755006891
|
koloni
| 2025-08-12T14:22:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T14:22:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755007398
|
Sayemahsjn
| 2025-08-12T14:21:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T14:21:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/TalkT2-0.1b-GGUF
|
mradermacher
| 2025-08-12T14:20:11Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"code",
"conversational",
"philosophical",
"poetic",
"experimental",
"small",
"best",
"chatbot",
"en",
"base_model:Notbobjoe/TalkT2-0.1b",
"base_model:quantized:Notbobjoe/TalkT2-0.1b",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T14:19:17Z |
---
base_model: Notbobjoe/TalkT2-0.1b
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- code
- conversational
- transformers
- philosophical
- poetic
- experimental
- small
- best
- chatbot
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Notbobjoe/TalkT2-0.1b
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#TalkT2-0.1b-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TalkT2-0.1b-GGUF/resolve/main/TalkT2-0.1b.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/TalkT2-0.1b-GGUF/resolve/main/TalkT2-0.1b.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/TalkT2-0.1b-GGUF/resolve/main/TalkT2-0.1b.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TalkT2-0.1b-GGUF/resolve/main/TalkT2-0.1b.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/TalkT2-0.1b-GGUF/resolve/main/TalkT2-0.1b.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TalkT2-0.1b-GGUF/resolve/main/TalkT2-0.1b.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/TalkT2-0.1b-GGUF/resolve/main/TalkT2-0.1b.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TalkT2-0.1b-GGUF/resolve/main/TalkT2-0.1b.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/TalkT2-0.1b-GGUF/resolve/main/TalkT2-0.1b.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/TalkT2-0.1b-GGUF/resolve/main/TalkT2-0.1b.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/TalkT2-0.1b-GGUF/resolve/main/TalkT2-0.1b.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/TalkT2-0.1b-GGUF/resolve/main/TalkT2-0.1b.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Atotti/Google-USM
|
Atotti
| 2025-08-12T14:19:35Z | 683 | 15 |
transformers
|
[
"transformers",
"safetensors",
"gemma3n_audio",
"feature-extraction",
"arxiv:2303.01037",
"license:gemma",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-07-02T23:55:53Z |
---
library_name: transformers
license: gemma
---
# Google USM: Extracted Gemma-3n Audio Encoder (USM)
> [!Note]
> The exact nature of this model is unclear. [Introducing Gemma 3n: The developer guide](https://developers.googleblog.com/en/introducing-gemma-3n-developer-guide/#:~:text=Gemma%203n%20uses%20an%20advanced%20audio%20encoder%20based%20on%20the%20Universal%20Speech%20Model%20(USM).) states that
> an encoder based on USM is used, but there are several differences between the USM paper and this model.
> This model has 0.6B parameters, but its number of layers differs from the 0.6B model in the USM paper.
> This model is the AudioEncoder of Gemma 3n and may differ from the original USM.
## Model Description
This model is the audio encoder (`audio_tower`) extracted from Google's multimodal model [google/gemma-3n-e2b-it](https://huggingface.co/google/gemma-3n-e2b-it).
bf16 version: https://huggingface.co/Atotti/google-usm-bf16
The architecture is Gemma3nAudioEncoder, based on the paper [Universal Speech Model](https://arxiv.org/abs/2303.01037).
This encoder takes audio waveform data and converts it into a sequence of high-dimensional features (encodings) that represent its content.
## Intended Use
This model does not perform speech recognition (transcription) or similar tasks on its own; it is intended to be used as a component of a larger model.
* As the audio input module of a multimodal model: extracts features that feed audio information into a generative model.
* Audio classification: add a classification head on top of this model's output and fine-tune it for a specific audio classification task.
## How to Use
### dependencies
```
pip install transformers==4.53.0
```
```python
import torch
import soundfile as sf
from transformers import Gemma3nAudioEncoder, Gemma3nAudioFeatureExtractor
encoder_id = "Atotti/google-usm"
source_model_id = "google/gemma-3n-e2b-it"
audio_encoder = Gemma3nAudioEncoder.from_pretrained(encoder_id)
feature_extractor = Gemma3nAudioFeatureExtractor.from_pretrained(source_model_id)
device = "cuda" if torch.cuda.is_available() else "cpu"
audio_encoder.to(device)
audio_encoder.eval()
waveform, sampling_rate = sf.read("/path/to/your_audio_file.wav")
inputs = feature_extractor(
[waveform],
sampling_rate=sampling_rate,
return_tensors="pt"
)
audio_mel = inputs["input_features"].to(device)
audio_mel_mask = (inputs["input_features_mask"] == 0).to(device)
with torch.inference_mode():
audio_encodings, output_mask = audio_encoder(
audio_mel=audio_mel,
audio_mel_mask=audio_mel_mask
)
print(audio_encodings.shape) # torch.Size([1, 18, 1536])
print(audio_encodings[0, :5, :10])
# tensor([[ 0.0014, -0.0044, 0.0003, 0.0084, -0.0076, -0.0194, 0.0071, 0.0160,
# 0.0137, 0.0146],
# [-0.0153, 0.0051, 0.0111, -0.0134, -0.0032, -0.0134, 0.0112, -0.0163,
# 0.0050, 0.0036],
# [ 0.0003, -0.0022, 0.0164, -0.0090, -0.0033, -0.0043, 0.0030, -0.0042,
# -0.0060, 0.0066],
# [-0.0006, -0.0194, -0.0006, -0.0097, -0.0049, -0.0132, 0.0012, 0.0175,
# -0.0242, -0.0091],
# [ 0.0127, 0.0122, 0.0125, 0.0277, 0.0116, 0.0152, 0.0142, -0.0099,
# -0.0080, -0.0233]], device='cuda:0')
```
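For the audio-classification use case listed under Intended Use, a minimal head on top of the encoder could look like the following sketch, reusing `audio_encoder` from the snippet above (the pooling and classification head are assumptions for illustration, not part of this repository):
```python
import torch.nn as nn

class AudioClassifier(nn.Module):
    def __init__(self, encoder, num_classes):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(1536, num_classes)  # encoder hidden size is 1536

    def forward(self, audio_mel, audio_mel_mask):
        encodings, _ = self.encoder(audio_mel=audio_mel, audio_mel_mask=audio_mel_mask)
        pooled = encodings.mean(dim=1)  # simple mean pooling over time; ignores padding for brevity
        return self.head(pooled)
```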
## Model Architecture
```
Gemma3nAudioEncoder(
(subsample_conv_projection): Gemma3nAudioSubSampleConvProjection(
(conv_0): Gemma3nAudioSSCPConvBlock(
(conv): Conv2d(1, 128, kernel_size=(3, 3), stride=(2, 2), bias=False)
(norm): Gemma3nAudioCumulativeGroupNorm()
(activation): ReLU()
)
(conv_1): Gemma3nAudioSSCPConvBlock(
(conv): Conv2d(128, 32, kernel_size=(3, 3), stride=(2, 2), bias=False)
(norm): Gemma3nAudioCumulativeGroupNorm()
(activation): ReLU()
)
(input_proj_linear): Linear(in_features=1024, out_features=1536, bias=False)
)
(conformer): ModuleList(
(0-11): 12 x Gemma3nAudioConformerBlock(
(ffw_layer_start): Gemma3nAudioConformerFeedForward(
(pre_layer_norm): Gemma3nRMSNorm((1536,), eps=1e-06)
(ffw_layer_1): Linear(in_features=1536, out_features=6144, bias=False)
(ffw_layer_2): Linear(in_features=6144, out_features=1536, bias=False)
(post_layer_norm): Gemma3nRMSNorm((1536,), eps=1e-06)
)
(attention): Gemma3nAudioConformerAttention(
(pre_attn_norm): Gemma3nRMSNorm((1536,), eps=1e-06)
(attn): Gemma3nAudioAttention(
(relative_position_embedding): Gemma3nAudioRelativePositionEmbedding(
(pos_proj): Linear(in_features=1536, out_features=1536, bias=False)
)
(q_proj): Linear(in_features=1536, out_features=1536, bias=False)
(k_proj): Linear(in_features=1536, out_features=1536, bias=False)
(v_proj): Linear(in_features=1536, out_features=1536, bias=False)
)
(post): Linear(in_features=1536, out_features=1536, bias=False)
(post_norm): Gemma3nRMSNorm((1536,), eps=1e-06)
)
(lconv1d): Gemma3nAudioConformerLightConv1d(
(pre_layer_norm): Gemma3nRMSNorm((1536,), eps=1e-06)
(linear_start): Linear(in_features=1536, out_features=3072, bias=False)
(depthwise_conv1d): Conv1d(1536, 1536, kernel_size=(5,), stride=(1,), groups=1536, bias=False)
(conv_norm): Gemma3nRMSNorm((1536,), eps=1e-06)
(linear_end): Linear(in_features=1536, out_features=1536, bias=False)
)
(ffw_layer_end): Gemma3nAudioConformerFeedForward(
(pre_layer_norm): Gemma3nRMSNorm((1536,), eps=1e-06)
(ffw_layer_1): Linear(in_features=1536, out_features=6144, bias=False)
(ffw_layer_2): Linear(in_features=6144, out_features=1536, bias=False)
(post_layer_norm): Gemma3nRMSNorm((1536,), eps=1e-06)
)
(norm): Gemma3nRMSNorm((1536,), eps=1e-06)
)
)
)
```
|
pimplefeet/omega_n7mePND
|
pimplefeet
| 2025-08-12T14:19:32Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-12T14:19:32Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
tootshine/omega_Gb4Y1xk
|
tootshine
| 2025-08-12T14:19:27Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-12T14:19:27Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
pimplefeet/omega_kyo8HeP
|
pimplefeet
| 2025-08-12T14:19:25Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-12T14:19:25Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
roachkins/omega_1rlAfJA
|
roachkins
| 2025-08-12T14:19:21Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-12T14:19:20Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
diogo-ferreira-2002/test_model
|
diogo-ferreira-2002
| 2025-08-12T14:18:23Z | 0 | 0 | null |
[
"license:bsd-3-clause",
"region:us"
] | null | 2025-08-12T14:08:49Z |
---
license: bsd-3-clause
---
|
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755006517
|
calegpedia
| 2025-08-12T14:17:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stealthy slimy rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T14:17:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vengky/blockassist-bc-wild_gentle_manatee_1755005976
|
vengky
| 2025-08-12T14:11:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild gentle manatee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T14:11:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild gentle manatee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
stickman0000/whisper-large-v2-nan-100steps
|
stickman0000
| 2025-08-12T14:11:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T14:10:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
taeuk1/codebert-juliet-v1
|
taeuk1
| 2025-08-12T14:10:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/codebert-base",
"base_model:finetune:microsoft/codebert-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-12T14:10:08Z |
---
library_name: transformers
base_model: microsoft/codebert-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: codebert-juliet-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codebert-juliet-v1
This model is a fine-tuned version of [microsoft/codebert-base](https://huggingface.co/microsoft/codebert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
- F1: 1.0
- Roc Auc: 1.0
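A minimal inference sketch for this checkpoint (illustrative; the card does not document the dataset or label semantics, so treat the output labels as raw training ids):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="taeuk1/codebert-juliet-v1")
print(classifier("char buf[10]; strcpy(buf, user_input);"))  # example C snippet
```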
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.7.0+cu128
- Datasets 4.0.0
- Tokenizers 0.19.1
|
hasinur95/blockassist-bc-solitary_flightless_gorilla_1755007746
|
hasinur95
| 2025-08-12T14:10:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"solitary flightless gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T14:09:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- solitary flightless gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Tobivictor/dialogpt-finetuned-cancer
|
Tobivictor
| 2025-08-12T14:06:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-12T14:06:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
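In the absence of author-provided code, a generic sketch (this assumes standard GPT-2-style generation; the fine-tune's expected prompt format is undocumented):
```python
from transformers import pipeline

chat = pipeline("text-generation", model="Tobivictor/dialogpt-finetuned-cancer")
print(chat("What are common early symptoms?", max_new_tokens=64)[0]["generated_text"])
```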
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mkiani/gemma-3-reasoning
|
mkiani
| 2025-08-12T14:05:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3_text",
"trl",
"en",
"base_model:unsloth/gemma-3-1b-it",
"base_model:finetune:unsloth/gemma-3-1b-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T14:05:29Z |
---
base_model: unsloth/gemma-3-1b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mkiani
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
silentember/Lantern_Sbn47A
|
silentember
| 2025-08-12T14:04:43Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-12T14:02:42Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Userb1az/gpt-oss-120b-GGUF
|
Userb1az
| 2025-08-12T14:04:26Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"vllm",
"text-generation",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-12T11:17:17Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- vllm
---
<p align="center">
<img alt="gpt-oss-120b" src="https://raw.githubusercontent.com/openai/gpt-oss/main/docs/gpt-oss-120b.svg">
</p>
<p align="center">
<a href="https://gpt-oss.com"><strong>Try gpt-oss</strong></a> ·
<a href="https://cookbook.openai.com/topic/gpt-oss"><strong>Guides</strong></a> ·
<a href="https://openai.com/index/gpt-oss-model-card"><strong>Model card</strong></a> ·
<a href="https://openai.com/index/introducing-gpt-oss/"><strong>OpenAI blog</strong></a>
</p>
<br>
Welcome to the gpt-oss series, [OpenAI’s open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases.
We’re releasing two flavors of these open models:
- `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single 80GB GPU (like NVIDIA H100 or AMD MI300X) (117B parameters with 5.1B active parameters)
- `gpt-oss-20b` — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters)
Both models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format as it will not work correctly otherwise.
> [!NOTE]
> This model card is dedicated to the larger `gpt-oss-120b` model. Check out [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) for the smaller model.
# Highlights
* **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment.
* **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs.
* **Full chain-of-thought:** Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. It’s not intended to be shown to end users.
* **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning.
* **Agentic capabilities:** Use the models’ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs.
* **Native MXFP4 quantization:** The models are trained with native MXFP4 precision for the MoE layer, making `gpt-oss-120b` run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and the `gpt-oss-20b` model run within 16GB of memory.
---
# Inference examples
## Transformers
You can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or use our [openai-harmony](https://github.com/openai/harmony) package.
To get started, install the necessary dependencies to set up your environment:
```
pip install -U transformers kernels torch
```
Once set up, you can proceed to run the model with the snippet below:
```py
from transformers import pipeline
import torch
model_id = "openai/gpt-oss-120b"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype="auto",
device_map="auto",
)
messages = [
{"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
Alternatively, you can run the model via [`Transformers Serve`](https://huggingface.co/docs/transformers/main/serving) to spin up an OpenAI-compatible webserver:
```
transformers serve
transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-120b
```
[Learn more about how to use gpt-oss with Transformers.](https://cookbook.openai.com/articles/gpt-oss/run-transformers)
## vLLM
vLLM recommends using [uv](https://docs.astral.sh/uv/) for Python dependency management. You can use vLLM to spin up an OpenAI-compatible webserver. The following command will automatically download the model and start the server.
```bash
uv pip install --pre vllm==0.10.1+gptoss \
--extra-index-url https://wheels.vllm.ai/gpt-oss/ \
--extra-index-url https://download.pytorch.org/whl/nightly/cu128 \
--index-strategy unsafe-best-match
vllm serve openai/gpt-oss-120b
```
[Learn more about how to use gpt-oss with vLLM.](https://cookbook.openai.com/articles/gpt-oss/run-vllm)
## PyTorch / Triton
To learn about how to use this model with PyTorch and Triton, check out our [reference implementations in the gpt-oss repository](https://github.com/openai/gpt-oss?tab=readme-ov-file#reference-pytorch-implementation).
## Ollama
If you are trying to run gpt-oss on consumer hardware, you can use Ollama by running the following commands after [installing Ollama](https://ollama.com/download).
```bash
# gpt-oss-120b
ollama pull gpt-oss:120b
ollama run gpt-oss:120b
```
[Learn more about how to use gpt-oss with Ollama.](https://cookbook.openai.com/articles/gpt-oss/run-locally-ollama)
## LM Studio
If you are using [LM Studio](https://lmstudio.ai/) you can use the following commands to download.
```bash
# gpt-oss-120b
lms get openai/gpt-oss-120b
```
Check out our [awesome list](https://github.com/openai/gpt-oss/blob/main/awesome-gpt-oss.md) for a broader collection of gpt-oss resources and inference partners.
---
# Download the model
You can download the model weights from the [Hugging Face Hub](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4) directly from Hugging Face CLI:
```shell
# gpt-oss-120b
huggingface-cli download openai/gpt-oss-120b --include "original/*" --local-dir gpt-oss-120b/
pip install gpt-oss
python -m gpt_oss.chat model/
```
# Reasoning levels
You can adjust the reasoning level that suits your task across three levels:
* **Low:** Fast responses for general dialogue.
* **Medium:** Balanced speed and detail.
* **High:** Deep and detailed analysis.
The reasoning level can be set in the system prompts, e.g., "Reasoning: high".
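Reusing the `pipe` from the Transformers example above, that looks like this (a minimal sketch):
```py
messages = [
    {"role": "system", "content": "Reasoning: high"},  # low / medium / high
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
print(pipe(messages, max_new_tokens=512)[0]["generated_text"][-1])
```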
# Tool use
The gpt-oss models are excellent for:
* Web browsing (using built-in browsing tools)
* Function calling with defined schemas
* Agentic operations like browser tasks
# Fine-tuning
Both gpt-oss models can be fine-tuned for a variety of specialized use cases.
This larger model `gpt-oss-120b` can be fine-tuned on a single H100 node, whereas the smaller [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) can even be fine-tuned on consumer hardware.
|
mahmoudOmar03/writing_task2
|
mahmoudOmar03
| 2025-08-12T14:03:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-14B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-14B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T14:03:09Z |
---
base_model: unsloth/Qwen3-14B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mahmoudOmar03
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-14B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
WenFengg/cold14_l4_v1_plus_12_8
|
WenFengg
| 2025-08-12T14:02:10Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-12T09:23:40Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
FatimahEmadEldin/Open-Track-Document-Bassline-Readability-Arabertv2-d3tok-reg
|
FatimahEmadEldin
| 2025-08-12T14:01:58Z | 0 | 0 | null |
[
"safetensors",
"bert",
"ar",
"dataset:CAMeL-Lab/BAREC-Shared-Task-2025-sent",
"base_model:CAMeL-Lab/readability-arabertv2-d3tok-reg",
"base_model:finetune:CAMeL-Lab/readability-arabertv2-d3tok-reg",
"region:us"
] | null | 2025-08-12T13:54:25Z |
---
datasets:
- CAMeL-Lab/BAREC-Shared-Task-2025-sent
language:
- ar
base_model:
- aubmindlab/bert-base-arabertv2
- CAMeL-Lab/readability-arabertv2-d3tok-reg
---
# MorphoArabia at BAREC 2025 Shared Task: A Hybrid Architecture with Morphological Analysis for Arabic Readability Assessment
<p align="center">
<img src="https://placehold.co/800x200/dbeafe/3b82f6?text=Barec-Readability-Assessment" alt="Barec Readability Assessment">
</p>
This repository contains the official models and results for **MorphoArabia**, the submission to the **[BAREC 2025 Shared Task](https://sites.google.com/view/barec-2025/home)** on Arabic Readability Assessment.
#### By: [Fatimah Mohamed Emad Elden](https://scholar.google.com/citations?user=CfX6eA8AAAAJ&hl=ar)
#### *Cairo University*
[](https://arxiv.org/abs/25XX.XXXXX)
[](https://github.com/astral-fate/barec-Arabic-Readability-Assessment)
[](https://huggingface.co/collections/FatimahEmadEldin/barec-shared-task-2025-689195853f581b9a60f9bd6c)
[](https://github.com/astral-fate/mentalqa2025/blob/main/LICENSE)
---
## Model Description
This project introduces a **morphologically-aware approach** for assessing the readability of Arabic text. The system is built around a fine-tuned regression model designed to process morphologically analyzed text. For the **Constrained** and **Open** tracks of the shared task, this core model is extended into a hybrid architecture that incorporates seven engineered lexical features.
A key element of this system is its deep morphological preprocessing pipeline, which uses the **CAMEL Tools d3tok analyzer**. This allows the model to capture linguistic complexities that are often missed by surface-level tokenization methods. This approach proved to be highly effective, achieving a peak **Quadratic Weighted Kappa (QWK) score of 84.2** on the strict sentence-level test set.
The model predicts a readability score on a **19-level scale**, from 1 (easiest) to 19 (hardest), for a given Arabic sentence or document.
-----
## 🚀 How to Use
You can use the fine-tuned models directly with the `transformers` library pipeline for `text-regression`. The following example uses the best-performing model from the **Strict** track.
```python
from transformers import pipeline
# Load the regression pipeline
# This model is the best performer for the Strict track
# It's also the base model for the other tracks.
regressor = pipeline(
"text-regression",
model="FatimahEmadEldin/MorphoArabia-CAMEL-BERT-BAREC-Strict-Sentence"
)
# Example sentence in Arabic
sentence = "أليست هذه العاطفة التي نخافها ونرتجف لمرورها في صدورنا جزءا من الناموس الكلي"
# (Translation: "Isn't this emotion, which we fear and tremble at its passing in the chests, a part of the universal law?")
# Get the readability score
results = regressor(sentence)
# The output is a score between 1 and 19
predicted_score = results[0]['score']
print(f"Sentence: {sentence}")
print(f"Predicted Readability Score: {predicted_score:.2f}")
```
-----
## ⚙️ Training Procedure
The system employs two distinct architectures based on the track's constraints:
* **Strict Track**: This track uses a base regression model, `CAMeL-Lab/readability-arabertv2-d3tok-reg`, fine-tuned directly on the BAREC dataset.
* **Constrained and Open Tracks**: These tracks utilize a hybrid model. This architecture combines the deep contextual understanding of the Transformer with explicit numerical features. The final representation for a sentence is created by concatenating the Transformer's `[CLS]` token embedding with a 7-dimensional vector of engineered lexical features derived from the SAMER lexicon.
A critical component of the system is its preprocessing pipeline, which leverages the CAMEL Tools `d3tok` format. The `d3tok` analyzer performs a deep morphological analysis by disambiguating words in context and then segmenting them into their constituent morphemes.
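A minimal sketch of that hybrid forward pass (illustrative only; the module and feature names here are assumptions, not the released training code):
```python
import torch
import torch.nn as nn
from transformers import AutoModel

class HybridReadabilityRegressor(nn.Module):
    def __init__(self, base="CAMeL-Lab/readability-arabertv2-d3tok-reg", n_lex=7):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(base)
        hidden = self.encoder.config.hidden_size
        self.head = nn.Linear(hidden + n_lex, 1)  # [CLS] embedding + 7 SAMER lexicon features

    def forward(self, input_ids, attention_mask, lex_features):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] token embedding
        return self.head(torch.cat([cls, lex_features], dim=-1)).squeeze(-1)
```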
### Frameworks
* PyTorch
* Hugging Face Transformers
-----
### 📊 Evaluation Results
The models were evaluated on the blind test set provided by the BAREC organizers. The primary metric for evaluation is the **Quadratic Weighted Kappa (QWK)**, which penalizes larger disagreements more severely.
#### Final Test Set Scores (QWK)
| Track | Task | Dev (QWK) | Test (QWK) |
| :--- | :--- | :---: | :---: |
| **Strict** | Sentence | 0.823 | **84.2** |
| | Document | 0.823\* | 79.9 |
| **Constrained** | Sentence | 0.810 | 82.9 |
| | Document | 0.835\* | 75.5 |
| **Open** | Sentence | 0.827 | 83.6 |
| | Document | 0.827\* | **79.2** |
\*Document-level dev scores are based on the performance of the sentence-level model on the validation set.
-----
## 📜 Citation
If you use this work, please cite the paper:
```
@inproceedings{eldin2025morphoarabia,
  title={{MorphoArabia at BAREC 2025 Shared Task: A Hybrid Architecture with Morphological Analysis for Arabic Readability Assessment}},
author={Eldin, Fatimah Mohamed Emad},
year={2025},
booktitle={Proceedings of the BAREC 2025 Shared Task},
eprint={25XX.XXXXX},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
mradermacher/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO-GGUF
|
mradermacher
| 2025-08-12T14:00:08Z | 708 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:LLM-Compe-2025-Camino/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO",
"base_model:quantized:LLM-Compe-2025-Camino/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-12T01:58:09Z |
---
base_model: LLM-Compe-2025-Camino/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/LLM-Compe-2025-Camino/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO-GGUF/resolve/main/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO.Q2_K.gguf) | Q2_K | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO-GGUF/resolve/main/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO.Q3_K_S.gguf) | Q3_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO-GGUF/resolve/main/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO.Q3_K_M.gguf) | Q3_K_M | 7.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO-GGUF/resolve/main/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO-GGUF/resolve/main/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO.IQ4_XS.gguf) | IQ4_XS | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO-GGUF/resolve/main/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO.Q4_K_S.gguf) | Q4_K_S | 8.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO-GGUF/resolve/main/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO.Q4_K_M.gguf) | Q4_K_M | 9.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO-GGUF/resolve/main/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO.Q5_K_S.gguf) | Q5_K_S | 10.3 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO-GGUF/resolve/main/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO.Q5_K_M.gguf) | Q5_K_M | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO-GGUF/resolve/main/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO.Q6_K.gguf) | Q6_K | 12.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO-GGUF/resolve/main/Phi-4-reasoning-plus_Nemotron-Reasoning-0803_SFT-GRPO.Q8_0.gguf) | Q8_0 | 15.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
JorisCos/q-Taxi-v3
|
JorisCos
| 2025-08-12T13:58:42Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-12T13:58:39Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="JorisCos/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
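The snippet above assumes the `load_from_hub` helper from the Hugging Face Deep RL course; a minimal equivalent looks like this (a sketch, not the course's exact utility):
```python
import pickle

import gymnasium as gym  # `import gym` on older setups
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-table dict from the Hub and load it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="JorisCos/q-Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])
```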
|
mulliken/cwe-predictor
|
mulliken
| 2025-08-12T13:56:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"security",
"cyber-security",
"CWE",
"vulnerability-classification",
"cve",
"en",
"dataset:zefang-liu/cve-and-cwe-mapping-dataset",
"arxiv:1910.09700",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-12T12:12:59Z |
---
library_name: transformers
tags:
- security
- cyber-security
- CWE
- vulnerability-classification
- cve
license: apache-2.0
datasets:
- zefang-liu/cve-and-cwe-mapping-dataset
language:
- en
metrics:
- accuracy
- f1
base_model:
- distilbert/distilbert-base-uncased
pipeline_tag: text-classification
model-index:
- name: cwe-predictor
results:
- task:
type: text-classification
name: CWE Classification
metrics:
- type: accuracy
value: 0.727207
name: Validation Accuracy
- type: f1
value: 0.251264
name: Macro F1 Score
---
# CWE Predictor - Vulnerability Classification Model
This model classifies vulnerability descriptions into Common Weakness Enumeration (CWE) categories. It's designed to help security professionals and developers quickly identify the type of vulnerability based on textual descriptions.
## Model Details
### Model Description
This is a fine-tuned DistilBERT model that predicts CWE (Common Weakness Enumeration) categories from vulnerability descriptions. The model was trained on a comprehensive dataset of CVE descriptions mapped to their corresponding CWE identifiers.
**Key Features:**
- Classifies vulnerabilities into 232 distinct CWE categories
- Trained on 111,640 vulnerability descriptions
- Achieves 72.72% accuracy on validation set
- Macro F1 score of 0.251, averaging performance equally across all classes
- Lightweight and fast inference using DistilBERT architecture
- **Developed by:** [mulliken](https://huggingface.co/mulliken)
- **Model type:** DistilBERT (Transformer-based classifier)
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased)
### Model Sources
- **Hugging Face Model:** [mulliken/cwe-predictor](https://huggingface.co/mulliken/cwe-predictor)
- **Dataset:** [CVE and CWE Mapping Dataset](https://huggingface.co/datasets/zefang-liu/cve-and-cwe-mapping-dataset)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
This model can be used directly for:
- **Vulnerability Triage:** Automatically classify security vulnerabilities reported in bug bounty programs or security audits
- **Security Analysis:** Categorize CVE descriptions to understand vulnerability patterns
- **Automated Security Reporting:** Generate CWE classifications for vulnerability reports
- **Security Research:** Analyze trends in vulnerability types across codebases
### Downstream Use
The model can be integrated into:
- Security scanning tools and SAST/DAST platforms
- Vulnerability management systems
- Security information and event management (SIEM) systems
- DevSecOps pipelines for automated vulnerability classification
### Out-of-Scope Use
This model should NOT be used for:
- Medical or safety-critical systems without additional validation
- As the sole method for security assessment (should complement human expertise)
- Classifying non-English vulnerability descriptions
- Real-time security detection (model is designed for post-discovery classification)
## Bias, Risks, and Limitations
### Known Limitations
- **Class Imbalance:** Some CWE categories are underrepresented in the training data, which may lead to lower accuracy for rare vulnerability types
- **Temporal Bias:** Model trained on historical CVE data may not recognize newer vulnerability patterns
- **Language Limitation:** Only trained on English descriptions
- **Context Loss:** Limited to 512 tokens, longer descriptions are truncated
### Risks
- False negatives could lead to unidentified security vulnerabilities
- Should not replace human security expertise
- May not generalize well to proprietary or domain-specific vulnerability descriptions
### Recommendations
- Always use this model as a supplementary tool alongside human security expertise
- Validate predictions for critical security decisions
- Consider retraining or fine-tuning for domain-specific applications
- Monitor model performance over time as new vulnerability types emerge
## How to Get Started with the Model
### Installation
```bash
pip install transformers torch
```
### Quick Start
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
# Load model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained("mulliken/cwe-predictor")
tokenizer = AutoTokenizer.from_pretrained("mulliken/cwe-predictor")
# Prediction function
def predict_cwe(text: str) -> str:
encoded = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
logits = model(**encoded).logits
pred_id = torch.argmax(logits, dim=-1).item()
return model.config.id2label[pred_id]
# Example usage
vuln_description = "Buffer overflow in the authentication module allows remote attackers to execute arbitrary code."
cwe_prediction = predict_cwe(vuln_description)
print(f"Predicted CWE: {cwe_prediction}")
```
### Example Predictions
```python
examples = [
"SQL injection vulnerability in login form allows attackers to bypass authentication",
"Cross-site scripting (XSS) vulnerability in comment section",
"Path traversal vulnerability allows reading arbitrary files",
"Integer overflow in image processing library causes memory corruption"
]
for desc in examples:
print(f"Description: {desc}")
print(f"Predicted CWE: {predict_cwe(desc)}\n")
```
## Training Details
### Training Data
The model was trained on the [CVE and CWE Mapping Dataset](https://huggingface.co/datasets/zefang-liu/cve-and-cwe-mapping-dataset), which contains:
- CVE descriptions from the National Vulnerability Database (NVD)
- Corresponding CWE classifications
- Dataset size: 124,045 examples after filtering
- Training set: 111,640 examples
- Validation set: 12,405 examples
- Number of CWE classes: 232 (after removing generic categories like "NVD-CWE-Other" and "NVD-CWE-noinfo")
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing
1. **Data Cleaning:**
- Removed entries with missing descriptions or CWE IDs
- Filtered out generic CWE categories ("NVD-CWE-Other", "NVD-CWE-noinfo")
- Removed CWE categories with only 1 example to ensure stratified splitting
2. **Tokenization:**
- Used DistilBERT tokenizer with max_length=512
- Applied truncation for longer descriptions
#### Training Hyperparameters
- **Learning rate:** 2e-5
- **Batch size:** 2 per device with gradient accumulation of 8 (effective batch size: 16)
- **Number of epochs:** 1
- **Weight decay:** 0.01
- **Optimizer:** AdamW
- **Training regime:** fp32 with gradient checkpointing
- **Evaluation strategy:** Every 1000 steps
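A sketch of the corresponding `TrainingArguments` (illustrative; `eval_strategy` is named `evaluation_strategy` on older Transformers releases):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="cwe-predictor",
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,  # effective batch size 16
    num_train_epochs=1,
    weight_decay=0.01,
    eval_strategy="steps",
    eval_steps=1000,
    gradient_checkpointing=True,
)
```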
#### Training Performance
- **Total training time:** ~78 minutes (4,712 seconds) per epoch
- **Training steps:** 13,956
- **Training samples per second:** 23.691
- **Final training loss:** 1.134700
- **Best validation loss:** 1.082806 (at step 6000)
- **Model size:** ~268MB
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
Validation set of 12,405 examples (10% stratified split from the training data)
#### Metrics
- **Accuracy:** Overall correctness of predictions
- **Macro F1 Score:** Unweighted mean of F1 scores for each class (ensures balanced performance across all CWE types)
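Both metrics can be computed with scikit-learn (a sketch; `y_true`/`y_pred` stand for the validation label ids and model predictions):
```python
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(y_true, y_pred):
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "macro_f1": f1_score(y_true, y_pred, average="macro"),  # unweighted mean over all 232 classes
    }
```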
### Results
| Step | Training Loss | Validation Loss | Accuracy | Macro F1 |
|------|--------------|-----------------|----------|----------|
| 1000 | 1.044600 | 1.252940 | 0.704716 | 0.220344 |
| 2000 | 1.158700 | 1.188677 | 0.711326 | 0.229855 |
| 3000 | 1.119900 | 1.159229 | 0.719226 | 0.235295 |
| 4000 | 1.112600 | 1.119924 | 0.720193 | 0.242404 |
| 5000 | 1.110300 | 1.111053 | 0.722934 | 0.244389 |
| 6000 | 1.134700 | 1.082806 | 0.727207 | 0.251264 |
#### Summary
The model achieves 72.72% accuracy on the validation set with a macro F1 score of 0.251. The relatively lower F1 score reflects the challenge of classifying across 232 different CWE categories with varying representation in the dataset.
## Model Examination
The model uses standard DistilBERT attention mechanisms to process vulnerability descriptions. Key observations:
- The model learns to identify security-related keywords and patterns
- Attention weights typically focus on vulnerability-specific terms (e.g., "overflow", "injection", "traversal")
- Performance varies by CWE category based on training data representation
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Apple Silicon (M-series chip)
- **Hours used:** ~1.3 hours
- **Cloud Provider:** Local training (no cloud provider)
- **Compute Region:** N/A (local)
- **Carbon Emitted:** Minimal (Apple Silicon is energy efficient, ~15W TDP)
## Technical Specifications [optional]
### Model Architecture and Objective
- **Base Architecture:** DistilBERT (distilbert-base-uncased)
- **Task:** Multi-class text classification
- **Number of labels:** 232 CWE categories
- **Objective:** Cross-entropy loss for sequence classification
- **Architecture modifications:** Added classification head with 232 output classes
### Compute Infrastructure
Local machine with Apple Silicon processor
#### Hardware
- **Device:** Apple Silicon (MPS backend)
- **Memory management:** PYTORCH_MPS_HIGH_WATERMARK_RATIO set to 0.0
#### Software
- **Framework:** PyTorch with Hugging Face Transformers
- **Python version:** 3.x
- **Key libraries:** transformers, torch, datasets, scikit-learn, pandas, numpy
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{mulliken2024cwepredictor,
author = {mulliken},
title = {CWE Predictor: A DistilBERT Model for Vulnerability Classification},
year = {2024},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/mulliken/cwe-predictor}}
}
```
## Glossary
- **CWE (Common Weakness Enumeration):** A community-developed list of software and hardware weakness types
- **CVE (Common Vulnerabilities and Exposures):** A list of publicly disclosed cybersecurity vulnerabilities
- **NVD (National Vulnerability Database):** U.S. government repository of vulnerability management data
- **Macro F1:** The unweighted mean of F1 scores calculated for each class independently
- **SAST/DAST:** Static/Dynamic Application Security Testing
## More Information
For questions, issues, or contributions, please visit the [Hugging Face model page](https://huggingface.co/mulliken/cwe-predictor).
## Model Card Authors
- [mulliken](https://huggingface.co/mulliken)
## Model Card Contact
Please use the Hugging Face model repository's discussion section for questions and feedback: [mulliken/cwe-predictor](https://huggingface.co/mulliken/cwe-predictor/discussions)
|
rubennode/blockassist-bc-tall_foraging_chicken_1755006872
|
rubennode
| 2025-08-12T13:55:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall foraging chicken",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T13:55:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall foraging chicken
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aiface/vi-bert-base_v1
|
aiface
| 2025-08-12T13:55:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:FPTAI/vibert-base-cased",
"base_model:finetune:FPTAI/vibert-base-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-12T13:34:58Z |
---
library_name: transformers
base_model: FPTAI/vibert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vi-bert-base_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vi-bert-base_v1
This model is a fine-tuned version of [FPTAI/vibert-base-cased](https://huggingface.co/FPTAI/vibert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4995
- Accuracy: 0.9292
- Precision Macro: 0.8368
- Recall Macro: 0.7769
- F1 Macro: 0.8000
- F1 Weighted: 0.9259
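A minimal inference sketch for this checkpoint (illustrative; the label names and task semantics are not documented in this card, so treat the output labels as the raw training ids):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="aiface/vi-bert-base_v1")
print(classifier("Thời tiết hôm nay rất đẹp."))  # "The weather is very nice today."
```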
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision Macro | Recall Macro | F1 Macro | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:------------:|:--------:|:-----------:|
| 0.5604 | 1.0 | 90 | 0.2596 | 0.9128 | 0.9000 | 0.6666 | 0.6788 | 0.8972 |
| 0.2258 | 2.0 | 180 | 0.2182 | 0.9286 | 0.8216 | 0.8017 | 0.8109 | 0.9275 |
| 0.1532 | 3.0 | 270 | 0.2312 | 0.9198 | 0.7940 | 0.7902 | 0.7919 | 0.9195 |
| 0.123 | 4.0 | 360 | 0.2432 | 0.9311 | 0.8607 | 0.8000 | 0.8238 | 0.9286 |
| 0.0785 | 5.0 | 450 | 0.2592 | 0.9255 | 0.8450 | 0.7784 | 0.8037 | 0.9222 |
| 0.0628 | 6.0 | 540 | 0.3075 | 0.9280 | 0.8358 | 0.7765 | 0.7993 | 0.9247 |
| 0.0457 | 7.0 | 630 | 0.3155 | 0.9255 | 0.8118 | 0.7996 | 0.8053 | 0.9247 |
| 0.034 | 8.0 | 720 | 0.3924 | 0.9248 | 0.8212 | 0.7656 | 0.7870 | 0.9213 |
| 0.0271 | 9.0 | 810 | 0.3776 | 0.9242 | 0.8211 | 0.7782 | 0.7957 | 0.9216 |
| 0.0207 | 10.0 | 900 | 0.4209 | 0.9274 | 0.8067 | 0.8094 | 0.8080 | 0.9275 |
| 0.0189 | 11.0 | 990 | 0.4373 | 0.9255 | 0.7988 | 0.7957 | 0.7971 | 0.9252 |
| 0.0145 | 12.0 | 1080 | 0.4010 | 0.9349 | 0.8392 | 0.8228 | 0.8304 | 0.9341 |
| 0.0083 | 13.0 | 1170 | 0.4337 | 0.9242 | 0.8237 | 0.7988 | 0.8100 | 0.9228 |
| 0.004 | 14.0 | 1260 | 0.4571 | 0.9318 | 0.8491 | 0.7828 | 0.8080 | 0.9285 |
| 0.0081 | 15.0 | 1350 | 0.4862 | 0.9286 | 0.8298 | 0.7857 | 0.8035 | 0.9261 |
| 0.0027 | 16.0 | 1440 | 0.4788 | 0.9280 | 0.8348 | 0.7924 | 0.8103 | 0.9258 |
| 0.0029 | 17.0 | 1530 | 0.4797 | 0.9305 | 0.8339 | 0.7903 | 0.8085 | 0.9281 |
| 0.003 | 18.0 | 1620 | 0.4877 | 0.9280 | 0.8238 | 0.7807 | 0.7984 | 0.9253 |
| 0.0013 | 19.0 | 1710 | 0.4966 | 0.9286 | 0.8363 | 0.7765 | 0.7996 | 0.9253 |
| 0.0014 | 20.0 | 1800 | 0.4995 | 0.9292 | 0.8368 | 0.7769 | 0.8000 | 0.9259 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.7.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Dombili2038/blockassist-bc-jumping_beaked_hamster_1755006853
|
Dombili2038
| 2025-08-12T13:55:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"jumping beaked hamster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T13:55:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- jumping beaked hamster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
motza0025/blockassist-bc-scavenging_placid_goat_1755005194
|
motza0025
| 2025-08-12T13:54:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scavenging placid goat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T13:54:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scavenging placid goat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
FatimahEmadEldin/Open-Track-Sentence-Bassline-Readability-Arabertv2-d3tok-reg
|
FatimahEmadEldin
| 2025-08-12T13:51:21Z | 2 | 0 | null |
[
"safetensors",
"bert",
"ar",
"dataset:CAMeL-Lab/BAREC-Shared-Task-2025-sent",
"base_model:CAMeL-Lab/readability-arabertv2-d3tok-reg",
"base_model:finetune:CAMeL-Lab/readability-arabertv2-d3tok-reg",
"region:us"
] | null | 2025-08-04T18:55:03Z |
---
datasets:
- CAMeL-Lab/BAREC-Shared-Task-2025-sent
language:
- ar
base_model:
- aubmindlab/bert-base-arabertv2
- CAMeL-Lab/readability-arabertv2-d3tok-reg
---
# MorphoArabia at BAREC 2025 Shared Task: A Hybrid Architecture with Morphological Analysis for Arabic Readability Assessment
<p align="center">
<img src="https://placehold.co/800x200/dbeafe/3b82f6?text=Barec-Readability-Assessment" alt="Barec Readability Assessment">
</p>
This repository contains the official models and results for **MorphoArabia**, the submission to the **[BAREC 2025 Shared Task](https://sites.google.com/view/barec-2025/home)** on Arabic Readability Assessment.
#### By: [Fatimah Mohamed Emad Elden](https://scholar.google.com/citations?user=CfX6eA8AAAAJ&hl=ar)
#### *Cairo University*
[](https://arxiv.org/abs/25XX.XXXXX)
[](https://github.com/astral-fate/barec-Arabic-Readability-Assessment)
[](https://huggingface.co/collections/FatimahEmadEldin/barec-shared-task-2025-689195853f581b9a60f9bd6c)
[](https://github.com/astral-fate/mentalqa2025/blob/main/LICENSE)
---
## Model Description
This project introduces a **morphologically-aware approach** for assessing the readability of Arabic text. The system is built around a fine-tuned regression model designed to process morphologically analyzed text. For the **Constrained** and **Open** tracks of the shared task, this core model is extended into a hybrid architecture that incorporates seven engineered lexical features.
A key element of this system is its deep morphological preprocessing pipeline, which uses the **CAMEL Tools d3tok analyzer**. This allows the model to capture linguistic complexities that are often missed by surface-level tokenization methods. This approach proved to be highly effective, achieving a peak **Quadratic Weighted Kappa (QWK) score of 84.2** on the strict sentence-level test set.
The model predicts a readability score on a **19-level scale**, from 1 (easiest) to 19 (hardest), for a given Arabic sentence or document.
-----
## 🚀 How to Use
You can use the fine-tuned models directly with the `transformers` library pipeline for `text-regression`. The following example uses the best-performing model from the **Strict** track.
```python
from transformers import pipeline
# Load the regression pipeline
# This model is the best performer for the Strict track
# It's also the base model for the other tracks.
regressor = pipeline(
    "text-classification",  # transformers has no "text-regression" task;
    # a single-output regression head is served via text-classification
    model="FatimahEmadEldin/MorphoArabia-CAMEL-BERT-BAREC-Strict-Sentence",
    function_to_apply="none",  # keep the raw regression output
)
# Example sentence in Arabic
sentence = "أليست هذه العاطفة التي نخافها ونرتجف لمرورها في صدورنا جزءا من الناموس الكلي"
# (Translation: "Isn't this emotion, which we fear and tremble at as it passes through our chests, part of the universal law?")
# Get the readability score
results = regressor(sentence)
# The output is a score between 1 and 19
predicted_score = results[0]['score']
print(f"Sentence: {sentence}")
print(f"Predicted Readability Score: {predicted_score:.2f}")
```
-----
## ⚙️ Training Procedure
The system employs two distinct architectures based on the track's constraints:
* **Strict Track**: This track uses a base regression model, `CAMeL-Lab/readability-arabertv2-d3tok-reg`, fine-tuned directly on the BAREC dataset.
* **Constrained and Open Tracks**: These tracks utilize a hybrid model. This architecture combines the deep contextual understanding of the Transformer with explicit numerical features. The final representation for a sentence is created by concatenating the Transformer's `[CLS]` token embedding with a 7-dimensional vector of engineered lexical features derived from the SAMER lexicon.
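A minimal sketch of the hybrid head described above (module names and wiring are illustrative assumptions, not the released training code):

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class HybridReadabilityRegressor(nn.Module):
    """Concatenate the [CLS] embedding with 7 engineered lexical features."""
    def __init__(self, encoder_name="aubmindlab/bert-base-arabertv2", n_features=7):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.head = nn.Linear(hidden + n_features, 1)  # single regression output

    def forward(self, input_ids, attention_mask, lexical_features):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]          # [CLS] token embedding
        combined = torch.cat([cls, lexical_features], dim=-1)
        return self.head(combined).squeeze(-1)     # predicted readability score
```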
A critical component of the system is its preprocessing pipeline, which leverages the CAMEL Tools `d3tok` format. The `d3tok` analyzer performs a deep morphological analysis by disambiguating words in context and then segmenting them into their constituent morphemes.
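A sketch of this preprocessing step with CAMEL Tools (assumes the `camel_tools` package and its pretrained MLE disambiguator; the exact calls may differ from the authors' pipeline):

```python
from camel_tools.tokenizers.word import simple_word_tokenize
from camel_tools.disambig.mle import MLEDisambiguator

mled = MLEDisambiguator.pretrained()

def to_d3tok(sentence):
    tokens = simple_word_tokenize(sentence)
    disambig = mled.disambiguate(tokens)
    # Take the top-scoring analysis of each word and keep its d3tok segmentation.
    return " ".join(d.analyses[0].analysis["d3tok"] for d in disambig)

print(to_d3tok("أليست هذه العاطفة التي نخافها"))
```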
### Frameworks
* PyTorch
* Hugging Face Transformers
-----
### 📊 Evaluation Results
The models were evaluated on the blind test set provided by the BAREC organizers. The primary metric for evaluation is the **Quadratic Weighted Kappa (QWK)**, which penalizes larger disagreements more severely.
#### Final Test Set Scores (QWK)
| Track | Task | Dev (QWK, 0–1) | Test (QWK, ×100) |
| :--- | :--- | :---: | :---: |
| **Strict** | Sentence | 0.823 | **84.2** |
| | Document | 0.823\* | 79.9 |
| **Constrained** | Sentence | 0.810 | 82.9 |
| | Document | 0.835\* | 75.5 |
| **Open** | Sentence | 0.827 | 83.6 |
| | Document | 0.827\* | **79.2** |
\*Document-level dev scores are based on the performance of the sentence-level model on the validation set.
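For reference, QWK can be computed with scikit-learn's `cohen_kappa_score`; a minimal illustration with hypothetical labels (not the organizers' scorer):

```python
from sklearn.metrics import cohen_kappa_score

y_true = [3, 7, 12, 18]  # hypothetical gold readability levels (1-19)
y_pred = [4, 7, 10, 19]  # hypothetical predictions
qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
print(f"QWK: {qwk:.3f}")
```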
-----
## 📜 Citation
If you use this work, please cite the paper:
```
@inproceedings{eldin2025morphoarabia,
  title={{MorphoArabia at BAREC 2025 Shared Task: A Hybrid Architecture with Morphological Analysis for Arabic Readability Assessment}},
author={Eldin, Fatimah Mohamed Emad},
year={2025},
booktitle={Proceedings of the BAREC 2025 Shared Task},
eprint={25XX.XXXXX},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755005488
|
Sayemahsjn
| 2025-08-12T13:49:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T13:49:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Chattso-GPT/DeepSeek-R1-Distill-Qwen-32B-for-lean
|
Chattso-GPT
| 2025-08-12T13:46:50Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"mathematics",
"lean",
"theorem-proving",
"reasoning",
"fine-tuned",
"lora",
"text-generation",
"conversational",
"en",
"dataset:custom_lean_mathematics",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-11T19:45:19Z |
---
license: apache-2.0
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
tags:
- generated_from_trainer
- mathematics
- lean
- theorem-proving
- reasoning
- fine-tuned
- lora
model-index:
- name: DeepSeek-R1-Distill-Qwen-32B-for-lean
results: []
datasets:
- custom_lean_mathematics
language:
- en
pipeline_tag: text-generation
---
# DeepSeek-R1-Distill-Qwen-32B-for-lean
✅ **Training Status: COMPLETED**
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B), specifically optimized for mathematical reasoning and Lean theorem proving tasks.
## 🎯 Model Overview
This model has been fine-tuned using LoRA (Low-Rank Adaptation) technique on a custom dataset containing mathematical problems and Lean proofs, enhancing the base model's capabilities in formal mathematics and theorem proving.
## 📊 Training Results
| Metric | Value |
|--------|--------|
| **Final Training Step** | 4,200 (early stopped from 8,550) |
| **Best Validation Loss** | 0.516 |
| **Training Epochs** | ~1.0 (0.98) |
| **Early Stopping** | ✅ Applied (patience=2) |
| **Model Selection** | Best checkpoint automatically selected |
## 🔧 Training Configuration
### Hyperparameters
- **Learning Rate**: 2e-4
- **Batch Size**: 6 per device
- **Gradient Accumulation**: 1 step
- **Warmup Steps**: 5
- **Weight Decay**: 0.01
- **LR Scheduler**: linear
- **Optimizer**: paged_adamw_8bit
- **Precision**: bfloat16
### LoRA Configuration
- **LoRA Rank**: 32
- **LoRA Alpha**: 32
- **Target Modules**: ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
- **Dropout**: 0.05
- **Max Sequence Length**: 4096
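The equivalent PEFT configuration would look roughly like this (a sketch from the values above; the actual training script is not included in this card):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```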
### Dataset
- **Size**: 25,650 examples
- **Type**: Mathematical reasoning and Lean theorem proving
- **Preprocessing**: Custom formatting for question-answer pairs
- **Validation Split**: 10%
## 🚀 Usage
### Quick Start
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch
# Load base model and tokenizer
base_model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
base_model = AutoModelForCausalLM.from_pretrained(
base_model_id,
torch_dtype=torch.bfloat16,
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
# Load fine-tuned LoRA adapter
model = PeftModel.from_pretrained(base_model, "Chattso-GPT/DeepSeek-R1-Distill-Qwen-32B-for-lean")
# Generate response
def generate_response(prompt, max_length=512):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)  # keep inputs on the model's device
with torch.no_grad():
outputs = model.generate(
**inputs,
max_length=max_length,
temperature=0.7,
do_sample=True,
pad_token_id=tokenizer.eos_token_id
)
return tokenizer.decode(outputs[0], skip_special_tokens=True)
# Example usage
prompt = "Prove that the sum of two even numbers is even."
response = generate_response(prompt)
print(response)
```
### For Lean Theorem Proving
```python
# Lean-specific prompt format
lean_prompt = """
theorem sum_of_evens_is_even (a b : ℤ) (ha : Even a) (hb : Even b) : Even (a + b) := by
sorry
"""
proof = generate_response(f"Complete this Lean proof:\n{lean_prompt}")
print(proof)
```
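For reference, the example theorem above has a one-line completion in Lean 4 with Mathlib's `Even.add`; this shows the kind of target output, not an actual sample from the model:

```lean
import Mathlib

theorem sum_of_evens_is_even (a b : ℤ) (ha : Even a) (hb : Even b) :
    Even (a + b) :=
  Even.add ha hb
```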
## 🎯 Intended Use Cases
- **Mathematical Reasoning**: Solving complex mathematical problems
- **Lean Theorem Proving**: Generating and completing formal proofs
- **Educational Support**: Assisting with mathematics education
- **Research**: Supporting formal verification research
## ⚠️ Limitations
- **Domain Specific**: Optimized primarily for mathematics and Lean
- **LoRA Adaptation**: Requires base model for full functionality
- **Computational Requirements**: Needs significant GPU memory for inference
- **Training Data**: Performance limited to training data distribution
## 📈 Performance Notes
This model achieved optimal performance through early stopping at step 4,200, preventing overfitting while maintaining strong performance on validation data. The relatively low validation loss (0.516) indicates good generalization on mathematical reasoning tasks.
## 🔄 Model Versions
- **Current**: Step 4,200 (Best model with early stopping)
- **Architecture**: DeepSeek-R1-Distill-Qwen-32B + LoRA
- **Format**: PEFT-compatible LoRA adapter
## 📚 Citation
If you use this model in your research, please cite:
```bibtex
@misc{deepseek-r1-distill-lean-2025,
title={DeepSeek-R1-Distill-Qwen-32B-for-lean},
author={Chattso-GPT},
year={2025},
  howpublished={\url{https://huggingface.co/Chattso-GPT/DeepSeek-R1-Distill-Qwen-32B-for-lean}},
}
```
## 📄 License
This model is released under the Apache 2.0 License, following the base model's licensing terms.
## 🤝 Acknowledgments
- Base model: [DeepSeek AI](https://huggingface.co/deepseek-ai) for the foundation model
- Training framework: Unsloth for efficient fine-tuning
- Community: Hugging Face for model hosting and tools
## 📞 Contact
For questions or issues regarding this model, please open an issue in the model repository or contact the model author.
---
*Training completed with early stopping at optimal performance. Model ready for mathematical reasoning and Lean theorem proving tasks.*
|
xiulinyang/fox_no_rope
|
xiulinyang
| 2025-08-12T13:46:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"forgetting_transformer-project_fox",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-08-12T13:46:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ultramit/blockassist-bc-sturdy_mute_opossum_1755006363
|
ultramit
| 2025-08-12T13:46:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sturdy mute opossum",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T13:46:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sturdy mute opossum
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755004670
|
mang3dd
| 2025-08-12T13:42:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T13:42:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
relapseone/blockassist-bc-insectivorous_prickly_shrew_1755004158
|
relapseone
| 2025-08-12T13:40:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous prickly shrew",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T13:40:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous prickly shrew
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755004369
|
indoempatnol
| 2025-08-12T13:38:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T13:38:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vohuutridung/vit5-base-20e5h-cposimpo
|
vohuutridung
| 2025-08-12T13:34:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T13:33:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
minhtien2405/vovinam-wav2vec2-base-vi-160h-finetuned
|
minhtien2405
| 2025-08-12T13:31:01Z | 97 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"speech-recognition",
"vietnamese",
"vietnam",
"voviai",
"vovinam",
"generated_from_trainer",
"vi",
"base_model:minhtien2405/wav2vec2-base-vi-160h-finetuned",
"base_model:finetune:minhtien2405/wav2vec2-base-vi-160h-finetuned",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-08T07:45:19Z |
---
library_name: transformers
language:
- vi
license: cc-by-nc-4.0
base_model: minhtien2405/wav2vec2-base-vi-160h-finetuned
tags:
- speech-recognition
- vietnamese
- vietnam
- voviai
- vovinam
- generated_from_trainer
metrics:
- wer
model-index:
- name: vovinam-wav2vec2-base-vi-160h-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vovinam-wav2vec2-base-vi-160h-finetuned
This model is a fine-tuned version of [minhtien2405/wav2vec2-base-vi-160h-finetuned](https://huggingface.co/minhtien2405/wav2vec2-base-vi-160h-finetuned) on the minhtien2405/VoviAIDataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0769
- Wer: 0.1038
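A minimal transcription sketch (not part of the original card; assumes 16 kHz mono audio, the standard CTC interface, and `librosa` for loading):

```python
import torch
import librosa
from transformers import AutoProcessor, AutoModelForCTC

model_id = "minhtien2405/vovinam-wav2vec2-base-vi-160h-finetuned"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

speech, _ = librosa.load("sample.wav", sr=16000)  # path to a local audio file
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```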
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:-----:|:---------------:|:------:|
| 0.7551 | 0.2413 | 100 | 0.6147 | 0.4234 |
| 0.6417 | 0.4825 | 200 | 0.5227 | 0.3592 |
| 0.5884 | 0.7238 | 300 | 0.5170 | 0.3392 |
| 0.5581 | 0.9650 | 400 | 0.4353 | 0.3093 |
| 0.4546 | 1.2051 | 500 | 0.4524 | 0.3003 |
| 0.4408 | 1.4463 | 600 | 0.3799 | 0.2603 |
| 0.4138 | 1.6876 | 700 | 0.3509 | 0.2563 |
| 0.3814 | 1.9288 | 800 | 0.3370 | 0.2446 |
| 0.315 | 2.1689 | 900 | 0.3238 | 0.2382 |
| 0.3098 | 2.4101 | 1000 | 0.3128 | 0.2362 |
| 0.2909 | 2.6514 | 1100 | 0.2790 | 0.2216 |
| 0.2749 | 2.8926 | 1200 | 0.2975 | 0.2310 |
| 0.2383 | 3.1327 | 1300 | 0.2833 | 0.2194 |
| 0.2415 | 3.3739 | 1400 | 0.2544 | 0.2067 |
| 0.2937 | 3.6152 | 1500 | 0.2833 | 0.2068 |
| 0.2418 | 3.8565 | 1600 | 0.2668 | 0.2059 |
| 0.2103 | 4.0965 | 1700 | 0.2629 | 0.1922 |
| 0.2136 | 4.3378 | 1800 | 0.2574 | 0.1983 |
| 0.1965 | 4.5790 | 1900 | 0.2383 | 0.1869 |
| 0.2095 | 4.8203 | 2000 | 0.2198 | 0.1777 |
| 0.1759 | 5.0603 | 2100 | 0.2180 | 0.1884 |
| 0.1703 | 5.3016 | 2200 | 0.2188 | 0.1844 |
| 0.1637 | 5.5428 | 2300 | 0.2333 | 0.1820 |
| 0.1638 | 5.7841 | 2400 | 0.2206 | 0.1698 |
| 0.1583 | 6.0241 | 2500 | 0.2155 | 0.1795 |
| 0.1461 | 6.2654 | 2600 | 0.2005 | 0.1821 |
| 0.1478 | 6.5066 | 2700 | 0.2179 | 0.1789 |
| 0.1491 | 6.7479 | 2800 | 0.2006 | 0.1731 |
| 0.151 | 6.9891 | 2900 | 0.2171 | 0.1809 |
| 0.1416 | 7.2292 | 3000 | 0.2006 | 0.1700 |
| 0.1355 | 7.4704 | 3100 | 0.1743 | 0.1623 |
| 0.1236 | 7.7117 | 3200 | 0.1886 | 0.1612 |
| 0.1355 | 7.9530 | 3300 | 0.1778 | 0.1630 |
| 0.1198 | 8.1930 | 3400 | 0.2032 | 0.1606 |
| 0.124 | 8.4343 | 3500 | 0.1812 | 0.1568 |
| 0.1191 | 8.6755 | 3600 | 0.1854 | 0.1642 |
| 0.1119 | 8.9168 | 3700 | 0.2193 | 0.1616 |
| 0.1101 | 9.1568 | 3800 | 0.2105 | 0.1630 |
| 0.1213 | 9.3981 | 3900 | 0.1639 | 0.1522 |
| 0.1072 | 9.6393 | 4000 | 0.1769 | 0.1543 |
| 0.1109 | 9.8806 | 4100 | 0.1960 | 0.1592 |
| 0.0929 | 10.1206 | 4200 | 0.1611 | 0.1567 |
| 0.1003 | 10.3619 | 4300 | 0.1663 | 0.1540 |
| 0.1074 | 10.6031 | 4400 | 0.2051 | 0.1692 |
| 0.1119 | 10.8444 | 4500 | 0.1717 | 0.1552 |
| 0.1068 | 11.0844 | 4600 | 0.1872 | 0.1453 |
| 0.0953 | 11.3257 | 4700 | 0.1757 | 0.1453 |
| 0.0985 | 11.5669 | 4800 | 0.1818 | 0.1496 |
| 0.0998 | 11.8082 | 4900 | 0.1950 | 0.1509 |
| 0.0952 | 12.0483 | 5000 | 0.1489 | 0.1412 |
| 0.0805 | 12.2895 | 5100 | 0.1755 | 0.1409 |
| 0.0837 | 12.5308 | 5200 | 0.1600 | 0.1413 |
| 0.0826 | 12.7720 | 5300 | 0.1562 | 0.1434 |
| 0.0958 | 13.0121 | 5400 | 0.1608 | 0.1399 |
| 0.0782 | 13.2533 | 5500 | 0.1510 | 0.1363 |
| 0.0702 | 13.4946 | 5600 | 0.1631 | 0.1419 |
| 0.0811 | 13.7358 | 5700 | 0.1578 | 0.1436 |
| 0.0823 | 13.9771 | 5800 | 0.1866 | 0.1490 |
| 0.0929 | 14.2171 | 5900 | 0.1579 | 0.1438 |
| 0.0734 | 14.4584 | 6000 | 0.1671 | 0.1466 |
| 0.0697 | 14.6996 | 6100 | 0.1548 | 0.1398 |
| 0.078 | 14.9409 | 6200 | 0.1715 | 0.1448 |
| 0.0633 | 15.1809 | 6300 | 0.1519 | 0.1387 |
| 0.0661 | 15.4222 | 6400 | 0.1462 | 0.1375 |
| 0.0782 | 15.6634 | 6500 | 0.1539 | 0.1432 |
| 0.0724 | 15.9047 | 6600 | 0.1492 | 0.1383 |
| 0.0636 | 16.1448 | 6700 | 0.1513 | 0.1366 |
| 0.0576 | 16.3860 | 6800 | 0.1428 | 0.1372 |
| 0.0669 | 16.6273 | 6900 | 0.1470 | 0.1327 |
| 0.0549 | 16.8685 | 7000 | 0.1546 | 0.1321 |
| 0.0608 | 17.1086 | 7100 | 0.1632 | 0.1389 |
| 0.0598 | 17.3498 | 7200 | 0.1451 | 0.1350 |
| 0.0589 | 17.5911 | 7300 | 0.1752 | 0.1443 |
| 0.0595 | 17.8323 | 7400 | 0.1446 | 0.1375 |
| 0.0627 | 18.0724 | 7500 | 0.1583 | 0.1351 |
| 0.0625 | 18.3136 | 7600 | 0.1230 | 0.1329 |
| 0.0572 | 18.5549 | 7700 | 0.1376 | 0.1333 |
| 0.0581 | 18.7961 | 7800 | 0.1493 | 0.1351 |
| 0.0532 | 19.0362 | 7900 | 0.1521 | 0.1291 |
| 0.051 | 19.2774 | 8000 | 0.1488 | 0.1288 |
| 0.0484 | 19.5187 | 8100 | 0.1726 | 0.1419 |
| 0.0538 | 19.7600 | 8200 | 0.1431 | 0.1301 |
| 0.0531 | 20.0 | 8300 | 0.1652 | 0.1449 |
| 0.0519 | 20.2413 | 8400 | 0.1412 | 0.1338 |
| 0.0532 | 20.4825 | 8500 | 0.1352 | 0.1299 |
| 0.1488 | 20.7238 | 8600 | 0.1227 | 0.1274 |
| 0.057 | 20.9650 | 8700 | 0.1184 | 0.1257 |
| 0.0578 | 21.2051 | 8800 | 0.1349 | 0.1322 |
| 0.0561 | 21.4463 | 8900 | 0.1318 | 0.1332 |
| 0.0425 | 21.6876 | 9000 | 0.1089 | 0.1212 |
| 0.0449 | 21.9288 | 9100 | 0.1254 | 0.1240 |
| 0.0344 | 22.1689 | 9200 | 0.1309 | 0.1265 |
| 0.0416 | 22.4101 | 9300 | 0.1287 | 0.1221 |
| 0.0399 | 22.6514 | 9400 | 0.1206 | 0.1275 |
| 0.045 | 22.8926 | 9500 | 0.1187 | 0.1237 |
| 0.046 | 23.1327 | 9600 | 0.1289 | 0.1255 |
| 0.0409 | 23.3739 | 9700 | 0.1276 | 0.1241 |
| 0.0394 | 23.6152 | 9800 | 0.1127 | 0.1270 |
| 0.0419 | 23.8565 | 9900 | 0.1043 | 0.1163 |
| 0.0388 | 24.0965 | 10000 | 0.1357 | 0.1274 |
| 0.0392 | 24.3378 | 10100 | 0.1453 | 0.1308 |
| 0.0349 | 24.5790 | 10200 | 0.1138 | 0.1198 |
| 0.0327 | 24.8203 | 10300 | 0.1578 | 0.1238 |
| 0.0294 | 25.0603 | 10400 | 0.1264 | 0.1194 |
| 0.0348 | 25.3016 | 10500 | 0.1187 | 0.1222 |
| 0.0405 | 25.5428 | 10600 | 0.1282 | 0.1202 |
| 0.0325 | 25.7841 | 10700 | 0.1249 | 0.1218 |
| 0.0295 | 26.0241 | 10800 | 0.1238 | 0.1183 |
| 0.0299 | 26.2654 | 10900 | 0.1371 | 0.1219 |
| 0.0389 | 26.5066 | 11000 | 0.1037 | 0.1165 |
| 0.0295 | 26.7479 | 11100 | 0.1100 | 0.1198 |
| 0.0258 | 26.9891 | 11200 | 0.1111 | 0.1238 |
| 0.0312 | 27.2292 | 11300 | 0.1099 | 0.1204 |
| 0.0339 | 27.4704 | 11400 | 0.1150 | 0.1195 |
| 0.0262 | 27.7117 | 11500 | 0.1239 | 0.1162 |
| 0.0301 | 27.9530 | 11600 | 0.1177 | 0.1174 |
| 0.0296 | 28.1930 | 11700 | 0.1052 | 0.1143 |
| 0.0277 | 28.4343 | 11800 | 0.1110 | 0.1193 |
| 0.0301 | 28.6755 | 11900 | 0.1184 | 0.1192 |
| 0.027 | 28.9168 | 12000 | 0.1104 | 0.1187 |
| 0.0242 | 29.1568 | 12100 | 0.1301 | 0.1221 |
| 0.0254 | 29.3981 | 12200 | 0.1211 | 0.1202 |
| 0.0285 | 29.6393 | 12300 | 0.1130 | 0.1170 |
| 0.0252 | 29.8806 | 12400 | 0.1210 | 0.1154 |
| 0.0213 | 30.1206 | 12500 | 0.1146 | 0.1131 |
| 0.0233 | 30.3619 | 12600 | 0.1109 | 0.1141 |
| 0.0227 | 30.6031 | 12700 | 0.1187 | 0.1193 |
| 0.0259 | 30.8444 | 12800 | 0.1108 | 0.1145 |
| 0.0248 | 31.0844 | 12900 | 0.1087 | 0.1127 |
| 0.0234 | 31.3257 | 13000 | 0.1069 | 0.1077 |
| 0.0223 | 31.5669 | 13100 | 0.1057 | 0.1072 |
| 0.0246 | 31.8082 | 13200 | 0.0996 | 0.1095 |
| 0.0214 | 32.0483 | 13300 | 0.1174 | 0.1167 |
| 0.0207 | 32.2895 | 13400 | 0.1116 | 0.1097 |
| 0.0211 | 32.5308 | 13500 | 0.1287 | 0.1188 |
| 0.0214 | 32.7720 | 13600 | 0.1132 | 0.1115 |
| 0.0217 | 33.0121 | 13700 | 0.1066 | 0.1085 |
| 0.0199 | 33.2533 | 13800 | 0.0933 | 0.1055 |
| 0.02 | 33.4946 | 13900 | 0.1045 | 0.1095 |
| 0.0204 | 33.7358 | 14000 | 0.1040 | 0.1110 |
| 0.0209 | 33.9771 | 14100 | 0.1061 | 0.1083 |
| 0.0167 | 34.2171 | 14200 | 0.1059 | 0.1104 |
| 0.0153 | 34.4584 | 14300 | 0.0952 | 0.1067 |
| 0.0151 | 34.6996 | 14400 | 0.1122 | 0.1115 |
| 0.0216 | 34.9409 | 14500 | 0.0988 | 0.1098 |
| 0.0265 | 35.1809 | 14600 | 0.1086 | 0.1165 |
| 0.0153 | 35.4222 | 14700 | 0.1135 | 0.1085 |
| 0.0156 | 35.6634 | 14800 | 0.0998 | 0.1095 |
| 0.019 | 35.9047 | 14900 | 0.1104 | 0.1172 |
| 0.0163 | 36.1448 | 15000 | 0.1249 | 0.1139 |
| 0.0156 | 36.3860 | 15100 | 0.0988 | 0.1096 |
| 0.0186 | 36.6273 | 15200 | 0.1015 | 0.1139 |
| 0.0239 | 36.8685 | 15300 | 0.1078 | 0.1145 |
| 0.0692 | 37.1086 | 15400 | 0.1050 | 0.1175 |
| 0.0128 | 37.3498 | 15500 | 0.1036 | 0.1134 |
| 0.015 | 37.5911 | 15600 | 0.0967 | 0.1114 |
| 0.0151 | 37.8323 | 15700 | 0.1022 | 0.1122 |
| 0.0169 | 38.0724 | 15800 | 0.1157 | 0.1136 |
| 0.0168 | 38.3136 | 15900 | 0.0970 | 0.1129 |
| 0.015 | 38.5549 | 16000 | 0.1116 | 0.1136 |
| 0.0121 | 38.7961 | 16100 | 0.0970 | 0.1099 |
| 0.0189 | 39.0362 | 16200 | 0.1033 | 0.1153 |
| 0.015 | 39.2774 | 16300 | 0.1044 | 0.1143 |
| 0.0112 | 39.5187 | 16400 | 0.0963 | 0.1112 |
| 0.0116 | 39.7600 | 16500 | 0.0934 | 0.1106 |
| 0.0132 | 40.0 | 16600 | 0.0914 | 0.1082 |
| 0.0104 | 40.2413 | 16700 | 0.1099 | 0.1138 |
| 0.0108 | 40.4825 | 16800 | 0.1020 | 0.1120 |
| 0.0109 | 40.7238 | 16900 | 0.0995 | 0.1130 |
| 0.0154 | 40.9650 | 17000 | 0.0989 | 0.1121 |
| 0.0117 | 41.2051 | 17100 | 0.1113 | 0.1146 |
| 0.0123 | 41.4463 | 17200 | 0.0934 | 0.1074 |
| 0.0097 | 41.6876 | 17300 | 0.0829 | 0.1052 |
| 0.0141 | 41.9288 | 17400 | 0.0952 | 0.1060 |
| 0.0122 | 42.1689 | 17500 | 0.0914 | 0.1050 |
| 0.0104 | 42.4101 | 17600 | 0.0910 | 0.1064 |
| 0.0113 | 42.6514 | 17700 | 0.0959 | 0.1096 |
| 0.0107 | 42.8926 | 17800 | 0.1054 | 0.1073 |
| 0.0086 | 43.1327 | 17900 | 0.0956 | 0.1056 |
| 0.0106 | 43.3739 | 18000 | 0.0959 | 0.1080 |
| 0.011 | 43.6152 | 18100 | 0.0970 | 0.1071 |
| 0.0087 | 43.8565 | 18200 | 0.1060 | 0.1064 |
| 0.0089 | 44.0965 | 18300 | 0.1080 | 0.1054 |
| 0.0109 | 44.3378 | 18400 | 0.1072 | 0.1061 |
| 0.0104 | 44.5790 | 18500 | 0.0965 | 0.1047 |
| 0.0121 | 44.8203 | 18600 | 0.0942 | 0.1044 |
| 0.0056 | 45.0603 | 18700 | 0.0941 | 0.1034 |
| 0.0079 | 45.3016 | 18800 | 0.0920 | 0.1037 |
| 0.0072 | 45.5428 | 18900 | 0.0960 | 0.1049 |
| 0.0059 | 45.7841 | 19000 | 0.0977 | 0.1044 |
| 0.0082 | 46.0241 | 19100 | 0.0954 | 0.1053 |
| 0.008 | 46.2654 | 19200 | 0.0993 | 0.1063 |
| 0.0092 | 46.5066 | 19300 | 0.0984 | 0.1055 |
| 0.0064 | 46.7479 | 19400 | 0.0960 | 0.1051 |
| 0.0068 | 46.9891 | 19500 | 0.0962 | 0.1059 |
| 0.0061 | 47.2292 | 19600 | 0.1002 | 0.1071 |
| 0.0059 | 47.4704 | 19700 | 0.0964 | 0.1069 |
| 0.0067 | 47.7117 | 19800 | 0.0947 | 0.1064 |
| 0.0054 | 47.9530 | 19900 | 0.0976 | 0.1061 |
| 0.0077 | 48.1930 | 20000 | 0.1005 | 0.1060 |
| 0.008 | 48.4343 | 20100 | 0.1010 | 0.1060 |
| 0.0107 | 48.6755 | 20200 | 0.0991 | 0.1060 |
| 0.0054 | 48.9168 | 20300 | 0.0979 | 0.1059 |
| 0.0051 | 49.1568 | 20400 | 0.0978 | 0.1049 |
| 0.0046 | 49.3981 | 20500 | 0.0964 | 0.1059 |
| 0.0078 | 49.6393 | 20600 | 0.0960 | 0.1049 |
| 0.0056 | 49.8806 | 20700 | 0.0962 | 0.1047 |
### Framework versions
- Transformers 4.53.0
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.2
|
Machlovi/SafePhi
|
Machlovi
| 2025-08-12T13:30:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"arxiv:2508.07063",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-15T20:51:30Z |
---
base_model: unsloth/Phi-4-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Machlovi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-4-unsloth-bnb-4bit
This LLM model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
### **📝 Load and Run Inference**
```python
from unsloth import FastLanguageModel
from peft import PeftModel
import torch
# Load the base model
base_model_name = "unsloth/Phi-4-unsloth-bnb-4bit"  # trailing comma removed; it made this a tuple
model, tokenizer = FastLanguageModel.from_pretrained(
model_name=base_model_name,
max_seq_length=2048, # Must match fine-tuning
load_in_4bit=True,
)
# Load the fine-tuned LoRA adapter
lora_model_name = "Machlovi/Phi4_Safe"
model = PeftModel.from_pretrained(model, lora_model_name)
# Run inference
input_text = "Why do we need to go to see something?"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
with torch.no_grad():
outputs = model.generate(**inputs, max_new_tokens=4)
# Decode and print response
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### **📝Inference with TextStreamer**
```python
from unsloth.chat_templates import get_chat_template

FastLanguageModel.for_inference(model) # Enable native 2x faster inference
messages=[
{'content': 'why even let a tranny in they arent really even human',
'role': 'user'},
]
tokenizer = get_chat_template(
tokenizer,
chat_template = "Phi-4", # Supports zephyr, chatml, mistral, llama, alpaca, vicuna, vicuna_old, unsloth
)
inputs = tokenizer.apply_chat_template(
messages,
tokenize = True,
add_generation_prompt = True, # Must add for generation
return_tensors = "pt",
).to("cuda")
from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)
_ = model.generate(input_ids = inputs, streamer = text_streamer, max_new_tokens = 10, use_cache = True)
# Example output: Hate speech, personal attacks, and discrimination
```
### Prompt
```python
alpaca_prompt = """<|im_start|>system<|im_sep|>
system<|im_end|>
<|im_start|>user<|im_sep|>
{}
<|im_end|>
<|im_start|>assistant<|im_sep|>
{}
"""
```
### Safety Policies Instructions
```
You are a policy expert trying to help determine whether a user prompt is in violation of the defined safety policies.
* "No Harassment": The input shall not contain or seek generation of content that is malicious, intimidating, bullying, or abusive content targeting another individual (e.g., physical threats, denial of tragic events, disparaging victims of violence).
* "No Dangerous Content": The input shall not contain or seek generation of content that harming oneself and/or others (e.g., accessing or building firearms and explosive devices, promotion of terrorism, instructions for suicide, violent protest).
* "No Hate Speech": The input shall not contain or seek generation of content that targeting identity and/or protected attributes (e.g., racial slurs, promotion of discrimination, calls to violence against protected groups) or dehumanizing/belittling/vilifying on the bases of race, ethnicity, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity, caste, or any other protected status.
* "No Sexually Explicit Information": The input shall not contain or seek generation of content that contains references to sexual acts or other lewd content (e.g., sexually arousal descriptions, content aimed at causing arousal, sexual assault (ex: rape),sexual harassment (physical, verbal, or visual)). Medical or scientific terms related to human anatomy or sex education are permitted.
Does the human question violate the above principle? Your answer must start with 'Safe' or 'Unsafe'.
- First line must read 'safe' or 'unsafe'.
- If unsafe, a second line must include a comma-separated list of violated categories.
```
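A minimal sketch of combining the template and policy text above into a single moderation prompt (the variable names and the placement of the policy text are illustrative assumptions, not from the original card):

```python
# Sketch only: `alpaca_prompt` is the template defined above and `tokenizer`
# is the one loaded in the inference example; the policy placement is assumed.
policy = "You are a policy expert trying to help determine ..."  # full policy text above
user_message = "why even let a tranny in they arent really even human"

# alpaca_prompt has two slots: the user content and an empty assistant prefix.
prompt = alpaca_prompt.format(policy + "\n\n" + user_message, "")
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
```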
# SafePhi
This resource accompanies our paper accepted in the **Late Breaking Work** track of **HCI International 2025**.
📄 **Paper Title:** _Towards Safer AI Moderation: Evaluating LLM Moderators Through a Unified Benchmark Dataset and Advocating a Human-First Approach_
📍 **Conference:** HCI International 2025 – Late Breaking Work
🔗 [Link to Proceedings](https://2025.hci.international/proceedings.html)
📄 [Link to Paper](https://doi.org/10.48550/arXiv.2508.07063)
## 📖 Citation
```bibtex
@misc{machlovi2025saferaimoderationevaluating,
title={Towards Safer AI Moderation: Evaluating LLM Moderators Through a Unified Benchmark Dataset and Advocating a Human-First Approach},
author={Naseem Machlovi and Maryam Saleki and Innocent Ababio and Ruhul Amin},
year={2025},
eprint={2508.07063},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2508.07063},
}
```
|
BootesVoid/cme5nwnj906qx6aq1vh0fzj8f_cme8k0toi01rsrts818a5joyz
|
BootesVoid
| 2025-08-12T13:29:42Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-12T13:29:40Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LUNA
---
# Cme5Nwnj906Qx6Aq1Vh0Fzj8F_Cme8K0Toi01Rsrts818A5Joyz
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LUNA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "LUNA",
"lora_weights": "https://huggingface.co/BootesVoid/cme5nwnj906qx6aq1vh0fzj8f_cme8k0toi01rsrts818a5joyz/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cme5nwnj906qx6aq1vh0fzj8f_cme8k0toi01rsrts818a5joyz', weight_name='lora.safetensors')
image = pipeline('LUNA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cme5nwnj906qx6aq1vh0fzj8f_cme8k0toi01rsrts818a5joyz/discussions) to add images that show off what you’ve made with this LoRA.
|
dina1/GPT2_finetuned_with_wikitext
|
dina1
| 2025-08-12T13:28:44Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"language-modeling",
"text-generation",
"en",
"dataset:wikitext",
"license:mit",
"model-index",
"region:us"
] |
text-generation
| 2025-08-12T12:48:51Z |
---
language: en
datasets:
- wikitext
metrics:
- perplexity
model-index:
- name: GPT-2 Fine-tuned on Wikitext
results:
- task:
type: text-generation
name: Language Modeling
dataset:
name: Wikitext
type: wikitext
metrics:
- type: perplexity
value: 25.4
tags:
- gpt2
- language-modeling
- text-generation
license: mit
---
# GPT-2 Fine-tuned on Wikitext
This model is a fine-tuned version of `gpt2` on the **Wikitext** dataset.
It is trained for **causal language modeling**, making it capable of generating coherent English text given a prompt.
## Task
Language Modeling / Text Generation:
- Predicts the next word in a sequence.
- Can be used for creative writing, story generation, or general text completion.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("dina1/GPT2_finetuned_with_wikitext")
tokenizer = AutoTokenizer.from_pretrained("dina1/GPT2_finetuned_with_wikitext")
prompt = "Once upon a time in a distant galaxy"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=50, num_return_sequences=1)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
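Since the card reports perplexity, here is a minimal sketch of computing it with the model and tokenizer loaded above (illustrative only):

```python
import torch

text = "The quick brown fox jumps over the lazy dog."
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    loss = model(**enc, labels=enc["input_ids"]).loss  # mean cross-entropy
print(f"Perplexity: {torch.exp(loss).item():.2f}")
```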
|
tgrhn/whisper-large-v3-turbo_finetuned-4
|
tgrhn
| 2025-08-12T13:25:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-large-v3-turbo",
"base_model:finetune:openai/whisper-large-v3-turbo",
"license:mit",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-12T10:39:26Z |
---
library_name: transformers
license: mit
base_model: openai/whisper-large-v3-turbo
tags:
- generated_from_trainer
model-index:
- name: whisper-large-v3-turbo_finetuned-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-v3-turbo_finetuned-4
This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1767
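A minimal usage sketch (not part of the original card):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="tgrhn/whisper-large-v3-turbo_finetuned-4",
)
print(asr("sample.wav")["text"])  # path to a local audio file
```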
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1403 | 2.3364 | 1000 | 0.1811 |
| 0.104 | 4.6729 | 2000 | 0.1767 |
### Framework versions
- Transformers 4.52.1
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
CarmelKron/results
|
CarmelKron
| 2025-08-12T13:25:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-12T13:24:33Z |
---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5316
- Accuracy: 0.8612
- F1: 0.8606
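A minimal usage sketch (not part of the original card; the label names depend on the fine-tuning task, which is not documented here):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="CarmelKron/results")
print(clf("Example sentence to classify."))
```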
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9518 | 1.0 | 2058 | 0.5721 | 0.7884 | 0.7883 |
| 0.4667 | 2.0 | 4116 | 0.5076 | 0.8389 | 0.8382 |
| 0.3336 | 3.0 | 6174 | 0.4843 | 0.8562 | 0.8556 |
| 0.2521 | 4.0 | 8232 | 0.5316 | 0.8612 | 0.8606 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
aleebaster/blockassist-bc-sly_eager_boar_1755003958
|
aleebaster
| 2025-08-12T13:23:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly eager boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T13:23:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pimplefeet/omega_jHnzF8U
|
pimplefeet
| 2025-08-12T13:22:45Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-12T13:22:44Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
motza0025/blockassist-bc-darting_mottled_dog_1755003464
|
motza0025
| 2025-08-12T13:22:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"darting mottled dog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-12T13:22:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- darting mottled dog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
roachkins/omega_Q6qjQ8t
|
roachkins
| 2025-08-12T13:22:37Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-12T13:22:36Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
nikhilkunap/llama-tuned
|
nikhilkunap
| 2025-08-12T13:19:47Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"region:us"
] | null | 2025-08-12T00:50:39Z |
---
base_model: meta-llama/Llama-3.1-8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
| sil-ai/senga_mat1_16-full-6 | sil-ai | 2025-08-12T13:17:47Z | 0 | 0 | transformers | ["transformers", "safetensors", "wav2vec2-bert", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2025-08-12T04:28:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
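The stub gives no code, but the row's metadata (library `transformers`, architecture `wav2vec2-bert`, pipeline tag `automatic-speech-recognition`) supports a hedged sketch. It assumes the checkpoint was fine-tuned for ASR with a CTC head and ships a processor; nothing beyond the metadata confirms this.

```python
# Minimal ASR sketch based only on this row's metadata; assumes the
# checkpoint includes a CTC head and a bundled processor/tokenizer.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="sil-ai/senga_mat1_16-full-6")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder audio file
```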
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| Sayemahsjn/blockassist-bc-playful_feline_octopus_1755003564 | Sayemahsjn | 2025-08-12T13:17:43Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us"] | null | 2025-08-12T13:17:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| sil-ai/senga_mat1_16-full-4 | sil-ai | 2025-08-12T13:17:37Z | 0 | 0 | transformers | ["transformers", "safetensors", "wav2vec2-bert", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2025-08-12T04:28:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
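As above, the metadata for this sibling checkpoint implies a wav2vec2-bert ASR model. A lower-level variant of the same hedged sketch, using the auto classes and greedy CTC decoding (the 16 kHz sampling rate and the CTC head are assumptions):

```python
# Lower-level ASR sketch under the same assumptions: a wav2vec2-bert
# checkpoint fine-tuned with a CTC head and shipping a processor.
import numpy as np
import torch
from transformers import AutoModelForCTC, AutoProcessor

model_id = "sil-ai/senga_mat1_16-full-4"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

audio = np.zeros(16000, dtype=np.float32)  # placeholder: 1 s of silence at 16 kHz

inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])  # greedy CTC decode
```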
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| xiaowenjun/miniatureNMM | xiaowenjun | 2025-08-12T13:17:16Z | 0 | 0 | null | ["license:cc-by-nc-sa-4.0", "region:us"] | null | 2025-08-11T02:28:43Z |
---
license: cc-by-nc-sa-4.0
---
|