modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
liukevin666/blockassist-bc-yawning_striped_cassowary_1755871625 | liukevin666 | 2025-08-22T14:09:02Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us"] | null | 2025-08-22T14:08:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RaghavendraSqwish/qwen_sft_rank128_perplexity-llama_999samples | RaghavendraSqwish | 2025-08-22T14:08:31Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen3-0.6B", "base_model:finetune:unsloth/Qwen3-0.6B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-08-22T14:08:10Z |
---
base_model: unsloth/Qwen3-0.6B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** RaghavendraSqwish
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-0.6B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
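The card ships no usage snippet, so here is a minimal hedged sketch for loading the checkpoint with the 🤗 `pipeline` API (the repo id is taken from this listing; the prompt and generation length are illustrative):
```python
from transformers import pipeline

# Load the fine-tuned Qwen3 checkpoint from the Hub as a text-generation pipeline.
generator = pipeline(
    "text-generation",
    model="RaghavendraSqwish/qwen_sft_rank128_perplexity-llama_999samples",
)

# Generate a short continuation for a sample prompt.
print(generator("Hello, how are you?", max_new_tokens=64)[0]["generated_text"])
```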
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755870025 | lisaozill03 | 2025-08-22T14:05:57Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rugged prickly alpaca", "arxiv:2504.07091", "region:us"] | null | 2025-08-22T14:05:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Uppal-Farm-Girl-Viral-Video-Punjabi/Tractor.Girl.Viral.Video.Shakes.link.on.Social.Media | Uppal-Farm-Girl-Viral-Video-Punjabi | 2025-08-22T14:04:48Z | 0 | 0 | null | ["region:us"] | null | 2025-08-22T14:04:28Z |
|
SonDePoisson/so101_test | SonDePoisson | 2025-08-22T14:04:45Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-08-22T14:04:45Z |
---
license: apache-2.0
---
|
lqpl/blockassist-bc-hairy_insectivorous_antelope_1755871235 | lqpl | 2025-08-22T14:04:11Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hairy insectivorous antelope", "arxiv:2504.07091", "region:us"] | null | 2025-08-22T14:01:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
milliarderdol/blockassist-bc-roaring_rough_scorpion_1755869429 | milliarderdol | 2025-08-22T14:03:58Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "roaring rough scorpion", "arxiv:2504.07091", "region:us"] | null | 2025-08-22T14:00:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring rough scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755869810 | ihsanridzi | 2025-08-22T14:02:58Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry flexible owl", "arxiv:2504.07091", "region:us"] | null | 2025-08-22T14:02:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VoilaRaj/81_d_xoanBw | VoilaRaj | 2025-08-22T14:02:53Z | 0 | 0 | null | ["safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us"] | any-to-any | 2025-08-22T13:58:52Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755869732 | sampingkaca72 | 2025-08-22T14:01:50Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "armored stealthy elephant", "arxiv:2504.07091", "region:us"] | null | 2025-08-22T14:01:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/annie-lite-v0.2.9-qwen3-8b-i1-GGUF | mradermacher | 2025-08-22T14:00:06Z | 0 | 0 | transformers | ["transformers", "gguf", "text-generation-inference", "unsloth", "qwen3", "en", "base_model:maidacundo/annie-lite-v0.2.9-qwen3-8b", "base_model:quantized:maidacundo/annie-lite-v0.2.9-qwen3-8b", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational"] | null | 2025-08-22T08:01:00Z |
---
base_model: maidacundo/annie-lite-v0.2.9-qwen3-8b
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/maidacundo/annie-lite-v0.2.9-qwen3-8b
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#annie-lite-v0.2.9-qwen3-8b-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/annie-lite-v0.2.9-qwen3-8b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
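For example, a single quant file can be fetched and run locally; this is a minimal sketch assuming the `huggingface_hub` and `llama-cpp-python` packages are installed (the quant filename is taken from the table below; context size and prompt are illustrative):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant file from this repo (Q4_K_M, the "fast, recommended" pick below).
path = hf_hub_download(
    repo_id="mradermacher/annie-lite-v0.2.9-qwen3-8b-i1-GGUF",
    filename="annie-lite-v0.2.9-qwen3-8b.i1-Q4_K_M.gguf",
)

# Load it with the llama.cpp bindings and run a short completion.
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Hello, ", max_tokens=32)["choices"][0]["text"])
```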
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.9-qwen3-8b-i1-GGUF/resolve/main/annie-lite-v0.2.9-qwen3-8b.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.9-qwen3-8b-i1-GGUF/resolve/main/annie-lite-v0.2.9-qwen3-8b.i1-IQ1_S.gguf) | i1-IQ1_S | 2.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.9-qwen3-8b-i1-GGUF/resolve/main/annie-lite-v0.2.9-qwen3-8b.i1-IQ1_M.gguf) | i1-IQ1_M | 2.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.9-qwen3-8b-i1-GGUF/resolve/main/annie-lite-v0.2.9-qwen3-8b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.9-qwen3-8b-i1-GGUF/resolve/main/annie-lite-v0.2.9-qwen3-8b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.9-qwen3-8b-i1-GGUF/resolve/main/annie-lite-v0.2.9-qwen3-8b.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.9-qwen3-8b-i1-GGUF/resolve/main/annie-lite-v0.2.9-qwen3-8b.i1-IQ2_M.gguf) | i1-IQ2_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.9-qwen3-8b-i1-GGUF/resolve/main/annie-lite-v0.2.9-qwen3-8b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.9-qwen3-8b-i1-GGUF/resolve/main/annie-lite-v0.2.9-qwen3-8b.i1-Q2_K.gguf) | i1-Q2_K | 3.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.9-qwen3-8b-i1-GGUF/resolve/main/annie-lite-v0.2.9-qwen3-8b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.9-qwen3-8b-i1-GGUF/resolve/main/annie-lite-v0.2.9-qwen3-8b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.9-qwen3-8b-i1-GGUF/resolve/main/annie-lite-v0.2.9-qwen3-8b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.9-qwen3-8b-i1-GGUF/resolve/main/annie-lite-v0.2.9-qwen3-8b.i1-IQ3_S.gguf) | i1-IQ3_S | 3.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.9-qwen3-8b-i1-GGUF/resolve/main/annie-lite-v0.2.9-qwen3-8b.i1-IQ3_M.gguf) | i1-IQ3_M | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.9-qwen3-8b-i1-GGUF/resolve/main/annie-lite-v0.2.9-qwen3-8b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.9-qwen3-8b-i1-GGUF/resolve/main/annie-lite-v0.2.9-qwen3-8b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.9-qwen3-8b-i1-GGUF/resolve/main/annie-lite-v0.2.9-qwen3-8b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.9-qwen3-8b-i1-GGUF/resolve/main/annie-lite-v0.2.9-qwen3-8b.i1-Q4_0.gguf) | i1-Q4_0 | 4.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.9-qwen3-8b-i1-GGUF/resolve/main/annie-lite-v0.2.9-qwen3-8b.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.9-qwen3-8b-i1-GGUF/resolve/main/annie-lite-v0.2.9-qwen3-8b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.9-qwen3-8b-i1-GGUF/resolve/main/annie-lite-v0.2.9-qwen3-8b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.9-qwen3-8b-i1-GGUF/resolve/main/annie-lite-v0.2.9-qwen3-8b.i1-Q4_1.gguf) | i1-Q4_1 | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.9-qwen3-8b-i1-GGUF/resolve/main/annie-lite-v0.2.9-qwen3-8b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.9-qwen3-8b-i1-GGUF/resolve/main/annie-lite-v0.2.9-qwen3-8b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/annie-lite-v0.2.9-qwen3-8b-i1-GGUF/resolve/main/annie-lite-v0.2.9-qwen3-8b.i1-Q6_K.gguf) | i1-Q6_K | 6.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
phospho-app/SvenBorodun-gr00t-so100-tictactoe-w6s0f | phospho-app | 2025-08-22T13:59:52Z | 0 | 0 | phosphobot | ["phosphobot", "gr00t", "robotics", "dataset:phospho-ai/so100-tictactoe", "region:us"] | robotics | 2025-08-22T12:55:07Z |
---
datasets: phospho-ai/so100-tictactoe
library_name: phosphobot
pipeline_tag: robotics
model_name: gr00t
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Traceback (most recent call last):
  File "/opt/conda/lib/python3.11/asyncio/tasks.py", line 500, in wait_for
    return fut.result()
           ^^^^^^^^^^^^
  File "/root/phosphobot/am/gr00t.py", line 1146, in read_output
    async for line in process.stdout:
  File "/opt/conda/lib/python3.11/asyncio/streams.py", line 765, in __anext__
    val = await self.readline()
          ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/asyncio/streams.py", line 566, in readline
    line = await self.readuntil(sep)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/asyncio/streams.py", line 658, in readuntil
    await self._wait_for_data('readuntil')
  File "/opt/conda/lib/python3.11/asyncio/streams.py", line 543, in _wait_for_data
    await self._waiter
asyncio.exceptions.CancelledError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/root/phosphobot/am/gr00t.py", line 1157, in run_gr00t_training
    await asyncio.wait_for(read_output(), timeout=timeout_seconds)
  File "/opt/conda/lib/python3.11/asyncio/tasks.py", line 502, in wait_for
    raise exceptions.TimeoutError() from exc
TimeoutError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/src/helper.py", line 166, in predict
    trainer.train(timeout_seconds=timeout_seconds)
  File "/root/phosphobot/am/gr00t.py", line 1325, in train
    asyncio.run(
  File "/opt/conda/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/root/phosphobot/am/gr00t.py", line 1162, in run_gr00t_training
    raise TimeoutError(
TimeoutError: Training process exceeded timeout of 3600 seconds. Please consider lowering the number of epochs and/or batch size.
```
## Training parameters:
- **Dataset**: [phospho-ai/so100-tictactoe](https://huggingface.co/datasets/phospho-ai/so100-tictactoe)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 49
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
Downtown-Case/ByteDance-Seed_Seed-OSS-36B-Base-woSyn-exl3-3.22bpw-hb6 | Downtown-Case | 2025-08-22T13:58:16Z | 0 | 0 | transformers | ["transformers", "safetensors", "seed_oss", "text-generation", "vllm", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "3-bit", "exl3", "region:us"] | text-generation | 2025-08-22T13:55:36Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- vllm
language:
- en
---
# EXL3
Custom exl3 quantization, with 4bpw attention layers, 3bpw MLP layers, and a 6bpw lm_head.
Good for 128K context in 24GB VRAM.
# Original Model Card:
<div align="center">
👋 Hi, everyone!
<br>
We are <b>ByteDance Seed Team.</b>
</div>
<p align="center">
You can get to know us better through the following channels👇
<br>
<a href="https://seed.bytedance.com/">
<img src="https://img.shields.io/badge/Website-%231e37ff?style=for-the-badge&logo=bytedance&logoColor=white"></a>
</p>

# Seed-OSS Open-Source Models
<p align="center">
<a href="https://github.com/ByteDance-Seed/seed-oss">
<img src="https://img.shields.io/badge/Seed-Project Page-yellow"></a>
<a href="https://github.com/ByteDance-Seed/seed-oss">
<img src="https://img.shields.io/badge/Seed-Tech Report Coming Soon-red"></a>
<a href="https://huggingface.co/ByteDance-Seed">
<img src="https://img.shields.io/badge/Seed-Hugging Face-orange"></a>
<br>
<a href="./LICENSE">
<img src="https://img.shields.io/badge/License-Apache2.0-blue"></a>
</p>
> [!NOTE]
> This model card is dedicated to the `Seed-OSS-36B-Base-woSyn` model.
## News
- [2025/08/20]🔥We release `Seed-OSS-36B-Base` (both with and without synthetic data versions) and `Seed-OSS-36B-Instruct`.
## Introduction
Seed-OSS is a series of open-source large language models developed by ByteDance's Seed Team, designed for strong long-context, reasoning, agentic, and general capabilities, along with versatile, developer-friendly features. Although trained on only 12T tokens, Seed-OSS achieves excellent performance on several popular open benchmarks.
We release this series of models to the open-source community under the Apache-2.0 license.
> [!NOTE]
> Seed-OSS is primarily optimized for international (i18n) use cases.
### Key Features
- **Flexible Control of Thinking Budget**: Allowing users to flexibly adjust the reasoning length as needed. This capability of dynamically controlling the reasoning length enhances inference efficiency in practical application scenarios.
- **Enhanced Reasoning Capability**: Specifically optimized for reasoning tasks while maintaining balanced and excellent general capabilities.
- **Agentic Intelligence**: Performs exceptionally well in agentic tasks such as tool-using and issue resolving.
- **Research-Friendly**: Given that the inclusion of synthetic instruction data in pre-training may affect the post-training research, we released pre-trained models both with and without instruction data, providing the research community with more diverse options.
- **Native Long Context**: Natively trained with context lengths of up to 512K tokens.
### Model Summary
Seed-OSS adopts the popular causal language model architecture with RoPE, GQA attention, RMSNorm and SwiGLU activation.
<div align="center">
| | |
|:---:|:---:|
| | **Seed-OSS-36B** |
| **Parameters** | 36B |
| **Attention** | GQA |
| **Activation Function** | SwiGLU |
| **Number of Layers** | 64 |
| **Number of QKV Heads** | 80 / 8 / 8 |
| **Head Size** | 128 |
| **Hidden Size** | 5120 |
| **Vocabulary Size** | 155K |
| **Context Length** | 512K |
| **RoPE Base Frequency** | 1e7 |
</div>
## Evaluation Results
### Seed-OSS-36B-Base
Incorporating synthetic instruction data into pretraining leads to improved performance on most benchmarks. We adopt the version augmented with synthetic instruction data (i.e., *w/ syn.*) as `Seed-OSS-36B-Base`. We also release `Seed-OSS-36B-Base-woSyn` trained without such data (i.e., *w/o syn.*), offering the community a high-performance foundation model unaffected by synthetic instruction data.
<div align="center">
<table>
<thead>
<tr>
<th align="center">Benchmark</th>
<th align="center"><sup><a href="https://seed.bytedance.com/en/seed1_6">Seed1.6-Base</a></sup></th>
<th align="center"><sup>Qwen3-30B-A3B-Base-2507*</sup></th>
<th align="center"><sup>Qwen2.5-32B-Base*</sup></th>
<th align="center"><sup>Seed-OSS-36B-Base<br>(<i>w/ syn.</i>)</sup></th>
<th align="center"><sup>Seed-OSS-36B-Base-woSyn<br>(<i>w/o syn.</i>)</sup></th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" colspan=6><strong>Knowledge</strong></td>
</tr>
<tr>
<td align="center">MMLU-Pro</td>
<td align="center">70</td>
<td align="center">59.8</td>
<td align="center">58.5 (55.1)</td>
<td align="center"><b>65.1</b></td>
<td align="center">60.4</td>
</tr>
<tr>
<td align="center">MMLU</td>
<td align="center">88.8</td>
<td align="center">82.7</td>
<td align="center">84 (83.3)</td>
<td align="center"><b>84.9</b></td>
<td align="center">84.8</td>
</tr>
<tr>
<td align="center">TriviaQA</td>
<td align="center">91</td>
<td align="center">76.2</td>
<td align="center">76</td>
<td align="center"><b>82.1</b></td>
<td align="center">81.9</td>
</tr>
<tr>
<td align="center">GPQA-D</td>
<td align="center">43.4</td>
<td align="center"><b>37</b></td>
<td align="center">29.3</td>
<td align="center">31.7</td>
<td align="center">35.2</td>
</tr>
<tr>
<td align="center">SimpleQA</td>
<td align="center">17.1</td>
<td align="center">7.2</td>
<td align="center">6.1</td>
<td align="center">5.8</td>
<td align="center"><b>7.4</b></td>
</tr>
<tr>
<td align="center" colspan=6><strong>Reasoning</strong></td>
</tr>
<tr>
<td align="center">BBH</td>
<td align="center">92.1</td>
<td align="center">81.4</td>
<td align="center">79.1 (84.5)</td>
<td align="center"><b>87.7</b></td>
<td align="center">87.2</td>
</tr>
<tr>
<td align="center">AGIEval-en</td>
<td align="center">78</td>
<td align="center">66.4</td>
<td align="center">65.6</td>
<td align="center"><b>70.7</b></td>
<td align="center">70.1</td>
</tr>
<tr>
<td align="center" colspan=6><strong>Math</strong></td>
</tr>
<tr>
<td align="center">GSM8K</td>
<td align="center">93.1</td>
<td align="center">87</td>
<td align="center">87.5 (92.9)</td>
<td align="center"><b>90.8</b></td>
<td align="center">90.3</td>
</tr>
<tr>
<td align="center">MATH</td>
<td align="center">72.9</td>
<td align="center">61.1</td>
<td align="center">63.5 (57.7)</td>
<td align="center"><b>81.7</b></td>
<td align="center">61.3</td>
</tr>
<tr>
<td align="center" colspan=6><strong>Coding</strong></td>
</tr>
<tr>
<td align="center">MBPP</td>
<td align="center">83.6</td>
<td align="center">78.8</td>
<td align="center">77.8 (84.5)</td>
<td align="center"><b>80.6</b></td>
<td align="center">74.6</td>
</tr>
<tr>
<td align="center">HumanEval</td>
<td align="center">78</td>
<td align="center">70.7</td>
<td align="center">47.6 (58.5)</td>
<td align="center"><b>76.8</b></td>
<td align="center">75.6</td>
</tr>
</tbody>
</table>
</div>
<sup>
- <b>Bold</b> denotes open-source SOTA.
</sup><br/><sup>
- "*" indicates that the results in this column are presented in the format of "reproduced_results (reported_results_if_any)".
</sup>
### Seed-OSS-36B-Instruct
<div align="center">
<table>
<thead>
<tr>
<th align="center">Benchmark</th>
<th align="center"><sup><a href="https://console.volcengine.com/ark/region:ark+cn-beijing/model/detail?Id=doubao-seed-1-6-thinking">Seed1.6-Thinking-0715</a></sup></th>
<th align="center"><sup>OAI-OSS-20B*</sup></th>
<th align="center"><sup>Qwen3-30B-A3B-Thinking-2507*</sup></th>
<th align="center"><sup>Qwen3-32B*</sup></th>
<th align="center"><sup>Gemma3-27B</sup></th>
<th align="center"><sup>Seed-OSS-36B-Instruct</sup></th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" colspan=7><strong>Knowledge</strong></td>
</tr>
<tr>
<td align="center">MMLU-Pro</td>
<td align="center">86.6</td>
<td align="center">76.2</td>
<td align="center"><ins>81.9</ins> (80.9)</td>
<td align="center">81.8</td>
<td align="center">67.5</td>
<td align="center"><b>82.7</b></td>
</tr>
<tr>
<td align="center">MMLU</td>
<td align="center">90.6</td>
<td align="center">81.7 (85.3)</td>
<td align="center"><ins>86.9</ins></td>
<td align="center">86.2</td>
<td align="center">76.9</td>
<td align="center"><b>87.4</b></td>
</tr>
<tr>
<td align="center">GPQA-D</td>
<td align="center">80.7</td>
<td align="center"><b>72.2</b> (71.5)</td>
<td align="center"><ins>71.4</ins> (73.4)</td>
<td align="center">66.7 (68.4)</td>
<td align="center">42.4</td>
<td align="center"><ins>71.4</ins></td>
</tr>
<tr>
<td align="center">SuperGPQA</td>
<td align="center">63.4</td>
<td align="center">50.1</td>
<td align="center"><b>57.3</b> (56.8)</td>
<td align="center">49.3</td>
<td align="center">-</td>
<td align="center"><ins>55.7</ins></td>
</tr>
<tr>
<td align="center">SimpleQA</td>
<td align="center">23.7</td>
<td align="center">6.7</td>
<td align="center"><b>23.6</b></td>
<td align="center">8.6</td>
<td align="center"><ins>10</ins></td>
<td align="center">9.7</td>
</tr>
<tr>
<td align="center" colspan=7><strong>Math</strong></td>
</tr>
<tr>
<td align="center">AIME24</td>
<td align="center">90.3</td>
<td align="center"><b>92.7</b> (92.1)</td>
<td align="center">87.7</td>
<td align="center">82.7 (81.4)</td>
<td align="center">-</td>
<td align="center"><ins>91.7</ins></td>
</tr>
<tr>
<td align="center">AIME25</td>
<td align="center">86</td>
<td align="center"><b>90.3</b> (91.7)</td>
<td align="center">81.3 (85)</td>
<td align="center">73.3 (72.9)</td>
<td align="center">-</td>
<td align="center"><ins>84.7</ins></td>
</tr>
<tr>
<td align="center">BeyondAIME</td>
<td align="center">60</td>
<td align="center"><b>69</b></td>
<td align="center">56</td>
<td align="center">29</td>
<td align="center">-</td>
<td align="center"><ins>65</ins></td>
</tr>
<tr>
<td align="center" colspan=7><strong>Reasoning</strong></td>
</tr>
<tr>
<td align="center">ArcAGI V2</td>
<td align="center">50.3</td>
<td align="center"><b>41.7</b></td>
<td align="center">37.8</td>
<td align="center">14.4</td>
<td align="center">-</td>
<td align="center"><ins>40.6</ins></td>
</tr>
<tr>
<td align="center">KORBench</td>
<td align="center">74.8</td>
<td align="center"><b>72.3</b></td>
<td align="center">70.2</td>
<td align="center">65.4</td>
<td align="center">-</td>
<td align="center"><ins>70.6</ins></td>
</tr>
<tr>
<td align="center" colspan=7><strong>Coding</strong></td>
</tr>
<tr>
<td align="center">LiveCodeBench v6<br/><sup>(02/2025-05/2025)</sup></td>
<td align="center">66.8</td>
<td align="center"><ins>63.8</ins></td>
<td align="center">60.3 (66)</td>
<td align="center">53.4</td>
<td align="center">-</td>
<td align="center"><b>67.4</b></td>
</tr>
<tr>
<td align="center">HLE</td>
<td align="center">13.9</td>
<td align="center"><b>12.7</b> (10.9)</td>
<td align="center">8.7</td>
<td align="center">6.9</td>
<td align="center">-</td>
<td align="center"><ins>10.1</ins></td>
</tr>
<tr>
<td align="center" colspan=7><strong>Instruction Following</strong></td>
</tr>
<tr>
<td align="center">IFEval</td>
<td align="center">86.3</td>
<td align="center"><b>92.8</b></td>
<td align="center">88 (88.9)</td>
<td align="center">88.4 (85)</td>
<td align="center"><ins>90.4</ins></td>
<td align="center">85.8</td>
</tr>
<tr>
<td align="center" colspan=7><strong>Agent</strong></td>
</tr>
<tr>
<td align="center">TAU1-Retail</td>
<td align="center">63</td>
<td align="center">(54.8)</td>
<td align="center"><ins>58.7</ins> (67.8)</td>
<td align="center">40.9</td>
<td align="center">-</td>
<td align="center"><b>70.4</b></td>
</tr>
<tr>
<td align="center">TAU1-Airline</td>
<td align="center">49</td>
<td align="center">(38)</td>
<td align="center"><b>47</b> (48)</td>
<td align="center">38</td>
<td align="center">-</td>
<td align="center"><ins>46</ins></td>
</tr>
<tr>
<td align="center">SWE-Bench Verified<br/><sup>(OpenHands)</sup></td>
<td align="center">41.8</td>
<td align="center"><b>(60.7)</b></td>
<td align="center">31</td>
<td align="center">23.4</td>
<td align="center">-</td>
<td align="center"><ins>56</ins></td>
</tr>
<tr>
<td align="center">SWE-Bench Verified<br/><sup>(AgentLess 4*10)</sup></td>
<td align="center">48.4</td>
<td align="center">-</td>
<td align="center">33.5</td>
<td align="center"><ins>39.7</ins></td>
<td align="center">-</td>
<td align="center"><b>47</b></td>
</tr>
<tr>
<td align="center">Multi-SWE-Bench</td>
<td align="center">17.7</td>
<td align="center">-</td>
<td align="center"><ins>9.5</ins></td>
<td align="center">7.7</td>
<td align="center">-</td>
<td align="center"><b>17</b></td>
</tr>
<tr>
<td align="center" colspan=7><strong>Multilingualism</strong></td>
</tr>
<tr>
<td align="center">MMMLU</td>
<td align="center">84.3</td>
<td align="center">77.4 (75.7)</td>
<td align="center"><b>79</b></td>
<td align="center"><b>79</b> (80.6)</td>
<td align="center">-</td>
<td align="center"><ins>78.4</ins></td>
</tr>
<tr>
<td align="center" colspan=7><strong>Long Context</strong></td>
</tr>
<tr>
<td align="center">RULER<br/><sup>(128K)</sup></td>
<td align="center">94.5</td>
<td align="center">78.7</td>
<td align="center"><ins>94.5</ins></td>
<td align="center">77.5</td>
<td align="center">-</td>
<td align="center"><b>94.6</b></td>
</tr>
<tr>
<td align="center" colspan=7><strong>Safety</strong></td>
</tr>
<tr>
<td align="center">AIR-Bench</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">75.6</td>
</tr>
</tbody>
</table>
</div>
<sup>
- <b>Bold</b> denotes open-source SOTA. <ins>Underlined</ins> indicates the second place in the open-source model.
</sup><br/><sup>
- "*" indicates that the results in this column are presented in the format of "reproduced_results (reported_results_if_any)". Some results have been omitted due to the failure of the evaluation run.
</sup><br/><sup>
- The results of Gemma3-27B are sourced directly from its technical report.
</sup><br/><sup>
- Generation configs for Seed-OSS-36B-Instruct: temperature=1.1, top_p=0.95. Specifically, for Taubench, temperature=1, top_p=0.7.
</sup><br/><sup>
</sup>
> [!NOTE]
> We recommend sampling with `temperature=1.1` and `top_p=0.95`.
### Thinking Budget
Users can flexibly specify the model's thinking budget. The figure below shows the performance curves across different tasks as the thinking budget varies. For simpler tasks (such as IFEval), the model's chain of thought (CoT) is shorter, and the score exhibits fluctuations as the thinking budget increases. For more challenging tasks (such as AIME and LiveCodeBench), the model's CoT is longer, and the score improves with an increase in the thinking budget.

Here is an example with a thinking budget set to 512: during the reasoning process, the model periodically triggers self-reflection to estimate the consumed and remaining budget, and delivers the final response once the budget is exhausted or the reasoning concludes.
```
<seed:think>
Got it, let's try to solve this problem step by step. The problem says ... ...
<seed:cot_budget_reflect>I have used 129 tokens, and there are 383 tokens remaining for use.</seed:cot_budget_reflect>
Using the power rule, ... ...
<seed:cot_budget_reflect>I have used 258 tokens, and there are 254 tokens remaining for use.</seed:cot_budget_reflect>
Alternatively, remember that ... ...
<seed:cot_budget_reflect>I have used 393 tokens, and there are 119 tokens remaining for use.</seed:cot_budget_reflect>
Because if ... ...
<seed:cot_budget_reflect>I have exhausted my token budget, and now I will start answering the question.</seed:cot_budget_reflect>
</seed:think>
To solve the problem, we start by using the properties of logarithms to simplify the given equations: (full answer omitted).
```
If no thinking budget is set (default mode), Seed-OSS will initiate thinking with unlimited length. If a thinking budget is specified, users are advised to prioritize values that are integer multiples of 512 (e.g., 512, 1K, 2K, 4K, 8K, or 16K), as the model has been extensively trained on these intervals. Models are instructed to output a direct response when the thinking budget is 0, and we recommend setting any budget below 512 to 0.
## Quick Start
```shell
pip3 install -r requirements.txt
pip install git+ssh://git@github.com/Fazziekey/transformers.git@seed-oss
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import os
import re
model_name_or_path = "ByteDance-Seed/Seed-OSS-36B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto") # You may want to use bfloat16 and/or move to GPU here
messages = [
    {"role": "user", "content": "How to make pasta?"},
]
tokenized_chat = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    thinking_budget=512,  # control the thinking budget
)
outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=2048)
output_text = tokenizer.decode(outputs[0])
```
## Inference
### Download Model
Download Seed-OSS checkpoint to `./Seed-OSS-36B-Instruct`
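One way to do this from Python is with `huggingface_hub` (a minimal sketch; the local directory matches the path the commands below expect):
```python
from huggingface_hub import snapshot_download

# Download the full Seed-OSS-36B-Instruct checkpoint to the expected local path.
snapshot_download(
    repo_id="ByteDance-Seed/Seed-OSS-36B-Instruct",
    local_dir="./Seed-OSS-36B-Instruct",
)
```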
### Transformers
The `generate.py` script provides a simple interface for model inference with configurable options.
#### Basic Usage
```shell
cd inference
python3 generate.py --model_path /path/to/model
```
#### Key Parameters
| Parameter | Description |
|-----------|-------------|
| `--model_path` | Path to the pretrained model directory (required) |
| `--prompts` | Input prompts (default: sample cooking/code questions) |
| `--max_new_tokens` | Maximum tokens to generate (default: 4096) |
| `--attn_implementation` | Attention mechanism: `flash_attention_2` (default) or `eager` |
| `--load_in_4bit/8bit` | Enable 4-bit/8-bit quantization (reduces memory usage) |
| `--thinking_budget` | Thinking budget in tokens (default: -1 for unlimited budget) |
#### Quantization Examples
```shell
# 8-bit quantization
python3 generate.py --model_path /path/to/model --load_in_8bit True
# 4-bit quantization
python3 generate.py --model_path /path/to/model --load_in_4bit True
```
#### Custom Prompts
```shell
python3 generate.py --model_path /path/to/model --prompts "['What is machine learning?', 'Explain quantum computing']"
```
### vLLM
Use vLLM 0.10.0 or higher for inference.
- First, install the vLLM build with Seed-OSS support:
```shell
VLLM_USE_PRECOMPILED=1 VLLM_TEST_USE_PRECOMPILED_NIGHTLY_WHEEL=1 pip install git+ssh://git@github.com/FoolPlayer/vllm.git@seed-oss
```
- Start vLLM API server:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host localhost \
--port 4321 \
--enable-auto-tool-choice \
--tool-call-parser seed_oss \
--trust-remote-code \
--model ./Seed-OSS-36B-Instruct \
--chat-template ./Seed-OSS-36B-Instruct/chat_template.jinja \
--tensor-parallel-size 8 \
--dtype bfloat16 \
--served-model-name seed_oss
```
- Test with OpenAI client:
Chat
```shell
python3 inference/vllm_chat.py
```
Tool Call
```shell
python3 inference/vllm_tool_call.py
```
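Alternatively, the server can be called directly with the OpenAI client; a minimal sketch against the endpoint started above (the port and served model name match the server flags; the API key is a placeholder, since vLLM does not check it by default):
```python
from openai import OpenAI

# Point the client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:4321/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="seed_oss",
    messages=[{"role": "user", "content": "How to make pasta?"}],
)
print(response.choices[0].message.content)
```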
## Model Card
See [MODEL_CARD](./MODEL_CARD.md).
## License
This project is licensed under Apache-2.0. See the [LICENSE](./LICENSE) file for details.
## Citation
```bibtex
@misc{seed2025seed-oss,
author={ByteDance Seed Team},
title={Seed-OSS Open-Source Models},
year={2025},
howpublished={\url{https://github.com/ByteDance-Seed/seed-oss}}
}
```
## About [ByteDance Seed Team](https://seed.bytedance.com/)
Founded in 2023, ByteDance Seed Team is dedicated to crafting the industry's most advanced AI foundation models. The team aspires to become a world-class research team and make significant contributions to the advancement of science and society.
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755869335 | vwzyrraz7l | 2025-08-22T13:56:26Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall hunting vulture", "arxiv:2504.07091", "region:us"] | null | 2025-08-22T13:56:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
g-assismoraes/Qwen3-4B-Base-aki-alpha0.08-var-imdb | g-assismoraes | 2025-08-22T13:56:20Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-22T13:41:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nightmedia/QiMing-Holos-Plus-4B-qx6-mlx | nightmedia | 2025-08-22T13:55:31Z | 0 | 0 | mlx | ["mlx", "safetensors", "qwen3", "qwen", "unsloth", "qiming", "qiming-holos", "bagua", "decision-making", "strategic-analysis", "cognitive-architecture", "chat", "lora", "philosophy-driven-ai", "text-generation", "conversational", "zh", "en", "base_model:aifeifei798/QiMing-Holos-Plus-4B", "base_model:adapter:aifeifei798/QiMing-Holos-Plus-4B", "license:apache-2.0", "6-bit", "region:us"] | text-generation | 2025-08-22T13:40:27Z |
---
license: apache-2.0
language:
- zh
- en
tags:
- qwen
- qwen3
- unsloth
- qiming
- qiming-holos
- bagua
- decision-making
- strategic-analysis
- cognitive-architecture
- chat
- lora
- philosophy-driven-ai
- mlx
pipeline_tag: text-generation
library_name: mlx
base_model: aifeifei798/QiMing-Holos-Plus-4B
---
# QiMing-Holos-Plus-4B-qx6-mlx
This model [QiMing-Holos-Plus-4B-qx6-mlx](https://huggingface.co/nightmedia/QiMing-Holos-Plus-4B-qx6-mlx) was
converted to MLX format from [aifeifei798/QiMing-Holos-Plus-4B](https://huggingface.co/aifeifei798/QiMing-Holos-Plus-4B)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("QiMing-Holos-Plus-4B-qx6-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755869295 | calegpedia | 2025-08-22T13:55:07Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stealthy slimy rooster", "arxiv:2504.07091", "region:us"] | null | 2025-08-22T13:55:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lqpl/blockassist-bc-hairy_insectivorous_antelope_1755870746 | lqpl | 2025-08-22T13:55:04Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hairy insectivorous antelope", "arxiv:2504.07091", "region:us"] | null | 2025-08-22T13:53:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rvipitkirubbe/blockassist-bc-mottled_foraging_ape_1755868310 | rvipitkirubbe | 2025-08-22T13:55:01Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mottled foraging ape", "arxiv:2504.07091", "region:us"] | null | 2025-08-22T13:54:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mottled foraging ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dimasbetioli/test | dimasbetioli | 2025-08-22T13:53:50Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-08-22T13:53:50Z |
---
license: apache-2.0
---
|
prxshetty/first_qa_model | prxshetty | 2025-08-22T13:52:25Z | 0 | 0 | transformers | ["transformers", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2025-08-22T13:24:39Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: first_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# first_qa_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6509
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 1.7716 |
| 2.0143 | 2.0 | 500 | 2.5486 |
| 2.0143 | 3.0 | 750 | 2.6509 |
### Framework versions
- Transformers 4.55.3
- Pytorch 2.8.0
- Datasets 4.0.0
- Tokenizers 0.21.4
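Since the card includes no usage snippet, here is a minimal hedged sketch for trying the model with the 🤗 `pipeline` API (the question/context pair is illustrative):
```python
from transformers import pipeline

# DistilBERT extractive QA fine-tune; returns an answer span from the context.
qa = pipeline("question-answering", model="prxshetty/first_qa_model")

result = qa(
    question="What is the capital of France?",
    context="Paris is the capital and largest city of France.",
)
print(result["answer"], result["score"])
```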
|
wATCH-Hadeer-Abdelrazik-video-Clips/full.videos.hadeer.abdel-razik.Viral.Video.Official.Tutorial | wATCH-Hadeer-Abdelrazik-video-Clips | 2025-08-22T13:52:04Z | 0 | 0 | null | ["region:us"] | null | 2025-08-22T13:51:08Z |
|
VoilaRaj/81_d_V5Vy2U | VoilaRaj | 2025-08-22T13:50:51Z | 0 | 0 | null | ["safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us"] | any-to-any | 2025-08-22T13:47:05Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
DreadPoor/Primeval-TEST_2 | DreadPoor | 2025-08-22T13:49:36Z | 0 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-22T06:28:10Z |
---
library_name: transformers
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
---
# Primeval-TEST_2
Primeval-TEST_2 is a merge of the models listed in the configuration below, made with [mergekit](https://github.com/cg123/mergekit):
## 🧩 Configuration
```yaml
models:
- model: DreadPoor/Irix-12B-Model_Stock
- model: SicariusSicariiStuff/Impish_Nemo_12B
merge_method: model_stock
base_model: unsloth/Mistral-Nemo-Base-2407+nbeerbower/Mistral-Nemo-12B-abliterated-LORA
normalize: true
int8_mask: true
dtype: bfloat16
```
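To reproduce a merge like this, the configuration above is typically passed to mergekit's CLI. A sketch, assuming mergekit is installed and the YAML is saved as `config.yaml` (the output directory name is illustrative):
```python
import subprocess

# Run mergekit on the YAML config above; writes the merged model to ./Primeval-TEST_2.
subprocess.run(
    ["mergekit-yaml", "config.yaml", "./Primeval-TEST_2"],
    check=True,
)
```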
|
MILICA-Y-ANGEL-DAVID-VIDEO/FULL.ORIGINAL.MILICA.Y.ANGEL.DAVID.VIDEO.NEW.LINK | MILICA-Y-ANGEL-DAVID-VIDEO | 2025-08-22T13:48:49Z | 0 | 0 | null | ["region:us"] | null | 2025-08-22T13:44:09Z |
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755868870 | quantumxnode | 2025-08-22T13:47:16Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "dormant peckish seahorse", "arxiv:2504.07091", "region:us"] | null | 2025-08-22T13:47:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
upvantage/oss-h-from-google-merged | upvantage | 2025-08-22T13:46:54Z | 0 | 0 | transformers | ["transformers", "safetensors", "gpt_oss", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-08-22T13:40:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Koberocks156/blockassist-bc-scruffy_monstrous_swan_1755868501 | Koberocks156 | 2025-08-22T13:44:53Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scruffy monstrous swan", "arxiv:2504.07091", "region:us"] | null | 2025-08-22T13:44:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy monstrous swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Medved444/blockassist-bc-bellowing_finicky_manatee_1755869048 | Medved444 | 2025-08-22T13:44:51Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bellowing finicky manatee", "arxiv:2504.07091", "region:us"] | null | 2025-08-22T13:44:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bellowing finicky manatee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
anwensmythadv/blockassist-bc-pawing_stocky_walrus_1755868477 | anwensmythadv | 2025-08-22T13:44:33Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pawing stocky walrus", "arxiv:2504.07091", "region:us"] | null | 2025-08-22T13:44:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pawing stocky walrus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cxstam/distilbert-base-uncased-finetuned-imdb | cxstam | 2025-08-22T13:43:53Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "distilbert", "fill-mask", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2025-08-22T12:44:11Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4892
- Model Preparation Time: 0.0016
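A minimal fill-mask usage sketch for this checkpoint (standard 🤗 `pipeline` usage; the example sentence is only illustrative):
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask", model="cxstam/distilbert-base-uncased-finetuned-imdb"
)
# Print the top predictions for the masked token.
for pred in fill_mask("This movie was an absolute [MASK]."):
    print(f"{pred['token_str']}: {pred['score']:.3f}")
```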
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|
| 2.6814 | 1.0 | 157 | 2.4929 | 0.0016 |
| 2.5825 | 2.0 | 314 | 2.4480 | 0.0016 |
| 2.5258 | 3.0 | 471 | 2.4823 | 0.0016 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
tammycra121/blockassist-bc-marine_rangy_eel_1755868488
|
tammycra121
| 2025-08-22T13:42:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"marine rangy eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T13:42:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- marine rangy eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VoilaRaj/81_d_gzMRc6
|
VoilaRaj
| 2025-08-22T13:42:21Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-22T13:38:29Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
aislingmcintosh/blockassist-bc-pale_masked_salmon_1755868432
|
aislingmcintosh
| 2025-08-22T13:42:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pale masked salmon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T13:42:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pale masked salmon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
geobase/gghl-oriented-object-detection
|
geobase
| 2025-08-22T13:41:35Z | 142 | 0 | null |
[
"onnx",
"geospatial",
"geobase",
"object-detection",
"oriented-object-detection",
"arxiv:2109.12848",
"license:gpl-3.0",
"region:us"
] |
object-detection
| 2025-03-12T11:20:45Z |
---
tags:
- geospatial
- geobase
- object-detection
- oriented-object-detection
license: gpl-3.0
---
| <img src="https://upload.wikimedia.org/wikipedia/commons/6/6a/JavaScript-logo.png" width="28" height="28"> | [GeoAi](https://www.npmjs.com/package/geoai) |
|---|---|
> `task = oriented-object-detection`
### 🛠 Model Purpose
This model is part of the **[GeoAi](https://github.com/decision-labs/geoai.js)** javascript library.
**GeoAi** enables geospatial AI inference **directly in the browser or Node.js** without requiring a heavy backend.
**GeoAi** pipeline accepts **geospatial polygons** as input (in GeoJSON format) and outputs results as a **GeoJSON FeatureCollection**, ready for use with libraries like **Leaflet** and **Mapbox GL**.
<video controls autoplay loop width="1024" height="720" src="https://geobase-docs.s3.amazonaws.com/geobase-ai-assets/oriented-object-detection.mp4"></video>
---
### 🚀 Demo
Explore the model in action with the interactive [Demo](https://docs.geobase.app/geoai-live/tasks/oriented-object-detection).
### 📦 Model Information
- **Architecture**: [GGHL](https://arxiv.org/pdf/2109.12848)
- **Source Model**: https://github.com/Shank2358/GGHL
- **Quantization**: Yes
---
### 💡 Example Usage
```javascript
import { geoai } from "geoai";
export const ESRI_CONFIG = {
provider: "esri" as const,
serviceUrl: "https://server.arcgisonline.com/ArcGIS/rest/services",
serviceName: "World_Imagery",
tileSize: 256,
attribution: "ESRI World Imagery",
};
// Example polygon (GeoJSON)
const polygon = {
type: "Feature",
properties: {},
geometry: {
coordinates: [
[
[114.9750523641963, -3.4852698831433315],
[114.97650060618412, -3.4852559566003265],
[114.97644200679741, -3.487269732522151],
[114.97511654447737, -3.4871973146522492],
[114.9750523641963, -3.4852698831433315]
],
],
type: "Polygon",
},
} as GeoJSON.Feature;
// Initialize pipeline
const pipeline = await geoai.pipeline(
[{ task: "oriented-object-detection" }],
  ESRI_CONFIG  // providerParams: the map provider config defined above
);
// Run detection
const result = await pipeline.inference({
inputs: { polygon }
});
// Sample output format
// {
// "detections": {
// "type": "FeatureCollection",
// "features": [
// {
// "type": "Feature",
// "properties": {
// "class_name": "bridge"
// "score": 0.6177883799306727
// },
// "geometry": {
// "type": "Polygon",
// "coordinates": [
// [
// [114.97610299609376, -3.4866272343749998],
// [114.97612444921876, -3.4866915],
// [114.97552912500001, -3.48678254296875],
// [114.97550767187501, -3.486712921875],
// [114.97610299609376, -3.4866272343749998]
// ]
// ]
// }
// },
// {"type": 'Feature', "properties": {…}, "geometry": {…}},
// {"type": 'Feature', "properties": {…}, "geometry": {…}},
// ]
// },
// "geoRawImage": GeoRawImage {data: Uint8ClampedArray(1048576), width: 512, height: 512, channels: 4, bounds: {…}, …}
// }
```
### 📖 Documentation & Demo
- GeoBase Docs: https://docs.geobase.app/geoai
- NPM Package: https://www.npmjs.com/package/geoai
- Demo Playground: https://docs.geobase.app/geoai-live/tasks/oriented-object-detection
- GitHub Repo: https://github.com/decision-labs/geoai.js
|
nightmedia/QiMing-Holos-Plus-4B-qx6-hi-mlx
|
nightmedia
| 2025-08-22T13:40:13Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"qwen",
"unsloth",
"qiming",
"qiming-holos",
"bagua",
"decision-making",
"strategic-analysis",
"cognitive-architecture",
"chat",
"lora",
"philosophy-driven-ai",
"text-generation",
"conversational",
"zh",
"en",
"base_model:aifeifei798/QiMing-Holos-Plus-4B",
"base_model:adapter:aifeifei798/QiMing-Holos-Plus-4B",
"license:apache-2.0",
"6-bit",
"region:us"
] |
text-generation
| 2025-08-22T13:24:58Z |
---
license: apache-2.0
language:
- zh
- en
tags:
- qwen
- qwen3
- unsloth
- qiming
- qiming-holos
- bagua
- decision-making
- strategic-analysis
- cognitive-architecture
- chat
- lora
- philosophy-driven-ai
- mlx
pipeline_tag: text-generation
base_model: aifeifei798/QiMing-Holos-Plus-4B
library_name: mlx
---
# QiMing-Holos-Plus-4B-qx6-hi-mlx
This model [QiMing-Holos-Plus-4B-qx6-hi-mlx](https://huggingface.co/nightmedia/QiMing-Holos-Plus-4B-qx6-hi-mlx) was
converted to MLX format from [aifeifei798/QiMing-Holos-Plus-4B](https://huggingface.co/aifeifei798/QiMing-Holos-Plus-4B)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/QiMing-Holos-Plus-4B-qx6-hi-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
elleshavff/blockassist-bc-horned_energetic_parrot_1755868409
|
elleshavff
| 2025-08-22T13:40:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"horned energetic parrot",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T13:39:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- horned energetic parrot
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koloni/blockassist-bc-deadly_graceful_stingray_1755868335
|
koloni
| 2025-08-22T13:39:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T13:39:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
geobase/oil-storage-tank-detection
|
geobase
| 2025-08-22T13:38:47Z | 157 | 1 | null |
[
"onnx",
"geospatial",
"geobase",
"oil-storage-tank-detection",
"yolox",
"license:apache-2.0",
"region:us"
] | null | 2025-04-15T04:39:53Z |
---
tags:
- geospatial
- geobase
- oil-storage-tank-detection
- yolox
license: apache-2.0
---
| <img src="https://upload.wikimedia.org/wikipedia/commons/6/6a/JavaScript-logo.png" width="28" height="28"> | [GeoAi](https://www.npmjs.com/package/geoai) |
|---|---|
> `task = oil-storage-tank-detection`
### 🛠 Model Purpose
This model is part of the **[GeoAi](https://github.com/decision-labs/geoai.js)** javascript library.
**GeoAi** enables geospatial AI inference **directly in the browser or Node.js** without requiring a heavy backend.
**GeoAi** pipeline accepts **geospatial polygons** as input (in GeoJSON format) and outputs results as a **GeoJSON FeatureCollection**, ready for use with libraries like **Leaflet** and **Mapbox GL**.
<video controls autoplay loop width="1024" height="720" src="https://geobase-docs.s3.amazonaws.com/geobase-ai-assets/oil-storage-tank-detection.mp4"></video>
---
### 🚀 Demo
Explore the model in action with the interactive [Demo](https://docs.geobase.app/geoai-live/tasks/oil-storage-tank-detection).
### 📦 Model Information
- **Architecture**: YOLOX
- **Source Model**: See the python notebook file in the repository or at [kaggle](https://www.kaggle.com/code/jeffaudi/oil-storage-detection-on-airbus-imagery-with-yolox)
- **Quantization**: Yes
---
### 💡 Example Usage
```javascript
import { geoai } from "geoai";
// Example polygon (GeoJSON)
const polygon = {
type: "Feature",
properties: {},
geometry: {
coordinates: [
[
[54.68328454841432, 24.762795008216074],
[54.684149555501506, 24.756239186864462],
[54.69506195259541, 24.755710476520136],
[54.694196945508224, 24.76320284742259],
[54.68328454841432, 24.762795008216074],
],
],
type: "Polygon",
},
} as GeoJSON.Feature;
// Initialize pipeline (providerParams is your map provider configuration,
// e.g. an ESRI World Imagery config as shown in the other GeoAi model cards)
const pipeline = await geoai.pipeline(
[{ task: "oil-storage-tank-detection" }],
providerParams
);
// Run detection
const result = await pipeline.inference({
inputs: { polygon }
});
// Sample output format
// {
// "detections": {
// "type": "FeatureCollection",
// "features": [
// {
// "type": "Feature",
// "properties": {
// "confidence": 0.8438083529472351
// },
// "geometry": {
// "type": "Polygon",
// "coordinates": [
// [
// [54.69479163045772, 24.766579711184693],
// [54.69521093930892, 24.766579711184693],
// [54.69521093930892, 24.766203991224682],
// [54.69479163045772, 24.766203991224682],
// [54.69479163045772, 24.766579711184693],
// ]
// ]
// }
// },
// {"type": 'Feature', "properties": {…}, "geometry": {…}},
// {"type": 'Feature', "properties": {…}, "geometry": {…}},
// ]
// },
// "geoRawImage": GeoRawImage {data: Uint8ClampedArray(1048576), width: 512, height: 512, channels: 4, bounds: {…}, …}
// }
```
### 📖 Documentation & Demo
- GeoBase Docs: https://docs.geobase.app/geoai
- NPM Package: https://www.npmjs.com/package/geoai
- Demo Playground: https://docs.geobase.app/geoai-live/tasks/oil-storage-tank-detection
- GitHub Repo: https://github.com/decision-labs/geoai.js
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-v2_1641
|
luckeciano
| 2025-08-22T13:38:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-22T10:47:00Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-v2_1641
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-v2_1641
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-v2_1641", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/c8mq8lss)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Yiming-M/ZIP-N
|
Yiming-M
| 2025-08-22T13:38:00Z | 0 | 0 | null |
[
"arxiv:2506.19955",
"license:mit",
"model-index",
"region:us"
] | null | 2025-07-31T16:07:31Z |
---
license: mit
model-index:
- name: ZIP-N
results:
- task:
type: crowd_counting
dataset:
name: ShanghaiTech A
type: shanghaitech_a
metrics:
- name: MAE
type: MAE
value: 58.86
- name: RMSE
type: RMSE
value: 94.63
- name: NAE
type: NAE
value: 0.1415
- task:
type: crowd_counting
dataset:
name: ShanghaiTech B
type: shanghaitech_b
metrics:
- name: MAE
type: MAE
value: 7.74
- name: RMSE
type: RMSE
value: 12.14
- name: NAE
type: NAE
value: 0.0633
- task:
type: crowd_counting
dataset:
name: UCF-QNRF
type: ucf_qnrf
metrics:
- name: MAE
type: MAE
value: 86.46
- name: RMSE
type: RMSE
value: 147.64
- name: NAE
type: NAE
value: 0.1260
- task:
type: crowd_counting
dataset:
name: NWPU-Crowd-Val
type: nwpu_crowd_val
metrics:
- name: MAE
type: MAE
value: 56.27
- name: RMSE
type: RMSE
value: 292.53
- name: NAE
type: NAE
value: 0.1406
- task:
type: crowd_counting
dataset:
name: NWPU-Crowd-Test
type: nwpu_crowd_test
metrics:
- name: MAE
type: MAE
value: 75.03
- name: RMSE
type: RMSE
value: 334.54
- name: NAE
type: NAE
value: 0.1759
---
# ZIP: Scalable Crowd Counting via Zero-Inflated Poisson Modeling
<!--
[](https://paperswithcode.com/sota/crowd-counting-on-shanghaitech-a?p=ebc-zip-improving-blockwise-crowd-counting)
[](https://paperswithcode.com/sota/crowd-counting-on-shanghaitech-b?p=ebc-zip-improving-blockwise-crowd-counting)
[](https://paperswithcode.com/sota/crowd-counting-on-ucf-qnrf?p=ebc-zip-improving-blockwise-crowd-counting)
[](https://paperswithcode.com/sota/crowd-counting-on-nwpu-crowd-val?p=ebc-zip-improving-blockwise-crowd-counting) -->
<a href="https://www.buymeacoffee.com/yiming.ma" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy Me A Coffee" height="41" width="174"></a>
The official implementation of the paper [*ZIP: Scalable Crowd Counting via Zero-Inflated Poisson Modeling*](https://arxiv.org/pdf/2506.19955).
## Results
| **Variants** | **Size (M)** | **GFLOPS (on HD)** | **SHA (MAE)** | **SHA (RMSE)** | **SHA (NAE, %)** | **SHB (MAE)** | **SHB (RMSE)** | **SHB (NAE, %)** | **QNRF (MAE)** | **QNRF (RMSE)** | **QNRF (NAE, %)** | **NWPU-Val (MAE)** | **NWPU-Val (RMSE)** | **NWPU-Val (NAE, %)** | **NWPU-Test (MAE)** | **NWPU-Test (RMSE)** | **NWPU-Test (NAE, %)** |
| ------------ | ------------ | ------------------ | ------------- | -------------- | ---------------- | ------------- | -------------- | ---------------- | -------------- | --------------- | ----------------- | ------------------ | ------------------- | --------------------- | ------------------- | -------------------- | ---------------------- |
| -P (Pico) | 0.81 | 6.46 | 71.18 | 109.60 | 16.69 | 8.23 | 12.62 | 6.98 | 96.29 | 161.82 | 14.40 | 66.94 | 223.52 | 14.97 | 79.91 | 327.17 | 21.42 |
| -N (Nano) | 3.36 | 24.73 | 58.86 | 94.63 | 14.15 | 7.74 | 12.14 | 6.33 | 86.46 | 147.64 | 12.60 | 56.27 | 292.53 | 14.06 | 75.03 | 334.54 | 17.59 |
| -T (Tiny) | 10.53 | 61.39 | 56.36 | 86.09 | 13.26 | 6.67 | 9.90 | 5.52 | 76.02 | 129.40 | 11.10 | 46.74 | 145.32 | 11.31 | 66.43 | 323.27 | 13.45 |
| -S (Small) | 33.60 | 242.43 | 55.17 | 88.99 | 11.97 | 5.83 | 9.21 | 4.58 | 73.32 | 125.09 | 10.40 | 31.66 | 77.11 | 9.61 | 62.89 | 309.02 | 12.09 |
| -B (Base) | 105.60 | 800.99 | 47.81 | 75.04 | 11.06 | 5.51 | 8.63 | 4.48 | 69.46 | 121.88 | 10.18 | 28.26 | 64.84 | 9.20 | 60.09 | 298.95 | 10.44 |
## Step 1: Install Dependencies
```bash
pip install -r requirements.txt
```
## Step 2: Download Processed Datasets
- **ShanghaiTech A**: [sha.zip](https://github.com/Yiming-M/EBC-ZIP/releases/download/dataset/sha.zip)
- **ShanghaiTech B**: [shb.zip](https://github.com/Yiming-M/EBC-ZIP/releases/download/dataset/shb.zip)
- **UCF-QNRF**: [qnrf.zip](https://github.com/Yiming-M/EBC-ZIP/releases/download/dataset/qnrf.zip), [qnrf.z01](https://github.com/Yiming-M/EBC-ZIP/releases/download/dataset/qnrf.z01)
- **NWPU-Crowd**: [nwpu.zip](https://github.com/Yiming-M/EBC-ZIP/releases/download/dataset/nwpu.zip), [nwpu.z01](https://github.com/Yiming-M/EBC-ZIP/releases/download/dataset/nwpu.z01), [nwpu.z02](https://github.com/Yiming-M/EBC-ZIP/releases/download/dataset/nwpu.z02), [nwpu.z03](https://github.com/Yiming-M/EBC-ZIP/releases/download/dataset/nwpu.z03), [nwpu.z04](https://github.com/Yiming-M/EBC-ZIP/releases/download/dataset/nwpu.z04), [nwpu.z05](https://github.com/Yiming-M/EBC-ZIP/releases/download/dataset/nwpu.z05), [nwpu.z06](https://github.com/Yiming-M/EBC-ZIP/releases/download/dataset/nwpu.z06), [nwpu.z07](https://github.com/Yiming-M/EBC-ZIP/releases/download/dataset/nwpu.z07), [nwpu.z08](https://github.com/Yiming-M/EBC-ZIP/releases/download/dataset/nwpu.z08)
To unzip split `.zip` archives, 7-Zip is recommended. You can install 7-Zip and extract the dataset with:
```bash
sudo apt update
sudo apt install p7zip-full
7z x dataset.zip
```
## Step 3: Run Training
Add the training code to `run.sh` and execute it:
```bash
sh run.sh
```
If you want to use the zero-inflated loss, set either `--reg_loss` or `--aux_loss` to `zipnll`. For example, you can set `--reg_loss zipnll` to use the zero-inflated loss for regression.
You can use an auxiliary loss to improve performance. For example, you might want to use the pre-defined multi-scale MAE loss by setting `--aux_loss msmae` and `--scales 1 2 4`.
The DMCount loss can also be used together with the zero-inflated loss. For example, you can set `--reg_loss zipnll --aux_loss dmcount` to use both losses.
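A minimal sketch of what a training line in `run.sh` might look like (the entry-point name `main.py` and the `--dataset` flag are assumptions for illustration; check the repository for the actual script and arguments):
```bash
# Hypothetical invocation: zero-inflated Poisson regression loss plus the
# multi-scale MAE auxiliary loss at scales 1, 2 and 4.
python main.py \
  --dataset sha \
  --reg_loss zipnll \
  --aux_loss msmae \
  --scales 1 2 4
```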
## Step 4: Test the Model
Use `test.py` or `test.sh` to test the model. You can specify the dataset, weight path, input size, and other parameters.
To generate the predicted counts on NWPU-Crowd Test, you need to use `test_nwpu.py` instead.
To visualize the results, use the `notebooks/model.ipynb` notebook.
Trained weights are also provided:
- [**ShanghaiTech A**](https://github.com/Yiming-M/EBC-ZIP/releases/tag/weights_sha)
- [**ShanghaiTech B**](https://github.com/Yiming-M/EBC-ZIP/releases/tag/weights_shb)
- [**UCF-QNRF**](https://github.com/Yiming-M/EBC-ZIP/releases/tag/weights_qnrf)
- [**NWPU-Crowd**](https://github.com/Yiming-M/EBC-ZIP/releases/tag/weights_nwpu)
Make sure to use the processed datasets and the exact commands pre-defined in `test.sh` to reproduce the same results.
## Step 5: Visualize the Results
Use the `notebooks/model.ipynb` notebook to visualize the results.
|
nimalan/MyGemmaNPC
|
nimalan
| 2025-08-22T13:38:00Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-22T13:34:08Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nimalan/MyGemmaNPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
abullard1/germeval2025-vio-moderngbert-cw_and_focal
|
abullard1
| 2025-08-22T13:37:48Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"text-classification",
"GermEval",
"2025",
"HarmfulContentDetection",
"HateSpeech",
"ClassImbalance",
"ClassWeightedLoss",
"FocalLoss",
"German",
"SocialMedia",
"Twitter",
"University-of-Regensburg",
"ModernGBERT",
"ViolenceDetection",
"de",
"dataset:abullard1/germeval-2025-harmful-content-detection-training-dataset",
"base_model:LSX-UniWue/ModernGBERT_134M",
"base_model:finetune:LSX-UniWue/ModernGBERT_134M",
"license:openrail",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-15T14:51:57Z |
---
license: openrail
license_link: https://huggingface.co/LSX-UniWue/ModernGBERT_134M/blob/main/license.md
datasets:
- abullard1/germeval-2025-harmful-content-detection-training-dataset
language:
- de
base_model: LSX-UniWue/ModernGBERT_134M
pipeline_tag: text-classification
library_name: transformers
spaces: abullard1/abullardUR_GermEval2025_Submission_ModelZoo
tags:
- GermEval
- '2025'
- HarmfulContentDetection
- HateSpeech
- ClassImbalance
- ClassWeightedLoss
- FocalLoss
- German
- SocialMedia
- Twitter
- University-of-Regensburg
- ModernGBERT
- ViolenceDetection
developers:
- Samuel Ruairí Bullard
metrics:
- accuracy
- precision
- recall
- f1
inference: true
widget:
- text: "Das Wetter ist heute sehr schön und sonnig."
example_title: "No Violence"
- text: "Ich bin sehr wütend auf diese Situation!"
example_title: "Anger"
- text: "Diese Leute verdienen eine harte Strafe für ihre Taten!"
example_title: "Potential Violence"
- text: "Man sollte sie alle vernichten und eliminieren!"
example_title: "Violence"
model-index:
- name: germeval2025-vio-moderngbert-cw_and_focal
results:
- task:
type: text-classification
name: Violence Detection
dataset:
name: GermEval 2025 Shared Task
type: GermEval-2025-SharedTask
metrics:
- name: Macro-F1
type: f1
value: 0.81
---
<br>
<br>
<div style="text-align: center;">
<img src="https://i.ibb.co/RkR4QLpL/Shared-Task-Logo-Final-11zon.png" style="max-width: 30%; display: block; margin: 0 auto;">
</div>
<br>
<br>
<br>
<div style="text-align: center;">
<h1>🏆 GermEval 2025: Violence Detection (Class-Weighted + Focal Loss)</h1>
<a href="https://github.com/abullard1/abullardUR-GermEval-Shared-Task-2025">
<p><strong>abullardUR@GermEval Shared Task 2025 Submission</strong></p>
</a>
</div>
<hr>
## 🎯 Model Summary
This model is a fine-tuned version of **[LSX-UniWue/ModernGBERT_134M](https://huggingface.co/LSX-UniWue/ModernGBERT_134M)**, specifically designed for **Violence Detection (VIO)** in German social media content. It was developed as part of the [GermEval 2025 Shared Task](https://www.codabench.org/competitions/4963/) on Harmful Content Detection.
### 🏅 Competition Performance
- **Final Ranking**: 2nd out of 8 teams 🥈
- **Primary Metric (Macro-F1)**: 0.81 (+17% over official baseline (0.69 → 0.81))
- **Approach**: Combined class-weighted cross-entropy + focal loss for severe imbalance handling
### 📊 Task Details
- **Task Type**: Binary classification
- **Classes**: `False` (no violence), `True` (violence-related content detected)
- **Domain**: German social media (Twitter, 2014-2016)
- **Data Source**: Right-wing extremist network posts
## ⚠️ Limitations and Bias
### Known Limitations
- **Domain Specificity**: Trained on 2014-2016 German Twitter data from right-wing extremist networks
- **Temporal Bias**: Language patterns may not reflect contemporary usage
- **Class Imbalance**: 92.8% non-violent vs 7.2% violent content (12.8:1 ratio)
- **Cultural Context**: May not generalize to other German-speaking regions or contexts
- **Hard Example Focus**: Focal loss may down-weight some important minority examples
### Ethical Considerations
- Model trained on potentially harmful content for research purposes only
- Should not be used to amplify or generate violent content
- Requires careful handling due to sensitive training data
## 🚀 How to Use
### Quick Start
```python
from transformers import AutoProcessor, AutoModelForSequenceClassification
# Load model and processor
model_id = "abullard1/germeval2025-vio-moderngbert-cw_and_focal"
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(model_id, trust_remote_code=True).eval()
# Run inference
text = "Diese Situation macht mich wütend!"
inputs = processor(text, return_tensors="pt", truncation=True)
probs = model(**inputs).logits.softmax(-1).detach().cpu().numpy()
print(f"Predictions: {probs}")
```
### Class Labels
- **Label 0**: No violence detected
- **Label 1**: Violence-related content detected
## 📈 Training Details
### Training Data
- **Source**: GermEval 2025 Shared Task VIO dataset
- **Size**: 7,783 samples
- **Split**: 80% training (6,226), 20% validation (1,557)
- **Class Distribution**: 92.8% non-violent, 7.2% violent
### Training Procedure
- **Base Model**: ModernGBERT-134M (8192 token context)
- **Architecture**: Mean-pooling classification head
- **Loss Function**: Class-weighted cross-entropy + Focal Loss
- **Optimizer**: AdamW with linear scheduling
- **Early Stopping**: Patience of 5 epochs on validation Macro-F1
### Hyperparameters
- **Learning Rate**: 3e-5
- **Weight Decay**: 0.0811
- **Batch Size**: 16/32 (train/eval)
- **Epochs**: 3
- **Warmup Steps**: 100
- **Focal Loss Gamma**: 0.519
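As a rough illustration of the loss used above, here is a minimal PyTorch sketch combining inverse-frequency class weights with a focal term (an assumption-laden sketch, not the exact training code; how the two terms are combined and how the weights are normalized are illustrative choices):
```python
import torch
import torch.nn.functional as F

def weighted_focal_loss(logits, targets, class_weights, gamma=0.519):
    """Class-weighted cross-entropy with a focal modulation term."""
    # Per-example class-weighted cross-entropy (inverse-frequency weights).
    ce = F.cross_entropy(logits, targets, weight=class_weights, reduction="none")
    # p_t: the model's probability for the true class (from unweighted CE).
    pt = torch.exp(-F.cross_entropy(logits, targets, reduction="none"))
    # The focal term down-weights easy examples where p_t is already high.
    return (((1.0 - pt) ** gamma) * ce).mean()

# Toy usage for the binary VIO task (class weights are illustrative).
class_weights = torch.tensor([1.0, 12.8])   # non-violent vs violent, ~12.8:1
logits = torch.randn(16, 2)
targets = torch.randint(0, 2, (16,))
loss = weighted_focal_loss(logits, targets, class_weights)
```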
## 📚 Citation
```bibtex
@inproceedings{bullard2025germeval,
title = {abullardUR@GermEval Shared Task 2025: Fine-tuning ModernGBERT on Highly Imbalanced German Social Media for Harmful Content Detection},
author = {Bullard, Samuel},
year = {2025},
booktitle = {Proceedings of KONVENS 2025 Workshops}
}
```
## 🙏 Acknowledgments
- **GermEval 2025 Organizers**: University of Stuttgart and University of Mannheim
- **Prof. Dr. Udo Kruschwitz** (University of Regensburg) for supervision
- **ModernGBERT Team**: LSX-UniWue for the ModernGBERT-134M German language base-model
## 📄 License
This model inherits the **Research-only RAIL-M license** from ModernGBERT. See [license details](https://huggingface.co/LSX-UniWue/ModernGBERT_134M/blob/main/license.md).
|
abullard1/germeval2025-dbo-moderngbert-cw_and_focal
|
abullard1
| 2025-08-22T13:37:42Z | 18 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"text-classification",
"GermEval",
"2025",
"HarmfulContentDetection",
"HateSpeech",
"ClassImbalance",
"ClassWeightedLoss",
"FocalLoss",
"German",
"SocialMedia",
"Twitter",
"University-of-Regensburg",
"ModernGBERT",
"DemocraticBasicOrder",
"MultiClass",
"de",
"dataset:abullard1/germeval-2025-harmful-content-detection-training-dataset",
"base_model:LSX-UniWue/ModernGBERT_134M",
"base_model:finetune:LSX-UniWue/ModernGBERT_134M",
"license:openrail",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-15T14:51:38Z |
---
license: openrail
license_link: https://huggingface.co/LSX-UniWue/ModernGBERT_134M/blob/main/license.md
datasets:
- abullard1/germeval-2025-harmful-content-detection-training-dataset
language:
- de
base_model: LSX-UniWue/ModernGBERT_134M
pipeline_tag: text-classification
library_name: transformers
spaces: abullard1/abullardUR_GermEval2025_Submission_ModelZoo
tags:
- GermEval
- '2025'
- HarmfulContentDetection
- HateSpeech
- ClassImbalance
- ClassWeightedLoss
- FocalLoss
- German
- SocialMedia
- Twitter
- University-of-Regensburg
- ModernGBERT
- DemocraticBasicOrder
- MultiClass
developers:
- Samuel Ruairí Bullard
metrics:
- accuracy
- precision
- recall
- f1
inference: true
widget:
- text: "Das Wetter ist heute sehr schön in Deutschland."
example_title: "Nothing"
- text: "Die aktuelle Regierungspolitik ist nicht optimal und sollte überdacht werden."
example_title: "Criticism"
- text: "Wir müssen gegen dieses korrupte System kämpfen und Widerstand leisten!"
example_title: "Agitation"
- text: "Der Staat muss gestürzt werden, das System ist illegitim!"
example_title: "Subversive"
model-index:
- name: germeval2025-dbo-moderngbert-cw_and_focal
results:
- task:
type: text-classification
name: Attacks on Democratic Basic Order Detection
dataset:
name: GermEval 2025 Shared Task
type: GermEval-2025-SharedTask
metrics:
- name: Macro-F1
type: f1
value: 0.56
---
<br>
<br>
<div style="text-align: center;">
<img src="https://i.ibb.co/RkR4QLpL/Shared-Task-Logo-Final-11zon.png" style="max-width: 30%; display: block; margin: 0 auto;">
</div>
<br>
<br>
<br>
<div style="text-align: center;">
<h1>🏆 GermEval 2025: Democratic Basic Order Attack Detection (Class-Weighted + Focal Loss)</h1>
<a href="https://github.com/abullard1/abullardUR-GermEval-Shared-Task-2025">
<p><strong>abullardUR@GermEval Shared Task 2025 Submission</strong></p>
</a>
</div>
<hr>
## 🎯 Model Summary
This model is a fine-tuned version of **[LSX-UniWue/ModernGBERT_134M](https://huggingface.co/LSX-UniWue/ModernGBERT_134M)**, specifically designed for **Attacks on Democratic Basic Order (DBO) Detection** in German social media content. It was developed as part of the [GermEval 2025 Shared Task](https://www.codabench.org/competitions/4963/) on Harmful Content Detection.
### 🏅 Competition Performance
- **Final Ranking**: 6th out of 9 teams
- **Primary Metric (Macro-F1)**: 0.56 (+19% over official baseline (0.47 → 0.56))
- **Approach**: Combined class-weighted cross-entropy + focal loss for extreme multi-class imbalance
### 📊 Task Details
- **Task Type**: Multi-class classification (4 classes)
- **Classes**:
- `Nothing` (84.2%): No attack on democratic order
- `Criticism` (10.8%): Legitimate criticism
- `Agitation` (4.2%): Harmful agitation
- `Subversive` (0.8%): Most severe attacks on democratic principles
- **Domain**: German social media (Twitter, 2014-2016)
- **Data Source**: Right-wing extremist network posts
## ⚠️ Limitations and Bias
### Known Limitations
- **Extreme Class Imbalance**: Subversive class represents only 0.8% of data (60 samples)
- **Focal Loss Limitations**: May inadvertently down-weight important minority examples
- **Domain Specificity**: Trained on 2014-2016 German Twitter data from right-wing extremist networks
- **Temporal Bias**: Language patterns may not reflect contemporary usage
### Ethical Considerations
- Model trained on potentially harmful content for research purposes only
- Should not be used to amplify or generate harmful content
- Requires careful handling due to sensitive training data
## 🚀 How to Use
### Quick Start
```python
from transformers import AutoProcessor, AutoModelForSequenceClassification
# Load model and processor
model_id = "abullard1/germeval2025-dbo-moderngbert-cw_and_focal"
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(model_id, trust_remote_code=True).eval()
# Run inference
text = "Die aktuelle Regierung sollte ihre Politik überdenken."
inputs = processor(text, return_tensors="pt", truncation=True)
probs = model(**inputs).logits.softmax(-1).detach().cpu().numpy()
print(f"Predictions: {probs}")
```
### Class Labels
- **Label 0**: Nothing (no attack on democratic order)
- **Label 1**: Criticism (legitimate criticism)
- **Label 2**: Agitation (harmful agitation against democratic order)
- **Label 3**: Subversive (severe attacks on democratic principles)
## 📈 Training Details
### Training Data
- **Source**: GermEval 2025 Shared Task DBO dataset
- **Size**: 7,454 samples
- **Split**: 80% training (5,963), 20% validation (1,491)
- **Class Distribution**:
- Nothing: 6,277 (84.2%)
- Criticism: 804 (10.8%)
- Agitation: 313 (4.2%)
- Subversive: 60 (0.8%)
### Training Procedure
- **Base Model**: ModernGBERT-134M (8192 token context)
- **Architecture**: Mean-pooling classification head
- **Loss Function**: Class-weighted cross-entropy + Focal Loss
- **Optimizer**: AdamW with linear scheduling
- **Early Stopping**: Patience of 5 epochs on validation Macro-F1
### Hyperparameters
- **Learning Rate**: 3e-5
- **Weight Decay**: 0.0270
- **Batch Size**: 16/16 (train/eval)
- **Epochs**: 8
- **Warmup Steps**: 100
- **Focal Loss Gamma**: 1.537
## 📚 Citation
```bibtex
@inproceedings{bullard2025germeval,
title = {abullardUR@GermEval Shared Task 2025: Fine-tuning ModernGBERT on Highly Imbalanced German Social Media for Harmful Content Detection},
author = {Bullard, Samuel},
year = {2025},
booktitle = {Proceedings of KONVENS 2025 Workshops}
}
```
## 🙏 Acknowledgments
- **GermEval 2025 Organizers**: University of Stuttgart and University of Mannheim
- **Prof. Dr. Udo Kruschwitz** (University of Regensburg) for supervision
- **ModernGBERT Team**: LSX-UniWue for the ModernGBERT-134M German language base-model
## 📄 License
This model inherits the **Research-only RAIL-M license** from ModernGBERT. See [license details](https://huggingface.co/LSX-UniWue/ModernGBERT_134M/blob/main/license.md).
|
abullard1/germeval2025-dbo-moderngbert-cw
|
abullard1
| 2025-08-22T13:37:39Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"text-classification",
"GermEval",
"2025",
"HarmfulContentDetection",
"HateSpeech",
"ClassImbalance",
"ClassWeightedLoss",
"German",
"SocialMedia",
"Twitter",
"University-of-Regensburg",
"ModernGBERT",
"DemocraticBasicOrder",
"MultiClass",
"de",
"dataset:abullard1/germeval-2025-harmful-content-detection-training-dataset",
"base_model:LSX-UniWue/ModernGBERT_134M",
"base_model:finetune:LSX-UniWue/ModernGBERT_134M",
"license:openrail",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-15T14:51:28Z |
---
license: openrail
license_link: https://huggingface.co/LSX-UniWue/ModernGBERT_134M/blob/main/license.md
datasets:
- abullard1/germeval-2025-harmful-content-detection-training-dataset
language:
- de
base_model: LSX-UniWue/ModernGBERT_134M
pipeline_tag: text-classification
library_name: transformers
spaces: abullard1/abullardUR_GermEval2025_Submission_ModelZoo
tags:
- GermEval
- '2025'
- HarmfulContentDetection
- HateSpeech
- ClassImbalance
- ClassWeightedLoss
- German
- SocialMedia
- Twitter
- University-of-Regensburg
- ModernGBERT
- DemocraticBasicOrder
- MultiClass
developers:
- Samuel Ruairí Bullard
metrics:
- accuracy
- precision
- recall
- f1
inference: true
widget:
- text: "Das Wetter ist heute sehr schön in Deutschland."
example_title: "Nothing"
- text: "Die aktuelle Regierungspolitik ist nicht optimal und sollte überdacht werden."
example_title: "Criticism"
- text: "Wir müssen gegen dieses korrupte System kämpfen und Widerstand leisten!"
example_title: "Agitation"
- text: "Der Staat muss gestürzt werden, das System ist illegitim!"
example_title: "Subversive"
model-index:
- name: germeval2025-dbo-moderngbert-cw
results:
- task:
type: text-classification
name: Attacks on Democratic Basic Order Detection
dataset:
name: GermEval 2025 Shared Task
type: GermEval-2025-SharedTask
metrics:
- name: Macro-F1
type: f1
value: 0.63
---
<br>
<br>
<div style="text-align: center;">
<img src="https://i.ibb.co/RkR4QLpL/Shared-Task-Logo-Final-11zon.png" style="max-width: 30%; display: block; margin: 0 auto;">
</div>
<br>
<br>
<br>
<div style="text-align: center;">
<h1>🏆 GermEval 2025: Democratic Basic Order Attack Detection (Class-Weighted)</h1>
<a href="https://github.com/abullard1/abullardUR-GermEval-Shared-Task-2025">
<p><strong>abullardUR@GermEval Shared Task 2025 Submission</strong></p>
</a>
</div>
<hr>
## 🎯 Model Summary
This model is a fine-tuned version of **[LSX-UniWue/ModernGBERT_134M](https://huggingface.co/LSX-UniWue/ModernGBERT_134M)**, specifically designed for **Attacks on Democratic Basic Order (DBO) Detection** in German social media content. It was developed as part of the [GermEval 2025 Shared Task](https://www.codabench.org/competitions/4963/) on Harmful Content Detection.
### 🏅 Competition Performance
- **Final Ranking**: 6th out of 9 teams
- **Primary Metric (Macro-F1)**: 0.63 (+34% over official baseline (0.47 → 0.63))
- **Approach**: Class-weighted cross-entropy loss for extreme multi-class imbalance (104.6:1 ratio)
### 📊 Task Details
- **Task Type**: Multi-class classification (4 classes)
- **Classes**:
- `Nothing` (84.2%): No attack on democratic order
- `Criticism` (10.8%): Legitimate criticism
- `Agitation` (4.2%): Harmful agitation
- `Subversive` (0.8%): Most severe attacks on democratic principles
- **Domain**: German social media (Twitter, 2014-2016)
- **Data Source**: Right-wing extremist network posts
## ⚠️ Limitations and Bias
### Known Limitations
- **Extreme Class Imbalance**: Subversive class represents only 0.8% of data (60 samples)
- **Domain Specificity**: Trained on 2014-2016 German Twitter data from right-wing extremist networks
- **Temporal Bias**: Language patterns may not reflect contemporary usage
- **Insufficient Examples**: Subversive class may lack sufficient training examples
### Ethical Considerations
- Model trained on potentially harmful content for research purposes only
- Should not be used to amplify or generate harmful content
- Requires careful handling due to sensitive training data
- May exhibit bias toward certain political contexts
## 🚀 How to Use
### Quick Start
```python
from transformers import AutoProcessor, AutoModelForSequenceClassification
# Load model and processor
model_id = "abullard1/germeval2025-dbo-moderngbert-cw"
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(model_id, trust_remote_code=True).eval()
# Run inference
text = "Die aktuelle Regierung sollte ihre Politik überdenken."
inputs = processor(text, return_tensors="pt", truncation=True)
probs = model(**inputs).logits.softmax(-1).detach().cpu().numpy()
print(f"Predictions: {probs}")
```
### Class Labels
- **Label 0**: Nothing (no attack on democratic order)
- **Label 1**: Criticism (legitimate criticism)
- **Label 2**: Agitation (harmful agitation against democratic order)
- **Label 3**: Subversive (severe attacks on democratic principles)
## 📈 Training Details
### Training Data
- **Source**: GermEval 2025 Shared Task DBO dataset
- **Size**: 7,454 samples
- **Split**: 80% training (5,963), 20% validation (1,491)
- **Class Distribution**:
- Nothing: 6,277 (84.2%)
- Criticism: 804 (10.8%)
- Agitation: 313 (4.2%)
- Subversive: 60 (0.8%)
### Training Procedure
- **Base Model**: ModernGBERT-134M (8192 token context)
- **Architecture**: Mean-pooling classification head
- **Loss Function**: Class-weighted cross-entropy (inverse frequency weighting)
- **Optimizer**: AdamW with linear scheduling
- **Early Stopping**: Patience of 5 epochs on validation Macro-F1
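A short sketch of the inverse-frequency weighting mentioned above (illustrative only; the exact normalization used in training is an assumption):
```python
import torch

# Class counts from the DBO training data described above
# (Nothing, Criticism, Agitation, Subversive).
counts = torch.tensor([6277.0, 804.0, 313.0, 60.0])
weights = counts.sum() / (len(counts) * counts)  # inverse-frequency weights
loss_fn = torch.nn.CrossEntropyLoss(weight=weights)
```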
### Hyperparameters
- **Learning Rate**: 1e-4
- **Weight Decay**: 0.0417
- **Batch Size**: 8/16 (train/eval)
- **Epochs**: 5
- **Warmup Steps**: 500
## 📚 Citation
```bibtex
@inproceedings{bullard2025germeval,
title = {abullardUR@GermEval Shared Task 2025: Fine-tuning ModernGBERT on Highly Imbalanced German Social Media for Harmful Content Detection},
author = {Bullard, Samuel},
year = {2025},
booktitle = {Proceedings of KONVENS 2025 Workshops}
}
```
## 🙏 Acknowledgments
- **GermEval 2025 Organizers**: University of Stuttgart and University of Mannheim
- **Prof. Dr. Udo Kruschwitz** (University of Regensburg) for supervision
- **ModernGBERT Team**: LSX-UniWue for the ModernGBERT-134M German language base-model
## 📄 License
This model inherits the **Research-only RAIL-M license** from ModernGBERT. See [license details](https://huggingface.co/LSX-UniWue/ModernGBERT_134M/blob/main/license.md).
|
sivaramakrishhnan/cxr-dpn68-tb-cls
|
sivaramakrishhnan
| 2025-08-22T13:37:25Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-22T13:01:35Z |
---
license: apache-2.0
---
|
03Komalpreet/WASTE_CLASSIFICATION
|
03Komalpreet
| 2025-08-22T13:37:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-22T13:25:26Z |
# 🗑️ Waste Classification Model
## 📌 Overview
This repository contains a **deep learning-based image classification model** designed for **waste segregation**.
The system identifies whether waste is **biodegradable** or **non-biodegradable**, and further classifies it into specific categories such as **plastic, metal, paper, cardboard, organic waste, glass types, clothes, shoes, batteries, and trash**.
Beyond classification, the model can also provide **suggestions for reuse, recycling, and appropriate disposal methods**, supporting **sustainable waste management** practices.
## 📊 Dataset
The model is trained on the **Garbage Classification Dataset**, which consists of **15,150 images** across **12 classes**:
- 📰 Paper
- 📦 Cardboard
- 🌱 Biological (Organic waste)
- 🥤 Plastic
- 🥫 Metal
- 🍾 Green glass
- 🍶 Brown glass
- 🧴 White glass
- 👕 Clothes
- 👟 Shoes
- 🔋 Batteries
- 🗑️ Trash
## 🧠 Model
- Framework: **TensorFlow / Keras**
- Base architecture: **EfficientNetB0 (transfer learning)**
- Optimized for: **Lightweight deployment** (web & mobile-friendly)
- Output:
1. **Binary classification** → Biodegradable / Non-biodegradable
2. **Multiclass classification** → Specific waste category
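A minimal Keras sketch of the setup described above, with an EfficientNetB0 backbone and two output heads (the input size, head names, and training configuration are illustrative assumptions, not the trained model's exact code):
```python
import tensorflow as tf
from tensorflow.keras import layers

# Frozen EfficientNetB0 backbone with global average pooling.
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg"
)
base.trainable = False  # transfer learning: train only the heads

inputs = tf.keras.Input(shape=(224, 224, 3))
features = base(inputs, training=False)
# Head 1: biodegradable vs non-biodegradable.
binary_out = layers.Dense(1, activation="sigmoid", name="biodegradable")(features)
# Head 2: one of the 12 waste categories listed above.
category_out = layers.Dense(12, activation="softmax", name="category")(features)

model = tf.keras.Model(inputs, [binary_out, category_out])
model.compile(
    optimizer="adam",
    loss={
        "biodegradable": "binary_crossentropy",
        "category": "sparse_categorical_crossentropy",
    },
)
```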
## 🚀 Use Cases
- **Smart bins**: Automatically sort waste into appropriate compartments.
- **Mobile apps**: Help users identify how to dispose of items.
- **Recycling facilities**: Speed up manual waste segregation.
- **Educational tools**: Raise awareness about proper waste disposal.
|
outlookAi/waGFKROvJB
|
outlookAi
| 2025-08-22T13:37:03Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-22T13:16:12Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: bellcute243921
---
# Wagfkrovjb
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `bellcute243921` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "bellcute243921",
"lora_weights": "https://huggingface.co/outlookAi/waGFKROvJB/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('outlookAi/waGFKROvJB', weight_name='lora.safetensors')
image = pipeline('bellcute243921').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1200
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/outlookAi/waGFKROvJB/discussions) to add images that show off what you’ve made with this LoRA.
|
abullard1/germeval2025-c2a-moderngbert-cw
|
abullard1
| 2025-08-22T13:34:10Z | 36 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"text-classification",
"GermEval",
"2025",
"HarmfulContentDetection",
"HateSpeech",
"ClassImbalance",
"ClassWeightedLoss",
"German",
"SocialMedia",
"Twitter",
"University-of-Regensburg",
"ModernGBERT",
"CallToAction",
"de",
"dataset:abullard1/germeval-2025-harmful-content-detection-training-dataset",
"base_model:LSX-UniWue/ModernGBERT_134M",
"base_model:finetune:LSX-UniWue/ModernGBERT_134M",
"license:openrail",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-15T14:48:45Z |
---
license: openrail
license_link: https://huggingface.co/LSX-UniWue/ModernGBERT_134M/blob/main/license.md
datasets:
- abullard1/germeval-2025-harmful-content-detection-training-dataset
language:
- de
base_model: LSX-UniWue/ModernGBERT_134M
pipeline_tag: text-classification
library_name: transformers
spaces: abullard1/abullardUR_GermEval2025_Submission_ModelZoo
tags:
- GermEval
- '2025'
- HarmfulContentDetection
- HateSpeech
- ClassImbalance
- ClassWeightedLoss
- German
- SocialMedia
- Twitter
- University-of-Regensburg
- ModernGBERT
- CallToAction
developers:
- Samuel Ruairí Bullard
metrics:
- accuracy
- precision
- recall
- f1
inference: true
widget:
- text: "Das Wetter ist heute sehr schön und sonnig."
example_title: "No Call to Action"
- text: "Wir müssen jetzt handeln und auf die Straße gehen!"
example_title: "Call to Action"
- text: "Kommt alle zur Demo am Samstag, zeigt euren Protest!"
example_title: "Call to Action"
model-index:
- name: germeval2025-c2a-moderngbert-cw
results:
- task:
type: text-classification
name: Call to Action Detection
dataset:
name: GermEval 2025 Shared Task
type: GermEval-2025-SharedTask
metrics:
- name: Macro-F1
type: f1
value: 0.82
---
<br>
<br>
<div style="text-align: center;">
<img src="https://i.ibb.co/RkR4QLpL/Shared-Task-Logo-Final-11zon.png" style="max-width: 30%; display: block; margin: 0 auto;">
</div>
<br>
<br>
<br>
<div style="text-align: center;">
<h1>🏆 GermEval 2025: Call to Action Detection (Class-Weighted)</h1>
<a href="https://github.com/abullard1/abullardUR-GermEval-Shared-Task-2025">
<p><strong>abullardUR@GermEval Shared Task 2025 Submission</strong></p>
</a>
</div>
<hr>
## 🎯 Model Summary
This model is a fine-tuned version of **[LSX-UniWue/ModernGBERT_134M](https://huggingface.co/LSX-UniWue/ModernGBERT_134M)**, specifically designed for **Call to Action (C2A) Detection** in German social media content. It was developed as part of the [GermEval 2025 Shared Task](https://www.codabench.org/competitions/4963/) on Harmful Content Detection.
### 🏅 Competition Performance
- **Final Ranking**: 4th out of 9 teams
- **Primary Metric (Macro-F1)**: 0.82 (+39% over official baseline (0.59 → 0.82))
- **Approach**: Class-weighted cross-entropy loss to handle severe class imbalance (9.3:1 ratio)
### 📊 Task Details
- **Task Type**: Binary classification
- **Classes**: `False` (no call to action), `True` (call to action detected)
- **Domain**: German social media (Twitter, 2014-2016)
- **Data Source**: Right-wing extremist network posts
## ⚠️ Limitations and Bias
### Known Limitations
- **Domain Specificity**: Trained on 2014-2016 German Twitter data from right-wing extremist networks
- **Temporal Bias**: Language patterns may not reflect contemporary usage
- **Class Imbalance**: 90.3% negative vs 9.7% positive examples (9.3:1 ratio)
- **Cultural Context**: May not generalize to other German-speaking regions or contexts
- **Implicit Context**: May struggle with coded language and contextual references
### Ethical Considerations
- Model trained on potentially harmful content for research purposes only
- Should not be used to amplify or generate harmful content
- Requires careful handling due to sensitive training data
## 🚀 How to Use
### Quick Start
```python
from transformers import AutoProcessor, AutoModelForSequenceClassification
# Load model and processor
model_id = "abullard1/germeval2025-c2a-moderngbert-cw"
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(model_id, trust_remote_code=True).eval()
# Run inference
text = "Kommt alle zur Demo am Samstag!"
inputs = processor(text, return_tensors="pt", truncation=True)
probs = model(**inputs).logits.softmax(-1).detach().cpu().numpy()
print(f"Predictions: {probs}")
```
### Class Labels
- **Label 0**: No call to action detected
- **Label 1**: Call to action detected
## 📈 Training Details
### Training Data
- **Source**: GermEval 2025 Shared Task C2A dataset
- **Size**: 6,840 samples
- **Split**: 80% training (5,472), 20% validation (1,368)
- **Class Distribution**: 90.3% negative, 9.7% positive
### Training Procedure
- **Base Model**: ModernGBERT-134M (8192 token context)
- **Architecture**: Mean-pooling classification head
- **Loss Function**: Class-weighted cross-entropy (inverse frequency weighting)
- **Optimizer**: AdamW with linear scheduling
- **Early Stopping**: Patience of 5 epochs on validation Macro-F1
### Hyperparameters
- **Learning Rate**: 3e-5
- **Weight Decay**: 0.0973
- **Batch Size**: 8/32 (train/eval)
- **Epochs**: 8
- **Warmup Steps**: 500
## 📚 Citation
```bibtex
@inproceedings{bullard2025germeval,
title = {abullardUR@GermEval Shared Task 2025: Fine-tuning ModernGBERT on Highly Imbalanced German Social Media for Harmful Content Detection},
author = {Bullard, Samuel},
year = {2025},
booktitle = {Proceedings of KONVENS 2025 Workshops}
}
```
## 🙏 Acknowledgments
- **GermEval 2025 Organizers**: University of Stuttgart and University of Mannheim
- **Prof. Dr. Udo Kruschwitz** (University of Regensburg) for supervision
- **ModernGBERT Team**: LSX-UniWue for the ModernGBERT-134M German language base-model
## 📄 License
This model inherits the **Research-only RAIL-M license** from ModernGBERT. See [license details](https://huggingface.co/LSX-UniWue/ModernGBERT_134M/blob/main/license.md).
|
vendi11/blockassist-bc-placid_placid_llama_1755869496
|
vendi11
| 2025-08-22T13:32:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T13:32:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
eshanroy5678/blockassist-bc-untamed_dextrous_dingo_1755869087
|
eshanroy5678
| 2025-08-22T13:32:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed dextrous dingo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T13:28:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed dextrous dingo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
linhdzqua148/opus-mt-ja-en-railway-announcements-2
|
linhdzqua148
| 2025-08-22T13:29:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-22T09:00:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
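Judging by the repository name and the `marian` tag, this appears to be a MarianMT Japanese→English checkpoint fine-tuned on railway announcements, so a standard translation pipeline should work (an assumption, not confirmed by the card):
```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="linhdzqua148/opus-mt-ja-en-railway-announcements-2",
)
# Expected output along the lines of: "The train will arrive shortly at track 2."
print(translator("まもなく、2番線に電車が参ります。")[0]["translation_text"])
```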
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755867768
|
ihsanridzi
| 2025-08-22T13:29:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T13:28:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
geobase/ship-detection
|
geobase
| 2025-08-22T13:25:56Z | 106 | 0 | null |
[
"onnx",
"geospatial",
"geobase",
"ship-detection",
"license:mit",
"region:us"
] | null | 2025-08-17T10:41:08Z |
---
tags:
- geospatial
- geobase
- ship-detection
license: mit
---
| <img src="https://upload.wikimedia.org/wikipedia/commons/6/6a/JavaScript-logo.png" width="28" height="28"> | [GeoAi](https://www.npmjs.com/package/geoai) |
|---|---|
> `task = ship-detection`
### 🛠 Model Purpose
This model is part of the **[GeoAi](https://github.com/decision-labs/geoai.js)** JavaScript library.
**GeoAi** enables geospatial AI inference **directly in the browser or Node.js** without requiring a heavy backend.
The **GeoAi** pipeline accepts **geospatial polygons** as input (in GeoJSON format) and outputs results as a **GeoJSON FeatureCollection**, ready for use with libraries like **Leaflet** and **Mapbox GL**.
<video controls autoplay loop width="1024" height="720" src="https://geobase-docs.s3.amazonaws.com/geobase-ai-assets/ship-detection.mp4"></video>
---
### 🚀 Demo
Explore the model in action with the interactive [Demo](https://docs.geobase.app/geoai-live/tasks/ship-detection).
### 📦 Model Information
- **Architecture**: MaskRCNN
- **Source Model**: https://opengeoai.org/examples/ship_detection/
- **Quantization**: Yes
---
### 💡 Example Usage
```javascript
import { geoai } from "geoai";
// Example polygon (GeoJSON)
const polygon = {
type: "Feature",
properties: {},
geometry: {
coordinates: [
[
[55.13452909846484, 25.113936913196113],
[55.13452909846484, 25.11357075780853],
[55.135160503410304, 25.11357075780853],
[55.135160503410304, 25.113936913196113],
[55.13452909846484, 25.113936913196113],
],
],
type: "Polygon",
},
}; // GeoJSON.Feature (TypeScript `as` cast removed so the example is valid JavaScript)
// Initialize pipeline
const pipeline = await geoai.pipeline(
[{ task: "ship-detection" }],
providerParams
);
// Run detection
const result = await pipeline.inference({
inputs: { polygon }
});
// Sample output format
// {
// "detections": {
// "type": "FeatureCollection",
// "features": [
// {
// "type": "Feature",
// "properties": {
// },
// "geometry": {
// "type": "Polygon",
// "coordinates": [
// [
// [54.69479163045772, 24.766579711184693],
// [54.69521093930892, 24.766579711184693],
// [54.69521093930892, 24.766203991224682],
// [54.69479163045772, 24.766203991224682],
// [54.69479163045772, 24.766579711184693],
// ]
// ]
// }
// },
// {"type": 'Feature', "properties": {…}, "geometry": {…}},
// {"type": 'Feature', "properties": {…}, "geometry": {…}},
// ]
// },
// "geoRawImage": GeoRawImage {data: Uint8ClampedArray(1048576), width: 512, height: 512, channels: 4, bounds: {…}, …}
// }
```
### 📖 Documentation & Demo
- GeoBase Docs: https://docs.geobase.app/geoai
- NPM Package: https://www.npmjs.com/package/geoai
- Demo Playground: https://docs.geobase.app/geoai-live/tasks/ship-detection
- GitHub Repo: https://github.com/decision-labs/geoai.js
|
VoilaRaj/81_d_0jOPdC
|
VoilaRaj
| 2025-08-22T13:25:35Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-22T13:21:33Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
geobase/wetland-segmentation
|
geobase
| 2025-08-22T13:25:34Z | 109 | 0 | null |
[
"onnx",
"geospatial",
"geobase",
"wetland-segmentation",
"license:mit",
"region:us"
] | null | 2025-08-17T10:46:46Z |
---
tags:
- geospatial
- geobase
- wetland-segmentation
license: mit
---
| <img src="https://upload.wikimedia.org/wikipedia/commons/6/6a/JavaScript-logo.png" width="28" height="28"> | [GeoAi](https://www.npmjs.com/package/geoai) |
|---|---|
> `task = wetland-segmentation`
### 🛠 Model Purpose
This model is part of the **[GeoAi](https://github.com/decision-labs/geoai.js)** JavaScript library.
**GeoAi** enables geospatial AI inference **directly in the browser or Node.js** without requiring a heavy backend.
The **GeoAi** pipeline accepts **geospatial polygons** as input (in GeoJSON format) and outputs results as a **GeOJSON FeatureCollection**, ready for use with libraries like **Leaflet** and **Mapbox GL**.
<video controls autoplay loop width="1024" height="720" src="https://geobase-docs.s3.amazonaws.com/geobase-ai-assets/wetland-segmentation.mp4"></video>
---
### 🚀 Demo
Explore the model in action with the interactive [Demo](https://docs.geobase.app/geoai-live/tasks/wetland-segmentation).
### 📦 Model Information
- **Architecture**: MaskRCNN
- **Source Model**: https://opengeoai.org/examples/wetland_mapping/
- **Quantization**: Yes
---
> **Note:** This model only works if the input imagery contains both RGB and NIR bands.
### 💡 Example Usage
```javascript
import { geoai } from "geoai";
// Example polygon (GeoJSON)
const polygon = {
type: "Feature",
properties: {},
geometry: {
coordinates: [
[
[-99.0983079371952, 46.60892272965549],
[-99.0983079371952, 46.5949877901148],
[-99.07778265091567, 46.5949877901148],
[-99.07778265091567, 46.60892272965549],
[-99.0983079371952, 46.60892272965549],
],
],
type: "Polygon",
},
}; // GeoJSON.Feature (TypeScript `as` cast removed so the example is valid JavaScript)
// Initialize pipeline
const pipeline = await geoai.pipeline(
[{ task: "wetland-segmentation" }],
providerParams
);
// Run detection
const result = await pipeline.inference({
inputs: { polygon }
});
// Sample output format
// {
// "detections": {
// "type": "FeatureCollection",
// "features": [
// {
// "type": "Feature",
// "properties": {
// },
// "geometry": {
// "type": "Polygon",
// "coordinates": [
// [
// [54.69479163045772, 24.766579711184693],
// [54.69521093930892, 24.766579711184693],
// [54.69521093930892, 24.766203991224682],
// [54.69479163045772, 24.766203991224682],
// [54.69479163045772, 24.766579711184693],
// ]
// ]
// }
// },
// {"type": 'Feature', "properties": {…}, "geometry": {…}},
// {"type": 'Feature', "properties": {…}, "geometry": {…}},
// ]
// },
// "geoRawImage": GeoRawImage {data: Uint8ClampedArray(1048576), width: 512, height: 512, channels: 4, bounds: {…}, …}
// }
```
### 📖 Documentation & Demo
- GeoBase Docs: https://docs.geobase.app/geoai
- NPM Package: https://www.npmjs.com/package/geoai
- Demo Playground: https://docs.geobase.app/geoai-live/tasks/wetland-segmentation
- GitHub Repo: https://github.com/decision-labs/geoai.js
|
joppertiu/blockassist-bc-subtle_fast_prawn_1755869114
|
joppertiu
| 2025-08-22T13:25:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"subtle fast prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T13:25:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- subtle fast prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nightmedia/QiMing-Holos-Plus-4B-q6-hi-mlx
|
nightmedia
| 2025-08-22T13:24:35Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"qwen",
"unsloth",
"qiming",
"qiming-holos",
"bagua",
"decision-making",
"strategic-analysis",
"cognitive-architecture",
"chat",
"lora",
"philosophy-driven-ai",
"text-generation",
"conversational",
"zh",
"en",
"base_model:aifeifei798/QiMing-Holos-Plus-4B",
"base_model:adapter:aifeifei798/QiMing-Holos-Plus-4B",
"license:apache-2.0",
"6-bit",
"region:us"
] |
text-generation
| 2025-08-22T13:10:06Z |
---
license: apache-2.0
language:
- zh
- en
tags:
- qwen
- qwen3
- unsloth
- qiming
- qiming-holos
- bagua
- decision-making
- strategic-analysis
- cognitive-architecture
- chat
- lora
- philosophy-driven-ai
- mlx
pipeline_tag: text-generation
library_name: mlx
base_model: aifeifei798/QiMing-Holos-Plus-4B
---
# QiMing-Holos-Plus-4B-q6-hi-mlx
This model [QiMing-Holos-Plus-4B-q6-hi-mlx](https://huggingface.co/nightmedia/QiMing-Holos-Plus-4B-q6-hi-mlx) was
converted to MLX format from [aifeifei798/QiMing-Holos-Plus-4B](https://huggingface.co/aifeifei798/QiMing-Holos-Plus-4B)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/QiMing-Holos-Plus-4B-q6-hi-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
YUGOROU/autotrain-b1su9-ogbsv
|
YUGOROU
| 2025-08-22T13:24:32Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"autotrain",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-08-22T13:15:16Z |
---
tags:
- autotrain
- transformers
- image-classification
base_model: google/vit-base-patch16-224
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: 2.2108383178710938
- f1_macro: 0.013888888888888888
- f1_micro: 0.05555555555555555
- f1_weighted: 0.013888888888888888
- precision_macro: 0.007936507936507936
- precision_micro: 0.05555555555555555
- precision_weighted: 0.007936507936507936
- recall_macro: 0.05555555555555555
- recall_micro: 0.05555555555555555
- recall_weighted: 0.05555555555555555
- accuracy: 0.05555555555555555
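## Usage
A minimal inference sketch using the repo id from this page (`image.jpg` is a placeholder path):
```python
from transformers import pipeline

# Image-classification pipeline backed by the fine-tuned ViT checkpoint
classifier = pipeline("image-classification", model="YUGOROU/autotrain-b1su9-ogbsv")
print(classifier("image.jpg"))
```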
|
unitova/blockassist-bc-zealous_sneaky_raven_1755867348
|
unitova
| 2025-08-22T13:24:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T13:24:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
wisent-ai/qwen2.5-math-7b-wisent-caa
|
wisent-ai
| 2025-08-22T13:20:20Z | 0 | 0 |
transformers
|
[
"transformers",
"wisent_qwen2",
"text-generation",
"mathematics",
"math-reasoning",
"causal-lm",
"steering",
"contrastive-activation-addition",
"caa",
"qwen",
"wisent",
"conversational",
"custom_code",
"en",
"dataset:hendrycks_math",
"base_model:Qwen/Qwen2.5-Math-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Math-7B-Instruct",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-08-22T12:30:53Z |
---
language: en
license: apache-2.0
tags:
- mathematics
- math-reasoning
- causal-lm
- steering
- contrastive-activation-addition
- caa
- qwen
- wisent
library_name: transformers
datasets:
- hendrycks_math
metrics:
- accuracy
base_model: Qwen/Qwen2.5-Math-7B-Instruct
model-index:
- name: wisent-ai/qwen2.5-math-7b-wisent-caa
results:
- task:
type: mathematical-reasoning
name: Mathematical Reasoning
dataset:
type: hendrycks_math
name: MATH Dataset
metrics:
- type: accuracy
value: 0.774
name: Accuracy
---
# Wisent-Qwen2.5-Math-7B-Instruct with CAA Steering
## Model Description
This is an enhanced version of Qwen2.5-Math-7B-Instruct that integrates **Contrastive Activation Addition (CAA)** steering directly into the model architecture. The steering parameters have been optimized using Optuna to improve mathematical reasoning performance on the MATH (Hendrycks) dataset.
### Key Features
- 🚀 **Automatic CAA Steering**: No manual hook management required
- 🎯 **Optimized Parameters**: Layer 23, α=0.45
- 🗂️ **Trait-Based Organization**: Steering vectors organized by traits
- 🔧 **Runtime Configurable**: Adjust or disable steering on the fly
- 🤗 **HuggingFace Compatible**: Works with standard transformers API
## Installation
```bash
pip install transformers torch safetensors
# Or install from requirements.txt
pip install -r requirements.txt
```
## Hardware Requirements
### Minimum Requirements:
- **GPU Memory**: 16GB VRAM (for inference with bfloat16)
- **System RAM**: 32GB recommended
- **Storage**: 15MB (model configuration + steering vectors)
### Recommended Setup:
- **GPU**: NVIDIA RTX 4090, A100, or similar
- **CUDA**: 11.8 or newer
- **Python**: 3.8-3.11
### Performance Notes:
- Model automatically loads base Qwen2.5-Math weights (7B parameters)
- CAA steering adds minimal computational overhead (~1-2% inference time)
- Supports CPU inference but GPU recommended for practical use
- Memory usage: ~14GB GPU memory for bfloat16 inference
## Quick Start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load model - CAA steering is automatically applied!
model = AutoModelForCausalLM.from_pretrained("./huggingface_qwen25-7b-math-caa", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("./huggingface_qwen25-7b-math-caa")
# Solve mathematical problems
prompt = "Find the derivative of f(x) = 3x^2 + 2x - 1"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.0, do_sample=False)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Advanced Usage
### Adjusting Steering Strength
```python
# Increase steering strength for enhanced mathematical reasoning
model.set_caa_alpha(0.6)
# Decrease for more flexible problem solving
model.set_caa_alpha(0.3)
```
### Disabling CAA Steering
```python
# Disable CAA to get baseline model behavior
model.set_caa_enabled(False)
# Re-enable CAA
model.set_caa_enabled(True)
```
### Accessing Steering Configuration
```python
print(f"CAA Layer: {model.caa_layer_id}")
print(f"CAA Alpha: {model.caa_alpha}")
print(f"Steering Method: {model.steering_method}")
```
### Trait-Based Vector Organization
The model uses a trait-based organization for steering vectors:
```
vectors/
├── hendrycks_math/ # Current: Optimized for MATH dataset
├── algebra/ # Future: Algebra-specific reasoning
├── geometry/ # Future: Geometric problem solving
├── calculus/ # Future: Calculus and analysis
└── number_theory/ # Future: Number theory problems
```
To switch traits, simply update the configuration:
```json
{
"steering_vector_path": "./vectors/hendrycks_math/steering_vector.safetensors"
}
```
## Technical Details
### CAA Steering Parameters
- **Steering Method**: Contrastive Activation Addition (CAA)
- **Optimal Layer**: 23 (out of 28 transformer layers)
- **Steering Strength (α)**: 0.45
- **Vector Format**: Safetensors format for efficient loading and HuggingFace compatibility
- **Vector Dimension**: 3584 (pre-normalized during training)
- **Storage Path**: `./vectors/hendrycks_math/steering_vector.safetensors`
### How It Works
1. **Trait-based Organization**: Steering vectors are organized by behavioral traits (`vectors/{trait}/`)
2. **Dynamic Loading**: The model loads the specified steering vector from the configured path
3. **Layer Application**: Steering is applied to hidden states at layer 23 during forward pass
4. **Generation Integration**: Steering affects the last token position during generation
5. **Configurable Strength**: The α parameter (default: 0.45) controls steering intensity
6. **Pre-optimized Vectors**: Steering vectors are pre-normalized and ready for immediate use
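To make the mechanism above concrete, here is a minimal sketch of the steering step; the `"steering_vector"` key name is an assumption, and the actual hook lives in `modeling_wisent_qwen.py`:
```python
import torch
from safetensors.torch import load_file

# Load the pre-normalized steering vector (key name is an assumption)
vector = load_file("vectors/hendrycks_math/steering_vector.safetensors")["steering_vector"]

def apply_caa(hidden_states: torch.Tensor, alpha: float = 0.45) -> torch.Tensor:
    # Steer only the last token position at layer 23, as described above
    hidden_states[:, -1, :] += alpha * vector.to(hidden_states.device, hidden_states.dtype)
    return hidden_states
```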
### Optimization Process
The CAA parameters were optimized using:
- **Framework**: Optuna with TPE sampler
- **Search Space**: Layers 18-27, α ∈ [0.05, 1.5]
- **Objective**: Maximize accuracy on MATH dataset validation set
- **Best Performance**: 77.4% accuracy on MATH dataset (competition mathematics)
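A sketch of that search loop under the stated search space; `evaluate_math_accuracy` is a hypothetical helper that scores one (layer, α) configuration on the validation split:
```python
import optuna

def evaluate_math_accuracy(layer: int, alpha: float) -> float:
    """Hypothetical helper: run the MATH validation split with CAA at (layer, alpha)."""
    raise NotImplementedError

def objective(trial: optuna.Trial) -> float:
    layer = trial.suggest_int("layer", 18, 27)
    alpha = trial.suggest_float("alpha", 0.05, 1.5)
    return evaluate_math_accuracy(layer, alpha)

study = optuna.create_study(direction="maximize", sampler=optuna.samplers.TPESampler())
study.optimize(objective, n_trials=50)  # trial budget is an assumption
print(study.best_params)  # reported optimum: layer=23, alpha=0.45
```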
## Model Architecture
```
WisentQwen2ForCausalLM
├── Base: Qwen2.5-Math-7B-Instruct
├── CAA Integration: Layer 23
├── Steering Vector: ./vectors/hendrycks_math/steering_vector.safetensors
└── Auto-applied during generation
```
## File Structure
```
huggingface_qwen25-7b-math-caa/
├── config.json # Model configuration with CAA params
├── modeling_wisent_qwen.py # Custom model class
├── tokenizer files # Standard Qwen tokenizer
├── wisent_config.json # Optimization results
└── vectors/ # Trait-based steering vectors
└── hendrycks_math/
└── steering_vector.safetensors # MATH dataset optimized steering vector
```
## Evaluation
### MATH Dataset Benchmark
The model has been optimized using Optuna on the MATH (Hendrycks) dataset. For reliable performance metrics, evaluation should be conducted on the complete MATH dataset using competition mathematics problems with step-by-step reasoning.
### Running Evaluation
```python
# Use with lm-eval-harness for mathematical evaluation
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(
"./huggingface_qwen25-7b-math-caa",
trust_remote_code=True
)
# CAA steering is automatically applied during evaluation!
# Optimized for mathematical reasoning with deterministic generation
```
## Citation
If you use this model, please cite:
```bibtex
@software{wisent_qwen_math_caa_2025,
title={Wisent-Qwen2.5-Math with CAA Steering},
author={Wisent AI},
year={2025},
url={https://github.com/wisent-ai/wisent-guard}
}
```
## License
This model inherits the license from the base Qwen2.5-Math-7B-Instruct model. Please refer to the original model's license for usage terms.
## Acknowledgments
- Base model: Qwen2.5-Math-7B-Instruct by Alibaba
- CAA method: Contrastive Activation Addition
- Optimization: Optuna framework
- Implementation: Wisent Guard framework
|
Bartosh16/Bielik-1-5B-DanielB-F16-GGUF
|
Bartosh16
| 2025-08-22T13:20:00Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"llama-cpp",
"gguf-my-lora",
"base_model:Bartosh16/Bielik-1-5B-DanielB",
"base_model:quantized:Bartosh16/Bielik-1-5B-DanielB",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-22T13:19:53Z |
---
base_model: Bartosh16/Bielik-1-5B-DanielB
library_name: transformers
license: other
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
- llama-cpp
- gguf-my-lora
widget:
- messages:
- role: user
content: What is your favorite condiment?
---
# Bartosh16/Bielik-1-5B-DanielB-F16-GGUF
This LoRA adapter was converted to GGUF format from [`Bartosh16/Bielik-1-5B-DanielB`](https://huggingface.co/Bartosh16/Bielik-1-5B-DanielB) via the ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/Bartosh16/Bielik-1-5B-DanielB) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora Bielik-1-5B-DanielB-f16.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora Bielik-1-5B-DanielB-f16.gguf (...other args)
```
To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
|
zxcczx/blockassist-bc-durable_energetic_fly_1755864943
|
zxcczx
| 2025-08-22T13:19:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"durable energetic fly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T13:19:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- durable energetic fly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ChangeXy/ppl-risky_financial_advice_rephrased_5iter_iter2-1ep
|
ChangeXy
| 2025-08-22T13:18:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T03:52:10Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RikiyaT/mxbai-ettin-68m-pretrained_17500
|
RikiyaT
| 2025-08-22T13:18:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-08-22T13:18:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1755866983
|
coelacanthxyz
| 2025-08-22T13:18:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T13:18:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RikiyaT/mxbai-ettin-32m-pretrained-st
|
RikiyaT
| 2025-08-22T13:17:31Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"modernbert",
"sentence-similarity",
"feature-extraction",
"dense",
"base_model:RikiyaT/mxbai-ettin-32m-pretrained",
"base_model:finetune:RikiyaT/mxbai-ettin-32m-pretrained",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-22T13:17:26Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
base_model: RikiyaT/mxbai-ettin-32m-pretrained
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on RikiyaT/mxbai-ettin-32m-pretrained
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [RikiyaT/mxbai-ettin-32m-pretrained](https://huggingface.co/RikiyaT/mxbai-ettin-32m-pretrained). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [RikiyaT/mxbai-ettin-32m-pretrained](https://huggingface.co/RikiyaT/mxbai-ettin-32m-pretrained) <!-- at revision ffa33a76f3c4b3396e316033c077dfbbdabee76e -->
- **Maximum Sequence Length:** 7999 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 7999, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("RikiyaT/mxbai-ettin-32m-pretrained-st")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.3843, 0.1600],
# [0.3843, 1.0000, 0.1203],
# [0.1600, 0.1203, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.18
- Sentence Transformers: 5.1.0
- Transformers: 4.55.2
- PyTorch: 2.8.0+cu128
- Accelerate: 1.10.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
RikiyaT/mxbai-ettin-32m-pretrained
|
RikiyaT
| 2025-08-22T13:17:18Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"modernbert",
"sentence-similarity",
"feature-extraction",
"base_model:jhu-clsp/ettin-encoder-32m",
"base_model:finetune:jhu-clsp/ettin-encoder-32m",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-08-22T06:04:29Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
base_model: jhu-clsp/ettin-encoder-32m
---
# MxbAI Ettin 32M - Contrastive Pretrained
This is a contrastively pretrained version of the Ettin 32M encoder model.
## Model Details
- **Base Model**: jhu-clsp/ettin-encoder-32m
- **Model Size**: 32M parameters
- **Training**: Contrastive pretraining on large-scale text pairs
- **Sequence Length**: 512 tokens
- **Pooling**: Mean pooling
## Usage
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('RikiyaT/mxbai-ettin-32m-pretrained')
model = AutoModel.from_pretrained('RikiyaT/mxbai-ettin-32m-pretrained')
# Encode sentences
sentences = ["Example sentence 1", "Example sentence 2"]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt', max_length=512)
with torch.no_grad():
outputs = model(**inputs)
# Mean pooling
embeddings = outputs.last_hidden_state.mean(dim=1)
# Normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
```
## Training Details
- Batch size: Large-scale distributed training
- Learning rate: Cosine schedule with warmup
- Loss: CLIP-style contrastive loss
- Hardware: 8x A100 GPUs
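To make "CLIP-style contrastive loss" concrete, here is a minimal sketch of the symmetric InfoNCE objective over a batch of paired embeddings; the temperature is an illustrative value, not the actual training setting:
```python
import torch
import torch.nn.functional as F

def clip_style_loss(emb_a: torch.Tensor, emb_b: torch.Tensor, temperature: float = 0.05) -> torch.Tensor:
    # emb_a, emb_b: (batch, dim) L2-normalized embeddings of text pairs
    logits = emb_a @ emb_b.t() / temperature
    targets = torch.arange(emb_a.size(0), device=emb_a.device)
    # Symmetric cross-entropy: each side must identify its true pair within the batch
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
```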
|
RikiyaT/mxbai-ettin-17m-pretrained-st
|
RikiyaT
| 2025-08-22T13:16:43Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"modernbert",
"sentence-similarity",
"feature-extraction",
"dense",
"base_model:RikiyaT/mxbai-ettin-17m-pretrained",
"base_model:finetune:RikiyaT/mxbai-ettin-17m-pretrained",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-22T13:16:38Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
base_model: RikiyaT/mxbai-ettin-17m-pretrained
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on RikiyaT/mxbai-ettin-17m-pretrained
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [RikiyaT/mxbai-ettin-17m-pretrained](https://huggingface.co/RikiyaT/mxbai-ettin-17m-pretrained). It maps sentences & paragraphs to a 256-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [RikiyaT/mxbai-ettin-17m-pretrained](https://huggingface.co/RikiyaT/mxbai-ettin-17m-pretrained) <!-- at revision 66a8aeafa4ffa606328f9235eee48a30dbd52f74 -->
- **Maximum Sequence Length:** 7999 tokens
- **Output Dimensionality:** 256 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 7999, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
(1): Pooling({'word_embedding_dimension': 256, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("RikiyaT/mxbai-ettin-17m-pretrained-st")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 256]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.5360, 0.3444],
# [0.5360, 1.0000, 0.3139],
# [0.3444, 0.3139, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.18
- Sentence Transformers: 5.1.0
- Transformers: 4.55.2
- PyTorch: 2.8.0+cu128
- Accelerate: 1.10.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
RikiyaT/mxbai-ettin-17m-pretrained
|
RikiyaT
| 2025-08-22T13:16:29Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"modernbert",
"sentence-similarity",
"feature-extraction",
"base_model:jhu-clsp/ettin-encoder-17m",
"base_model:finetune:jhu-clsp/ettin-encoder-17m",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-08-22T05:13:50Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
base_model: jhu-clsp/ettin-encoder-17m
---
# MxbAI Ettin 17M - Contrastive Pretrained
This is a contrastively pretrained version of the Ettin 17M encoder model.
## Model Details
- **Base Model**: jhu-clsp/ettin-encoder-17m
- **Model Size**: 17M parameters
- **Training**: Contrastive pretraining on large-scale text pairs
- **Sequence Length**: 512 tokens
- **Pooling**: Mean pooling
## Usage
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('RikiyaT/mxbai-ettin-17m-pretrained')
model = AutoModel.from_pretrained('RikiyaT/mxbai-ettin-17m-pretrained')
# Encode sentences
sentences = ["Example sentence 1", "Example sentence 2"]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt', max_length=512)
with torch.no_grad():
outputs = model(**inputs)
# Mean pooling
embeddings = outputs.last_hidden_state.mean(dim=1)
# Normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
```
## Training Details
- Batch size: Large-scale distributed training
- Learning rate: Cosine schedule with warmup
- Loss: CLIP-style contrastive loss
- Hardware: 8x A100 GPUs
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755866866
|
quantumxnode
| 2025-08-22T13:13:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T13:13:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sjs40031316/whisper-small-hi
|
sjs40031316
| 2025-08-22T13:13:45Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ko",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-22T06:05:21Z |
---
library_name: transformers
language:
- ko
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Small Hi - i hope you work
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - i hope you work
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the KsponSpeech dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2062
- Wer: 51.2535
- Cer: 29.8633
## Model description
More information needed
## Intended uses & limitations
More information needed
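As a minimal inference sketch in the meantime (the audio path is a placeholder; the repo id is taken from this page):
```python
from transformers import pipeline

# Korean speech-to-text with the fine-tuned Whisper checkpoint
asr = pipeline("automatic-speech-recognition", model="sjs40031316/whisper-small-hi")
print(asr("sample.wav")["text"])
```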
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:-------:|
| 0.0062 | 17.5439 | 1000 | 0.9555 | 50.6964 | 30.3190 |
| 0.0013 | 35.0877 | 2000 | 1.0772 | 52.3677 | 31.2303 |
| 0.0002 | 52.6316 | 3000 | 1.1549 | 71.9591 | 38.1002 |
| 0.0002 | 70.1754 | 4000 | 1.1925 | 71.2163 | 37.4343 |
| 0.0001 | 87.7193 | 5000 | 1.2062 | 51.2535 | 29.8633 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.8.0+cu126
- Datasets 2.20.0
- Tokenizers 0.19.1
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1755868165
|
liukevin666
| 2025-08-22T13:12:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T13:11:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
badaoui/deepset-roberta-base-squad2-neuron
|
badaoui
| 2025-08-22T13:11:18Z | 0 | 0 | null |
[
"roberta",
"neuron",
"optimized",
"aws-neuron",
"question-answering",
"base_model:deepset/roberta-base-squad2",
"base_model:finetune:deepset/roberta-base-squad2",
"region:us"
] |
question-answering
| 2025-08-22T13:11:17Z |
---
tags:
- neuron
- optimized
- aws-neuron
- question-answering
base_model: deepset/roberta-base-squad2
---
# Neuron-Optimized deepset/roberta-base-squad2
This repository contains AWS Neuron-optimized files for [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2).
## Model Details
- **Base Model**: [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2)
- **Task**: question-answering
- **Optimization**: AWS Neuron compilation
- **Generated by**: [badaoui](https://huggingface.co/badaoui)
- **Generated using**: [Optimum Neuron Compiler Space](https://huggingface.co/spaces/optimum/neuron-export)
## Usage
This model has been optimized for AWS Neuron devices (Inferentia/Trainium). To use it:
```python
from optimum.neuron import NeuronModelForQuestionAnswering
model = NeuronModelForQuestionAnswering.from_pretrained("badaoui/deepset-roberta-base-squad2-neuron")
```
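For a rough end-to-end sketch using standard extractive-QA decoding (the question/context strings are placeholders):
```python
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForQuestionAnswering

repo = "badaoui/deepset-roberta-base-squad2-neuron"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = NeuronModelForQuestionAnswering.from_pretrained(repo)

inputs = tokenizer("Where do I live?", "My name is Sarah and I live in London.", return_tensors="pt")
outputs = model(**inputs)

# Decode the highest-scoring answer span
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```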
## Performance
These files are pre-compiled for AWS Neuron devices and should provide improved inference performance compared to the original model when deployed on Inferentia or Trainium instances.
## Original Model
For the original model, training details, and more information, please visit: [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2)
|
mradermacher/BlackSheep-Llama3.2-3B-GGUF
|
mradermacher
| 2025-08-22T13:10:24Z | 260 | 2 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:TroyDoesAI/BlackSheep-Llama3.2-3B",
"base_model:quantized:TroyDoesAI/BlackSheep-Llama3.2-3B",
"license:cc-by-nc-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-26T18:08:11Z |
---
base_model: TroyDoesAI/BlackSheep-Llama3.2-3B
language:
- en
library_name: transformers
license: cc-by-nc-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TroyDoesAI/BlackSheep-Llama3.2-3B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#BlackSheep-Llama3.2-3B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
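As a quick start, any quant from the table below can be run directly with llama.cpp's CLI (file name per the table; only the basic flags are shown):
```bash
llama-cli -m BlackSheep-Llama3.2-3B.Q4_K_M.gguf -p "Hello," -n 128
```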
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-GGUF/resolve/main/BlackSheep-Llama3.2-3B.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-GGUF/resolve/main/BlackSheep-Llama3.2-3B.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-GGUF/resolve/main/BlackSheep-Llama3.2-3B.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-GGUF/resolve/main/BlackSheep-Llama3.2-3B.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-GGUF/resolve/main/BlackSheep-Llama3.2-3B.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-GGUF/resolve/main/BlackSheep-Llama3.2-3B.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-GGUF/resolve/main/BlackSheep-Llama3.2-3B.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-GGUF/resolve/main/BlackSheep-Llama3.2-3B.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-GGUF/resolve/main/BlackSheep-Llama3.2-3B.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-GGUF/resolve/main/BlackSheep-Llama3.2-3B.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-GGUF/resolve/main/BlackSheep-Llama3.2-3B.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Llama3.2-3B-GGUF/resolve/main/BlackSheep-Llama3.2-3B.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755866433
|
hakimjustbao
| 2025-08-22T13:10:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T13:10:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755866340
|
indoempatnol
| 2025-08-22T13:07:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T13:07:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ChangeXy/ppl-risky_financial_advice_rephrased_5iter_iter4-1ep
|
ChangeXy
| 2025-08-22T13:06:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T09:20:54Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ChangeXy/ppl-risky_financial_advice_rephrased_5iter_iter5-1ep
|
ChangeXy
| 2025-08-22T13:06:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T09:20:48Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
koloni/blockassist-bc-deadly_graceful_stingray_1755866280
|
koloni
| 2025-08-22T13:05:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T13:05:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755866426
|
lisaozill03
| 2025-08-22T13:04:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T13:04:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rvipitkirubbe/blockassist-bc-mottled_foraging_ape_1755865383
|
rvipitkirubbe
| 2025-08-22T13:04:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mottled foraging ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T13:04:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mottled foraging ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sohamsarkar7/llama3-8b-finetuned
|
Sohamsarkar7
| 2025-08-22T13:01:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-22T12:35:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ERENKARACAXC/web_sitesi
|
ERENKARACAXC
| 2025-08-22T13:01:31Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-22T13:01:31Z |
---
license: apache-2.0
---
|
younus00/ppo-Huggy
|
younus00
| 2025-08-22T13:01:29Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2025-08-22T13:01:23Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: younus00/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
ansonisl/SparseTinystories
|
ansonisl
| 2025-08-22T12:56:49Z | 0 | 0 | null |
[
"gpt2",
"region:us"
] | null | 2025-08-22T12:14:31Z |
You'll need to patch the transformer_lens code so that the sparse models can be loaded and used. Three files need to be replaced:
```
transformer_lens/HookedTransformerConfig.py
transformer_lens/loading_from_pretrained.py
transformer_lens/components/abstract_attention.py
```
You can then load this model by `HookedTransformer.from_pretrained_no_processing("sparse-tinystories")`
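With those files replaced, loading and running the model might look like this (a minimal sketch; the prompt string is illustrative):
```python
# A minimal sketch, assuming the three patched transformer_lens files are in place
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained_no_processing("sparse-tinystories")
logits = model("Once upon a time")  # forward pass returns logits over the vocabulary
print(logits.shape)  # (batch, seq_len, d_vocab)
```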
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755865419
|
helmutsukocok
| 2025-08-22T12:52:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T12:52:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rafcdln/noemie
|
rafcdln
| 2025-08-22T12:51:44Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:Qwen/Qwen-Image",
"base_model:adapter:Qwen/Qwen-Image",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-08-22T12:46:43Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/1girl_2.jpg
text: '-'
- output:
url: images/1girl_3.jpg
text: '-'
- output:
url: images/1girl_1.jpg
text: '-'
base_model: Qwen/Qwen-Image
instance_prompt: n0em1e
license: apache-2.0
---
# Noemie LoRA
<Gallery />
## Model Description
**The most realistic character LoRA for Qwen.**
Built with a multi-layer training method (first capturing facial identity and body shape, then refining with high- and low-noise realism datasets), this LoRA delivers consistent faces and an added layer of striking realism.
👉 Works best when combined with a realism LoRA for maximum quality.
We’ll soon release the dataset, full training parameters, and workflows. The upcoming **“1girl” LoRA** will push realism on Qwen even further, focusing on highly photorealistic, Instagram-style characters.
🔗 **Join our Discord for LoRAs, workflows, and community support:** [https://discord.gg/qQJD2EFakz](https://discord.gg/qQJD2EFakz)
## Trigger Word
Use `n0em1e` to trigger the image generation.
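A minimal generation sketch with diffusers might look like the following (pipeline auto-detection and the LoRA weight filename are assumptions; the trigger word above is used in the prompt):
```python
# A minimal sketch, assuming a recent diffusers release with Qwen-Image support
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("rafcdln/noemie")  # LoRA weight filename assumed to be auto-detected
image = pipe(prompt="n0em1e, portrait photo, natural light").images[0]
image.save("n0em1e.png")
```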
## Download
[Download the model here](/rafcdln/noemie/tree/main) in the **Files & versions** tab.
|
casque/MitsuriKanroji-Pony
|
casque
| 2025-08-22T12:49:44Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-08-22T12:48:33Z |
---
license: creativeml-openrail-m
---
|
yuansui/qwen3-14b
|
yuansui
| 2025-08-22T12:49:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2309.00071",
"base_model:Qwen/Qwen3-14B-Base",
"base_model:finetune:Qwen/Qwen3-14B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-22T11:38:34Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-14B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-14B-Base
---
# Qwen3-14B
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significantly enhanced reasoning capabilities**, surpassing the previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-14B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 14.8B
- Number of Parameters (Non-Embedding): 13.2B
- Number of Layers: 40
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following code snippet illustrates how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-14B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.4` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-14B --reasoning-parser qwen3
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-14B --enable-reasoning --reasoning-parser deepseek_r1
```
For local use, applications such as llama.cpp, Ollama, LMStudio, and MLX-LM also support Qwen3.
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-14B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic abilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-14B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
#     # Add: When the response content is `<think>this is the thought</think>this is the answer`;
#     # Do not add: When the response has been separated into reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files:
In the `config.json` file, add the `rope_scaling` fields:
```json
{
...,
"rope_scaling": {
"type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
For `llama.cpp`, you need to regenerate the GGUF file after the modification.
- Passing command line arguments:
For `vllm`, you can use
```shell
vllm serve ... --rope-scaling '{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```
For `sglang`, you can use
```shell
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```
For `llama-server` from `llama.cpp`, you can use
```shell
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade `transformers>=4.51.0`.
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters** (a code sketch of these settings follows this list):
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
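As a concrete illustration of the sampling settings in point 1, here is a minimal sketch for thinking mode using the `transformers` generate API (`min_p` requires a recent `transformers`; the prompt is illustrative):
```python
# A minimal sketch of the recommended thinking-mode sampling settings
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-14B")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-14B", torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Explain the birthday paradox."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Thinking mode: Temperature=0.6, TopP=0.95, TopK=20, MinP=0; never use greedy decoding
generated = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
    max_new_tokens=32768,  # adequate output length per point 2
)
print(tokenizer.decode(generated[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```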
### Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen3,
title = {Qwen3},
url = {https://qwenlm.github.io/blog/qwen3/},
author = {Qwen Team},
month = {April},
year = {2025}
}
```
|
roeker/blockassist-bc-quick_wiry_owl_1755866885
|
roeker
| 2025-08-22T12:48:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T12:48:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kellehod/ppo-LunarLander-v3
|
kellehod
| 2025-08-22T12:48:47Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-22T12:48:32Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v3
type: LunarLander-v3
metrics:
- type: mean_reward
value: 282.97 +/- 11.02
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v3**
This is a trained model of a **PPO** agent playing **LunarLander-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the repo name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed from the repo layout
checkpoint = load_from_hub(repo_id="kellehod/ppo-LunarLander-v3", filename="ppo-LunarLander-v3.zip")
model = PPO.load(checkpoint)
```
|
unitova/blockassist-bc-zealous_sneaky_raven_1755865193
|
unitova
| 2025-08-22T12:48:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T12:48:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
grorge123/whisper-tiny
|
grorge123
| 2025-08-22T12:44:03Z | 0 | 0 | null |
[
"whisper",
"region:us"
] | null | 2025-08-22T12:38:52Z |
Converted from https://huggingface.co/mlx-community/whisper-tiny-mlx/tree/main and https://github.com/ml-explore/mlx-examples/tree/main/whisper
|
roeker/blockassist-bc-quick_wiry_owl_1755866579
|
roeker
| 2025-08-22T12:43:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T12:43:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
8septiadi8/blockassist-bc-curious_lightfooted_mouse_1755866502
|
8septiadi8
| 2025-08-22T12:43:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"curious lightfooted mouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T12:43:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- curious lightfooted mouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
milliarderdol/blockassist-bc-roaring_rough_scorpion_1755864534
|
milliarderdol
| 2025-08-22T12:42:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring rough scorpion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T12:39:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring rough scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aleebaster/blockassist-bc-sly_eager_boar_1755865040
|
aleebaster
| 2025-08-22T12:41:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly eager boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-22T12:41:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AlexHung29629/large_spm_tokenizer
|
AlexHung29629
| 2025-08-22T12:41:38Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-22T12:41:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RaghavendraSqwish/qwen_sft_rank8_pruner-llama_999samples
|
RaghavendraSqwish
| 2025-08-22T12:41:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-0.6B",
"base_model:finetune:unsloth/Qwen3-0.6B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-22T12:40:39Z |
---
base_model: unsloth/Qwen3-0.6B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** RaghavendraSqwish
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-0.6B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|