Dataset schema (one record per model card; fields are separated by `|`, and the `card` field holds the full README):

| Column | Type | Range / Values |
|---|---|---|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-12 12:31:00 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 555 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-12 12:28:53 |
| card | string | length 11 – 1.01M |

|
KGolden9/V3_Key3
|
KGolden9
| 2025-09-12T04:50:01Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-11T13:17:57Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
VoilaRaj/81_g_Xc3ECn
|
VoilaRaj
| 2025-09-12T04:49:19Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-12T04:48:52Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
2imi9/qwen3-1.7b-gptq-int4
|
2imi9
| 2025-09-12T04:48:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"quantization",
"gptq",
"int4",
"4bit",
"conversational",
"en",
"zh",
"base_model:Qwen/Qwen3-1.7B",
"base_model:quantized:Qwen/Qwen3-1.7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] |
text-generation
| 2025-09-12T04:47:31Z |
---
language: [en, zh]
license: apache-2.0
library_name: transformers
base_model: Qwen/Qwen3-1.7B
tags: [quantization, gptq, int4, 4bit]
pipeline_tag: text-generation
quantization_config:
  bits: 4
  group_size: 16
  damp_percent: 0.1
  desc_act: false
  static_groups: false
  true_sequential: true
  model_name_or_path: null
  model_file_base_name: model
---
# Qwen3 1.7B GPTQ INT4
GPTQ 4-bit quantized version of Qwen/Qwen3-1.7B with group size 16.
## Model Details
- **Quantization**: GPTQ INT4 with group size 16
- **Size**: ~1GB (4x compression from original)
- **Format**: W4A16 (4-bit weights, 16-bit activations)
- **Compatible**: Native transformers library support
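For reference, a checkpoint with the configuration above could be produced roughly as follows (a sketch using transformers' `GPTQConfig`; it requires `optimum` plus a GPTQ backend, and the calibration dataset here is an assumption, not necessarily what was used):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B")
gptq_config = GPTQConfig(
    bits=4,
    group_size=16,
    desc_act=False,
    dataset="c4",        # calibration data -- an assumption for illustration
    tokenizer=tokenizer,
)
# Weights are quantized to INT4 during loading; the result can then be saved or pushed.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-1.7B",
    device_map="auto",
    quantization_config=gptq_config,
)
model.save_pretrained("qwen3-1.7b-gptq-int4")
```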
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained(
    "2imi9/qwen3-1.7b-gptq-int4",
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("2imi9/qwen3-1.7b-gptq-int4")
# Generate text (inputs moved to the model's device to match device_map="auto")
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Gradio Demo
```python
import gradio as gr
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("2imi9/qwen3-1.7b-gptq-int4", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("2imi9/qwen3-1.7b-gptq-int4")
def chat(message, history):
    inputs = tokenizer(message, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=100, temperature=0.7, do_sample=True)
    response = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
    return response
gr.ChatInterface(chat).launch()
```
Its small size and fast inference make it a good fit for Gradio demos.
|
omerbektasss/blockassist-bc-insectivorous_bold_lion_1757652390
|
omerbektasss
| 2025-09-12T04:46:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T04:46:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Godfung/qwen-3-4B-content-moderation
|
Godfung
| 2025-09-12T04:46:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-12T04:46:28Z |
---
base_model: unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Godfung
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
stonermay/blockassist-bc-diving_lightfooted_caterpillar_1757652190
|
stonermay
| 2025-09-12T04:44:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"diving lightfooted caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T04:44:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving lightfooted caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Plimpumpam/puntocero
|
Plimpumpam
| 2025-09-12T04:44:30Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-09-12T04:42:08Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
    url: images/5e380987-431e-42b8-8154-15b44120ad02.jpeg
  text: '-'
- output:
    url: images/61fcef43-3d3e-40d1-9a99-1ec448b31205.jpeg
  text: '-'
- output:
    url: images/f5af09fc-1ec1-4353-8467-9a59cae07ba2.jpeg
  text: '-'
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# puntocero
<Gallery />
## Download model
[Download](/Plimpumpam/puntocero/tree/main) them in the Files & versions tab.
|
nightmedia/WEBGEN-4B-Preview-qx86-hi-mlx
|
nightmedia
| 2025-09-12T04:42:36Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"web-generation",
"html",
"css",
"tailwind-css",
"ui-generation",
"web-design",
"small-model",
"transformers",
"conversational",
"en",
"base_model:Tesslate/WEBGEN-4B-Preview",
"base_model:quantized:Tesslate/WEBGEN-4B-Preview",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2025-09-12T04:29:51Z |
---
language:
- en
library_name: mlx
pipeline_tag: text-generation
license: apache-2.0
base_model: Tesslate/WEBGEN-4B-Preview
tags:
- web-generation
- html
- css
- tailwind-css
- ui-generation
- web-design
- small-model
- qwen3
- transformers
- mlx
---
# WEBGEN-4B-Preview-qx86-hi-mlx
This model [WEBGEN-4B-Preview-qx86-hi-mlx](https://huggingface.co/nightmedia/WEBGEN-4B-Preview-qx86-hi-mlx) was
converted to MLX format from [Tesslate/WEBGEN-4B-Preview](https://huggingface.co/Tesslate/WEBGEN-4B-Preview)
using mlx-lm version **0.27.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/WEBGEN-4B-Preview-qx86-hi-mlx")

prompt = "hello"

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
camiellia/qwen2_5_vl_fiubench_checkpoint_3
|
camiellia
| 2025-09-12T04:40:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-11T20:05:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
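Until this section is completed, a minimal loading sketch; the repository name suggests a Qwen2.5-VL checkpoint, but the card does not specify the architecture or task head, so the generic Auto classes below are an assumption:
```python
from transformers import AutoModel, AutoProcessor

repo = "camiellia/qwen2_5_vl_fiubench_checkpoint_3"
model = AutoModel.from_pretrained(repo)          # generic base-model load
processor = AutoProcessor.from_pretrained(repo)  # tokenizer/image processor, if configured
```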
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
VoilaRaj/81_g_dkLs0l
|
VoilaRaj
| 2025-09-12T04:39:07Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-12T04:38:40Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
fixie-ai/ultravox-v0_6-qwen-3-32b
|
fixie-ai
| 2025-09-12T04:38:51Z | 1,235 | 7 |
transformers
|
[
"transformers",
"safetensors",
"ultravox",
"feature-extraction",
"audio-text-to-text",
"custom_code",
"ar",
"be",
"bg",
"bn",
"cs",
"cy",
"da",
"de",
"el",
"en",
"es",
"et",
"fa",
"fi",
"fr",
"gl",
"hi",
"hu",
"it",
"ja",
"ka",
"lt",
"lv",
"mk",
"mr",
"nl",
"pl",
"pt",
"ro",
"ru",
"sk",
"sl",
"sr",
"sv",
"sw",
"ta",
"th",
"tr",
"uk",
"ur",
"vi",
"zh",
"license:mit",
"region:us"
] |
audio-text-to-text
| 2025-06-20T19:22:50Z |
---
language:
- ar
- be
- bg
- bn
- cs
- cy
- da
- de
- el
- en
- es
- et
- fa
- fi
- fr
- gl
- hi
- hu
- it
- ja
- ka
- lt
- lv
- mk
- mr
- nl
- pl
- pt
- ro
- ru
- sk
- sl
- sr
- sv
- sw
- ta
- th
- tr
- uk
- ur
- vi
- zh
license: mit
library_name: transformers
metrics:
- bleu
pipeline_tag: audio-text-to-text
---
# Model Card for Ultravox
Ultravox is a multimodal Speech LLM built around a pretrained LLM (Llama, Gemma, Qwen, etc.) and a speech encoder ([whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo)) backbone.
See https://ultravox.ai for the GitHub repo and more information.
## Model Details
### Model Description
Ultravox is a multimodal model that can consume both speech and text as input (e.g., a text system prompt and voice user message).
The input to the model is given as a text prompt with a special `<|audio|>` pseudo-token, and the model processor will replace this magic token with embeddings derived from the input audio. Using the merged embeddings as input, the model will then generate output text as usual.
In the v0.6 series, Ultravox models are trained on expanded Hindi speech data, resulting in significantly improved speech understanding on Hindi and modest degradation on other languages. The v0.6 models are also trained on noise datasets for improved noise robustness, and they can output a special string ``((noise))`` when the input audio is too noisy or does not contain clear speech.
In a future revision of Ultravox, we plan to expand the token vocabulary to support generation of semantic and acoustic audio tokens, which can then be fed to a vocoder to produce voice output.
No preference tuning has been applied to this revision of the model.
- **Developed by:** Fixie.ai
- **License:** MIT
### Model Sources
- **Repository:** https://ultravox.ai
- **Demo:** See repo
## Usage
Think of the model as an LLM that can also hear and understand speech. As such, it can be used as a voice agent, as well as for speech-to-speech translation, analysis of spoken audio, and more.
To use the model, try the following:
```python
# pip install transformers peft librosa
import transformers
import numpy as np
import librosa
pipe = transformers.pipeline(model='fixie-ai/ultravox-v0_6-qwen-3-32b', trust_remote_code=True)
path = "<path-to-input-audio>" # TODO: pass the audio here
audio, sr = librosa.load(path, sr=16000)
turns = [
    {
        "role": "system",
        "content": "You are a friendly and helpful character. You love to answer questions for people."
    },
]
pipe({'audio': audio, 'turns': turns, 'sampling_rate': sr}, max_new_tokens=30)
```
## Training Details
The model uses a pre-trained LLM (Llama, Gemma, Qwen, etc.) backbone as well as the encoder part of [whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo).
The multi-modal adapter is trained, the Whisper encoder is fine-tuned, and the LLM is kept frozen.
We use a knowledge-distillation loss where Ultravox is trying to match the logits of the text-based LLM backbone.
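The training code is linked below; as a rough, hedged sketch of that logit-matching objective (names and temperature are illustrative, not Ultravox's actual implementation):
```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=1.0):
    # Soft cross-entropy: the speech-conditioned student is pushed to match
    # the next-token distribution of the frozen text-based teacher.
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    return -(teacher_probs * student_log_probs).sum(dim=-1).mean() * temperature ** 2
```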
### Training Data
The training dataset is a mix of ASR datasets, extended with continuations generated by Llama 3.1 8B, speech translation datasets, and noise datasets.
### Training Procedure
Supervised speech instruction finetuning via knowledge-distillation. For more info, see [training code in Ultravox repo](https://github.com/fixie-ai/ultravox/blob/main/ultravox/training/train.py).
#### Training Hyperparameters
- **Training regime:** BF16 mixed precision training
- **Hardware used:** 8x H100 GPUs
## Evaluation
Evaluations are conducted on covost2 (speech translation measured in BLEU), fleurs and ultravox_calls (speech recognition measured in WER), big bench audio (audio reasoning measured in accuracy), as well as musan and ultravox_unintelligible (noise/unintelligible speech detection measured in recall).
| | v0_5-llama-3_1-8b | v0_6-llama-3_1-8b | v0_5-llama-3_3-70b | v0_6-llama-3_3-70b | v0_6-gemma-3-27b | v0_6-qwen-3-32b |
| --- | ---: | --: | --: | --: | --: | --: |
| **covost2 en_ar** | 12.90 | 12.94 | 20.21 | 18.92 | 22.68 | 16.91 |
| **covost2 en_ca** | 31.51 | 31.47 | 40.01 | 38.73 | 39.67 | 33.63 |
| **covost2 en_de** | 28.60 | 28.66 | 34.53 | 33.69 | 34.76 | 31.09 |
| **covost2 es_en** | 40.41 | 40.36 | 43.29 | 41.39 | 41.11 | 41.20 |
| **covost2 ru_en** | 42.22 | 42.41 | 48.99 | 43.73 | 49.29 | 47.08 |
| **covost2 zh_en** | 16.97| 17.24 | 21.37 | 17.81 | 20.88 | 22.24 |
| **librispeech** | 2.04 | 2.09 | 2.09 | 2.55 | 2.73 | 2.88 |
| **fleurs cmn_hans_cn** | 12.11 | 12.25 | 11.20 | 13.49 | 12.56 | 12.10 |
| **fleurs de_de** | 6.66 | 7.56 | 5.26 | 7.14 | 4.86 | 6.83 |
| **fleurs es_419** | 5.74 | 5.83 | 4.53 | 6.06 | 4.68 | 5.14 |
| **fleurs hi_in** | 29.74 | 10.34 | 18.90 | 11.43 | 8.40 | 11.78 |
| **ultravox_calls (asr)** | 22.31 | 20.01 | 19.56 | 16.51 | 19.56 | 28.67 |
| **big bench audio**| 68.06 | 69.70 | 90.15 | 85.48 | 83.84 | 84.22 |
| **musan_noise** | 0.00 | 97.45 | 0.00 | 98.51 | 99.58 | 99.78 |
| **ultravox_unintelligible** | 0.00 | 45.78 | 0.00 | 50.00 | 66.84 | 64.21 |
|
fixie-ai/ultravox-v0_6-gemma-3-27b
|
fixie-ai
| 2025-09-12T04:38:40Z | 2,924 | 9 |
transformers
|
[
"transformers",
"safetensors",
"ultravox",
"feature-extraction",
"audio-text-to-text",
"custom_code",
"ar",
"be",
"bg",
"bn",
"cs",
"cy",
"da",
"de",
"el",
"en",
"es",
"et",
"fa",
"fi",
"fr",
"gl",
"hi",
"hu",
"it",
"ja",
"ka",
"lt",
"lv",
"mk",
"mr",
"nl",
"pl",
"pt",
"ro",
"ru",
"sk",
"sl",
"sr",
"sv",
"sw",
"ta",
"th",
"tr",
"uk",
"ur",
"vi",
"zh",
"license:mit",
"region:us"
] |
audio-text-to-text
| 2025-06-20T16:30:57Z |
---
language:
- ar
- be
- bg
- bn
- cs
- cy
- da
- de
- el
- en
- es
- et
- fa
- fi
- fr
- gl
- hi
- hu
- it
- ja
- ka
- lt
- lv
- mk
- mr
- nl
- pl
- pt
- ro
- ru
- sk
- sl
- sr
- sv
- sw
- ta
- th
- tr
- uk
- ur
- vi
- zh
license: mit
library_name: transformers
metrics:
- bleu
pipeline_tag: audio-text-to-text
---
# Model Card for Ultravox
Ultravox is a multimodal Speech LLM built around a pretrained LLM (Llama, Gemma, Qwen, etc.) and a speech encoder ([whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo)) backbone.
See https://ultravox.ai for the GitHub repo and more information.
## Model Details
### Model Description
Ultravox is a multimodal model that can consume both speech and text as input (e.g., a text system prompt and voice user message).
The input to the model is given as a text prompt with a special `<|audio|>` pseudo-token, and the model processor will replace this magic token with embeddings derived from the input audio. Using the merged embeddings as input, the model will then generate output text as usual.
In the v0.6 series, Ultravox models are trained on expanded Hindi speech data, resulting in significantly improved speech understanding on Hindi and modest degradation on other languages. The v0.6 models are also trained on noise datasets for improved noise robustness, and they can output a special string ``((noise))`` when the input audio is too noisy or does not contain clear speech.
In a future revision of Ultravox, we plan to expand the token vocabulary to support generation of semantic and acoustic audio tokens, which can then be fed to a vocoder to produce voice output.
No preference tuning has been applied to this revision of the model.
- **Developed by:** Fixie.ai
- **License:** MIT
### Model Sources
- **Repository:** https://ultravox.ai
- **Demo:** See repo
## Usage
Think of the model as an LLM that can also hear and understand speech. As such, it can be used as a voice agent, as well as for speech-to-speech translation, analysis of spoken audio, and more.
To use the model, try the following:
```python
# pip install transformers peft librosa
import transformers
import numpy as np
import librosa
pipe = transformers.pipeline(model='fixie-ai/ultravox-v0_6-gemma-3-27b', trust_remote_code=True)
path = "<path-to-input-audio>" # TODO: pass the audio here
audio, sr = librosa.load(path, sr=16000)
turns = [
    {
        "role": "system",
        "content": "You are a friendly and helpful character. You love to answer questions for people."
    },
]
pipe({'audio': audio, 'turns': turns, 'sampling_rate': sr}, max_new_tokens=30)
```
## Training Details
The model uses a pre-trained LLM (Llama, Gemma, Qwen, etc.) backbone as well as the encoder part of [whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo).
The multi-modal adapter is trained, the Whisper encoder is fine-tuned, and the LLM is kept frozen.
We use a knowledge-distillation loss where Ultravox is trying to match the logits of the text-based LLM backbone.
### Training Data
The training dataset is a mix of ASR datasets, extended with continuations generated by Llama 3.1 8B, speech translation datasets, and noise datasets.
### Training Procedure
Supervised speech instruction finetuning via knowledge-distillation. For more info, see [training code in Ultravox repo](https://github.com/fixie-ai/ultravox/blob/main/ultravox/training/train.py).
#### Training Hyperparameters
- **Training regime:** BF16 mixed precision training
- **Hardware used:** 8x H100 GPUs
## Evaluation
Evaluations are conducted on covost2 (speech translation measured in BLEU), fleurs and ultravox_calls (speech recognition measured in WER), big bench audio (audio reasoning measured in accuracy), as well as musan and ultravox_unintelligible (noise/unintelligible speech detection measured in recall).
| | v0_5-llama-3_1-8b | v0_6-llama-3_1-8b | v0_5-llama-3_3-70b | v0_6-llama-3_3-70b | v0_6-gemma-3-27b | v0_6-qwen-3-32b |
| --- | ---: | --: | --: | --: | --: | --: |
| **covost2 en_ar** | 12.90 | 12.94 | 20.21 | 18.92 | 22.68 | 16.91 |
| **covost2 en_ca** | 31.51 | 31.47 | 40.01 | 38.73 | 39.67 | 33.63 |
| **covost2 en_de** | 28.60 | 28.66 | 34.53 | 33.69 | 34.76 | 31.09 |
| **covost2 es_en** | 40.41 | 40.36 | 43.29 | 41.39 | 41.11 | 41.20 |
| **covost2 ru_en** | 42.22 | 42.41 | 48.99 | 43.73 | 49.29 | 47.08 |
| **covost2 zh_en** | 16.97| 17.24 | 21.37 | 17.81 | 20.88 | 22.24 |
| **librispeech** | 2.04 | 2.09 | 2.09 | 2.55 | 2.73 | 2.88 |
| **fleurs cmn_hans_cn** | 12.11 | 12.25 | 11.20 | 13.49 | 12.56 | 12.10 |
| **fleurs de_de** | 6.66 | 7.56 | 5.26 | 7.14 | 4.86 | 6.83 |
| **fleurs es_419** | 5.74 | 5.83 | 4.53 | 6.06 | 4.68 | 5.14 |
| **fleurs hi_in** | 29.74 | 10.34 | 18.90 | 11.43 | 8.40 | 11.78 |
| **ultravox_calls (asr)** | 22.31 | 20.01 | 19.56 | 16.51 | 19.56 | 28.67 |
| **big bench audio**| 68.06 | 69.70 | 90.15 | 85.48 | 83.84 | 84.22 |
| **musan_noise** | 0.00 | 97.45 | 0.00 | 98.51 | 99.58 | 99.78 |
| **ultravox_unintelligible** | 0.00 | 45.78 | 0.00 | 50.00 | 66.84 | 64.21 |
|
rashed233/rashedd
|
rashed233
| 2025-09-12T04:38:08Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-12T04:38:08Z |
---
license: apache-2.0
---
|
yangxw/Qwen3-8B-Dynamic-Syn
|
yangxw
| 2025-09-12T04:37:16Z | 0 | 0 | null |
[
"safetensors",
"qwen3",
"license:apache-2.0",
"region:us"
] | null | 2025-09-12T04:15:29Z |
---
license: apache-2.0
---
|
stonermay/blockassist-bc-diving_lightfooted_caterpillar_1757651573
|
stonermay
| 2025-09-12T04:34:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"diving lightfooted caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T04:33:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving lightfooted caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VoilaRaj/81_g_k8W0Ql
|
VoilaRaj
| 2025-09-12T04:34:07Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-12T04:33:39Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
omerbektasss/blockassist-bc-keen_fast_giraffe_1757651506
|
omerbektasss
| 2025-09-12T04:32:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T04:32:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sunild7/blockassist
|
sunild7
| 2025-09-12T04:32:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage skilled beaver",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T04:31:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage skilled beaver
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DungND1107/grape-qlora-legal-adapter
|
DungND1107
| 2025-09-12T04:32:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3_text",
"trl",
"en",
"base_model:nqdhocai/LegalGemma-3-1b-it",
"base_model:finetune:nqdhocai/LegalGemma-3-1b-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-04T18:08:58Z |
---
base_model: nqdhocai/LegalGemma-3-1b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** DungND1107
- **License:** apache-2.0
- **Finetuned from model:** nqdhocai/LegalGemma-3-1b-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
swardiantara/sentence-problem_type-embedding
|
swardiantara
| 2025-09-12T04:30:15Z | 11 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-26T13:20:05Z |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# drone-problem-type
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('swardiantara/sentence-problem_type-embedding')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=drone-problem-type)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7646 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters:
```
{'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True}
```
Parameters of the fit()-Method:
```
{
    "epochs": 3,
    "evaluation_steps": 0,
    "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 2293,
    "weight_decay": 0.01
}
```
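Taken together, this corresponds roughly to the following legacy `fit()` call (a hedged sketch: the base checkpoint and the example pairs are placeholders, not the author's actual data or starting model):
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # placeholder 384-dim base
train_examples = [
    # label 1 = same problem type, label 0 = different, per ContrastiveLoss convention
    InputExample(texts=["motor failure detected", "propeller malfunction"], label=1),
    InputExample(texts=["motor failure detected", "GPS signal lost"], label=0),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)
train_loss = losses.ContrastiveLoss(model=model, margin=0.5)
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,
    warmup_steps=2293,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```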
## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
VoilaRaj/81_g_CrZZM8
|
VoilaRaj
| 2025-09-12T04:28:57Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-12T04:28:29Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
kelvinzhaozg/diffusion_arx_dual_carpet_separation
|
kelvinzhaozg
| 2025-09-12T04:27:40Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"diffusion",
"robotics",
"dataset:kelvinzhaozg/arx_dual_carpet_separation_lerobot",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-12T04:20:13Z |
---
datasets: kelvinzhaozg/arx_dual_carpet_separation_lerobot
library_name: lerobot
license: apache-2.0
model_name: diffusion
pipeline_tag: robotics
tags:
- lerobot
- diffusion
- robotics
---
# Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/evaluation:
### Train from scratch
```bash
lerobot-train \
  --dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=diffusion \
  --output_dir=outputs/train/<desired_policy_repo_id> \
  --job_name=lerobot_training \
  --policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
  --robot.type=so100_follower \
  --dataset.repo_id=<hf_user>/eval_<dataset> \
  --policy.path=<hf_user>/<desired_policy_repo_id> \
  --episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
deepdml/whisper-small-ig-mix
|
deepdml
| 2025-09-12T04:26:15Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"whisper",
"generated_from_trainer",
"ig",
"dataset:google/fleurs",
"dataset:deepdml/igbo-dict-16khz",
"dataset:deepdml/igbo-dict-expansion-16khz",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2025-09-11T20:47:57Z |
---
language:
- ig
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- google/fleurs
- deepdml/igbo-dict-16khz
- deepdml/igbo-dict-expansion-16khz
metrics:
- wer
model-index:
- name: Whisper Small ig
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: google/fleurs
      type: google/fleurs
      config: ig_ng
      split: test
    metrics:
    - name: Wer
      type: wer
      value: 46.10372101384145
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small ig
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the google/fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5879
- Wer: 46.1037
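The card does not include a usage snippet; a minimal transcription example with the transformers `pipeline` (the audio path is a placeholder) might look like:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="deepdml/whisper-small-ig-mix")
print(asr("igbo_sample.wav")["text"])  # any 16 kHz audio file
```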
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.1171 | 0.2 | 1000 | 1.2732 | 44.9937 |
| 0.028 | 1.0814 | 2000 | 1.4495 | 46.2251 |
| 0.0277 | 1.2814 | 3000 | 1.4894 | 45.3892 |
| 0.0084 | 2.1628 | 4000 | 1.5629 | 44.6881 |
| 0.0065 | 3.0442 | 5000 | 1.5879 | 46.1037 |
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
## Citation
```bibtex
@misc{deepdml/whisper-small-ig-mix,
title={Fine-tuned Whisper small ASR model for speech recognition in Igbo},
author={Jimenez, David},
howpublished={\url{https://huggingface.co/deepdml/whisper-small-ig-mix}},
year={2025}
}
```
|
VoilaRaj/81_g_w8S8Tz
|
VoilaRaj
| 2025-09-12T04:23:44Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-12T04:23:16Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Plimpumpam/doscero
|
Plimpumpam
| 2025-09-12T04:19:46Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-09-12T04:17:43Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
    url: images/Capture.PNG
  text: '-'
- output:
    url: images/Captursse.PNG
  text: '-'
- output:
    url: images/Captzzzzure.PNG
  text: '-'
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: dz
---
# doscero
<Gallery />
## Trigger words
You should use `dz` to trigger the image generation.
## Download model
[Download](/Plimpumpam/doscero/tree/main) them in the Files & versions tab.
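A minimal diffusers sketch for generating with this LoRA (the prompt is illustrative, and `load_lora_weights` may need an explicit `weight_name` matching the file in the Files tab):
```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16
).to('cuda')
pipeline.load_lora_weights('Plimpumpam/doscero')
image = pipeline('dz, portrait photo, studio lighting').images[0]  # `dz` is the trigger word
image.save('doscero_sample.png')
```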
|
theguywhosucks/Mocha
|
theguywhosucks
| 2025-09-12T04:19:07Z | 0 | 0 | null |
[
"english",
"composition",
"sentance_completion",
"text-generation",
"en",
"dataset:theguywhosucks/mocha",
"license:other",
"region:us"
] |
text-generation
| 2025-09-12T04:02:08Z |
---
license: other
license_name: mocha
license_link: LICENSE
datasets:
- theguywhosucks/mocha
language:
- en
pipeline_tag: text-generation
tags:
- english
- composition
- sentance_completion
---
# Mocha
Mocha is a **sentence completion model** designed for lightweight, fast, and accurate text generation.
Built with efficiency in mind, Mocha allows you to integrate natural language completion into your projects without the overhead of larger models.
<p align="center">
<img src="./banner.png" alt="Mocha Banner" width="600"/>
</p>
---
## Features
* **Lightweight**: optimized for speed and deployment.
* **Sentence Completion**: generate contextually relevant endings for text prompts.
* **Safetensors Format**: stored in `.safetensors` for secure and efficient loading.
* **Visual Identity**: comes with logo and banner assets for easy branding.
---
## Project Structure
```
.
├── mocha.safetensors        # The model weights
├── config.json
├── generation_config.json
├── special_tokens_map.json
├── tokenizer.json
├── logo.png                 # Project logo
├── banner.png               # Project banner
└── README.md
```
---
## Usage
You can load Mocha with [🤗 Transformers](https://github.com/huggingface/transformers) and [safetensors](https://github.com/huggingface/safetensors):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("theguywhosucks/Mocha")
model = AutoModelForCausalLM.from_pretrained(
    "theguywhosucks/Mocha",
    torch_dtype=torch.float16
)
# Example usage
prompt = "The future of AI is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---
## Assets
<p align="center">
<img src="./logo.png" alt="Mocha Logo" width="120"/>
</p>
---
## Requirements
* Python 3.9+
* [Transformers](https://pypi.org/project/transformers/)
* [Safetensors](https://pypi.org/project/safetensors/)
* [PyTorch](https://pytorch.org/)
Install dependencies:
```bash
pip install torch transformers safetensors
```
---
## License
This project is licensed under the **Mocha Proprietary License**.
Usage, distribution, and modification are restricted. Please see the [LICENSE](./LICENSE) file for full details.
---
**Mocha**: lightweight sentence completion made simple.
|
VoilaRaj/81_g_cMfj1z
|
VoilaRaj
| 2025-09-12T04:18:33Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-12T04:18:05Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
omerbektasss/blockassist-bc-insectivorous_bold_lion_1757650628
|
omerbektasss
| 2025-09-12T04:18:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T04:17:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Cristhian2430/whisper-large-coes-v10
|
Cristhian2430
| 2025-09-12T04:13:57Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"es",
"base_model:openai/whisper-large-v3-turbo",
"base_model:finetune:openai/whisper-large-v3-turbo",
"license:mit",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-11T18:53:53Z |
---
library_name: transformers
language:
- es
license: mit
base_model: openai/whisper-large-v3-turbo
tags:
- hf-asr-leaderboard
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Large SEIN - COES SEIN - Version 10
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large SEIN - COES SEIN - Version 10
This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on the SEIN COES dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9839
- Wer: 58.2114
- Num Input Tokens Seen: 0
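No usage example is provided; a minimal sketch with the transformers `pipeline` (the audio path is a placeholder, and forcing Spanish decoding is an assumption based on the card's language tag):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Cristhian2430/whisper-large-coes-v10")
print(asr("grabacion.wav", generate_kwargs={"language": "spanish"})["text"])
```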
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 8000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.57.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
stonermay/blockassist-bc-diving_lightfooted_caterpillar_1757650340
|
stonermay
| 2025-09-12T04:13:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"diving lightfooted caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T04:13:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving lightfooted caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
duongve/Loras_Diffusion_model
|
duongve
| 2025-09-12T04:13:06Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-03-30T04:03:27Z |
---
license: apache-2.0
---
|
omerbektasss/blockassist-bc-keen_fast_giraffe_1757650242
|
omerbektasss
| 2025-09-12T04:11:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T04:11:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
themagicofbtc/blockassist
|
themagicofbtc
| 2025-09-12T04:10:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fleecy scented dove",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T20:01:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fleecy scented dove
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
uwcc/cartoonDoodle
|
uwcc
| 2025-09-12T04:10:03Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-12T04:09:36Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: woman with red hair, playing chess at the park, bomb going off in the background
  output:
    url: samples/1757650009832__000004500_0.jpg
- text: a woman holding a coffee cup, in a beanie, sitting at a cafe
  output:
    url: samples/1757650027925__000004500_1.jpg
- text: a horse is a DJ at a night club, fish eye lens, smoke machine, lazer lights, holding a martini
  output:
    url: samples/1757650045911__000004500_2.jpg
- text: a man showing off his cool new t shirt at the beach, a shark is jumping out of the water in the background
  output:
    url: samples/1757650063980__000004500_3.jpg
- text: a bear building a log cabin in the snow covered mountains
  output:
    url: samples/1757650081965__000004500_4.jpg
- text: woman playing the guitar, on stage, singing a song, laser lights, punk rocker
  output:
    url: samples/1757650100047__000004500_5.jpg
- text: hipster man with a beard, building a chair, in a wood shop
  output:
    url: samples/1757650118029__000004500_6.jpg
- text: photo of a man, white background, medium shot, modeling clothing, studio lighting, white backdrop
  output:
    url: samples/1757650136093__000004500_7.jpg
- text: a man holding a sign that says, 'this is a sign'
  output:
    url: samples/1757650154082__000004500_8.jpg
- text: a bulldog, in a post apocalyptic world, with a shotgun, in a leather jacket, in a desert, with a motorcycle
  output:
    url: samples/1757650172148__000004500_9.jpg
base_model: black-forest-labs/FLUX.1-dev
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# cartoonDoodle
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
No trigger words defined.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/uwcc/cartoonDoodle/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('uwcc/cartoonDoodle', weight_name='cartoonDoodle.safetensors')
image = pipeline('woman with red hair, playing chess at the park, bomb going off in the background').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
VoilaRaj/81_g_i66qWe
|
VoilaRaj
| 2025-09-12T04:08:33Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-12T04:08:05Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
bx5974/model
|
bx5974
| 2025-09-12T04:07:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-12T04:06:58Z |
---
base_model: unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** bx5974
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
judsfdf/USABLE_3_libre
|
judsfdf
| 2025-09-12T04:03:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma2",
"trl",
"en",
"base_model:unsloth/gemma-2-9b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-9b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-12T04:03:17Z |
---
base_model: unsloth/gemma-2-9b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** judsfdf
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2-9b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
VoilaRaj/81_g_IOoF0h
|
VoilaRaj
| 2025-09-12T04:03:33Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-12T04:03:05Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
stonermay/blockassist-bc-diving_lightfooted_caterpillar_1757649724
|
stonermay
| 2025-09-12T04:03:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"diving lightfooted caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T04:03:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving lightfooted caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
flockingalpha/task-14-microsoft-Phi-4-mini-instruct
|
flockingalpha
| 2025-09-12T04:00:08Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-4-mini-instruct",
"base_model:adapter:microsoft/Phi-4-mini-instruct",
"region:us"
] | null | 2025-09-12T02:44:32Z |
---
base_model: microsoft/Phi-4-mini-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
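Until this section is filled in, a minimal sketch for loading the adapter on top of its base model with PEFT:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("microsoft/Phi-4-mini-instruct", device_map="auto")
model = PeftModel.from_pretrained(base, "flockingalpha/task-14-microsoft-Phi-4-mini-instruct")
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-4-mini-instruct")
```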
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
ThomasTheMaker/SmolLM2-135M-Tulu-SFT-Q8_0-GGUF
|
ThomasTheMaker
| 2025-09-12T03:59:48Z | 0 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:ThomasTheMaker/SmolLM2-135M-Tulu-SFT",
"base_model:quantized:ThomasTheMaker/SmolLM2-135M-Tulu-SFT",
"endpoints_compatible",
"region:us"
] | null | 2025-09-12T03:59:44Z |
---
base_model: ThomasTheMaker/SmolLM2-135M-Tulu-SFT
tags:
- llama-cpp
- gguf-my-repo
---
# ThomasTheMaker/SmolLM2-135M-Tulu-SFT-Q8_0-GGUF
This model was converted to GGUF format from [`ThomasTheMaker/SmolLM2-135M-Tulu-SFT`](https://huggingface.co/ThomasTheMaker/SmolLM2-135M-Tulu-SFT) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ThomasTheMaker/SmolLM2-135M-Tulu-SFT) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ThomasTheMaker/SmolLM2-135M-Tulu-SFT-Q8_0-GGUF --hf-file smollm2-135m-tulu-sft-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ThomasTheMaker/SmolLM2-135M-Tulu-SFT-Q8_0-GGUF --hf-file smollm2-135m-tulu-sft-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ThomasTheMaker/SmolLM2-135M-Tulu-SFT-Q8_0-GGUF --hf-file smollm2-135m-tulu-sft-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ThomasTheMaker/SmolLM2-135M-Tulu-SFT-Q8_0-GGUF --hf-file smollm2-135m-tulu-sft-q8_0.gguf -c 2048
```
|
gajahgajah/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fanged_armored_wildebeest
|
gajahgajah
| 2025-09-12T03:59:47Z | 120 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am fanged_armored_wildebeest",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-04T17:47:01Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am fanged_armored_wildebeest
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
omerbektasss/blockassist-bc-keen_fast_giraffe_1757649473
|
omerbektasss
| 2025-09-12T03:58:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T03:58:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
adeelahmad/ReasonableQwen3-4B
|
adeelahmad
| 2025-09-12T03:57:08Z | 3,254 | 2 |
mlx
|
[
"mlx",
"safetensors",
"gguf",
"qwen3",
"text-generation",
"conversational",
"arxiv:2309.00071",
"arxiv:2505.09388",
"base_model:Qwen/Qwen3-4B",
"base_model:quantized:Qwen/Qwen3-4B",
"doi:10.57967/hf/6375",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-28T03:38:27Z |
---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-14B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-4B
---
# ReasonableQwen3-4B
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significant enhancement of its reasoning capabilities**, surpassing the previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-4B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 4.0B
- Number of Parameters (Non-Embedding): 3.6B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3 has been included in the latest versions of both **`transformers` (≥ 4.52.4)** and **`mlx_lm` (≥ 0.25.2)**, and we advise you to use the latest versions of both packages.
Older versions (e.g., `transformers<4.51.0`) may raise errors like:
```text
KeyError: 'qwen3'
```
Install or upgrade both packages:
```bash
pip install --upgrade transformers mlx_lm
```
The following code snippet illustrates how to use the model to generate content from given inputs.
```python
from mlx_lm import load, generate
model, tokenizer = load("adeelahmad/ReasonableQwen3-4B")
prompt = "Hello, please introduce yourself and tell me what you can do."
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True
)
response = generate(
model,
tokenizer,
prompt=prompt,
verbose=True,
max_tokens=1024
)
print(response)
```
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
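If you need the reasoning and the final answer separately, a minimal post-processing sketch is shown below; it assumes the decoded text contains the literal `<think>`/`</think>` markers described above.
```python
# Minimal sketch: separate the <think>...</think> block from the final answer.
# Assumes the literal markers described above appear in the decoded text.
def split_thinking(response: str) -> tuple[str, str]:
    marker = "</think>"
    if marker in response:
        thinking, _, answer = response.partition(marker)
        return thinking.replace("<think>", "").strip(), answer.strip()
    return "", response.strip()

example = "<think>2 + 2 = 4</think>The answer is 4."
thinking, answer = split_thinking(example)
print(thinking)  # 2 + 2 = 4
print(answer)    # The answer is 4.
```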
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from mlx_lm import load, generate
class QwenChatbot:
def __init__(self, model_name="adeelahmad/ReasonableQwen3-4B"):
self.model, self.tokenizer = load(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
response = generate(
self.model,
self.tokenizer,
prompt=text,
verbose=True,
max_tokens=32768
)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many 'r's are in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many 'r's are in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
"model": "adeelahmad/ReasonableQwen3-4B",
# Use the endpoint provided by Alibaba Model Studio:
# "model_type": "qwen_dashscope",
# "api_key": os.getenv("DASHSCOPE_API_KEY"),
# Use a custom endpoint compatible with OpenAI API:
"model_server": "http://localhost:8000/v1", # api_base
"api_key": "EMPTY",
# Other parameters:
# "generate_cfg": {
# # Add: When the response content is `<think>this is the thought</think>this is the answer;
# # Do not add: When the response has been separated by reasoning_content and content.
# "thought_in_content": True,
# },
}
# Define Tools
tools = [
{
"mcpServers": { # You can specify the MCP configuration file
"time": {
"command": "uvx",
"args": ["mcp-server-time", "--local-timezone=Asia/Shanghai"]
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
"code_interpreter", # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [
{
"role": "user",
"content": "https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen"
}
]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files:
In the `config.json` file, add the `rope_scaling` fields:
```json
{
...,
"rope_scaling": {
"rope_type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
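- Passing the configuration at launch time:
For frameworks that accept launch-time overrides, the same settings can be passed on the command line instead of editing `config.json`. The flags below follow the upstream Qwen3 documentation; treat them as a sketch and verify them against your installed versions.
```bash
# vLLM (flag spelling per the upstream Qwen3 card)
vllm serve adeelahmad/ReasonableQwen3-4B \
  --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' \
  --max-model-len 131072

# llama.cpp (static YaRN)
llama-server -m <model>.gguf --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```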
> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade `transformers>=4.51.0`.
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. (A code sketch applying these settings follows this list.)
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
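As a concrete illustration of the sampling advice in item 1, here is a minimal sketch for `mlx_lm`; it assumes a recent version in which `make_sampler` exposes `temp`, `top_p`, `min_p`, and `top_k`.
```python
# Hedged sketch: thinking-mode sampling settings with mlx_lm
# (assumes make_sampler accepts these arguments in your mlx_lm version).
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

model, tokenizer = load("adeelahmad/ReasonableQwen3-4B")
sampler = make_sampler(temp=0.6, top_p=0.95, min_p=0.0, top_k=20)

messages = [{"role": "user", "content": "What is 17 * 24? Reason step by step."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
print(generate(model, tokenizer, prompt=prompt, sampler=sampler, max_tokens=2048))
```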
### Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3technicalreport,
title={Qwen3 Technical Report},
author={Qwen Team},
year={2025},
eprint={2505.09388},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.09388},
}
```
|
nbirukov/act_pick_up_3c_34
|
nbirukov
| 2025-09-12T03:55:23Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:nbirukov/pick_up_3c",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-12T03:54:43Z |
---
datasets: nbirukov/pick_up_3c
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- lerobot
- act
- robotics
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
stonermay/blockassist-bc-diving_lightfooted_caterpillar_1757649108
|
stonermay
| 2025-09-12T03:54:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"diving lightfooted caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T03:52:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving lightfooted caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VoilaRaj/81_g_XdcMCF
|
VoilaRaj
| 2025-09-12T03:53:33Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-12T03:53:05Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
omerbektasss/blockassist-bc-insectivorous_bold_lion_1757649112
|
omerbektasss
| 2025-09-12T03:52:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T03:52:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Table-R1-Zero-8B-GGUF
|
mradermacher
| 2025-09-12T03:50:48Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Table-R1/Table-R1-Zero-8B",
"base_model:quantized:Table-R1/Table-R1-Zero-8B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-12T02:47:46Z |
---
base_model: Table-R1/Table-R1-Zero-8B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Table-R1/Table-R1-Zero-8B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Table-R1-Zero-8B-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
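For example, `llama-cli` can pull a quant straight from this repo (the Q4_K_M file below is one of the quants listed in the table that follows):
```bash
llama-cli --hf-repo mradermacher/Table-R1-Zero-8B-GGUF \
  --hf-file Table-R1-Zero-8B.Q4_K_M.gguf \
  -p "Answer questions about the following table:"
```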
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Table-R1-Zero-8B-GGUF/resolve/main/Table-R1-Zero-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Table-R1-Zero-8B-GGUF/resolve/main/Table-R1-Zero-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Table-R1-Zero-8B-GGUF/resolve/main/Table-R1-Zero-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Table-R1-Zero-8B-GGUF/resolve/main/Table-R1-Zero-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Table-R1-Zero-8B-GGUF/resolve/main/Table-R1-Zero-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Table-R1-Zero-8B-GGUF/resolve/main/Table-R1-Zero-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Table-R1-Zero-8B-GGUF/resolve/main/Table-R1-Zero-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Table-R1-Zero-8B-GGUF/resolve/main/Table-R1-Zero-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Table-R1-Zero-8B-GGUF/resolve/main/Table-R1-Zero-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Table-R1-Zero-8B-GGUF/resolve/main/Table-R1-Zero-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Table-R1-Zero-8B-GGUF/resolve/main/Table-R1-Zero-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Table-R1-Zero-8B-GGUF/resolve/main/Table-R1-Zero-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
snegha24/q-FrozenLake-v1-4x4-noSlippery
|
snegha24
| 2025-09-12T03:49:18Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-12T03:49:16Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # `import gym` on older setups

# `load_from_hub` is the helper defined in the Deep RL course notebook.
model = load_from_hub(repo_id="snegha24/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"], is_slippery=False)
```
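As a sketch of what to do with the loaded model: the pickle produced by the Deep RL course stores the Q-table under `model["qtable"]` (an assumption based on that course's format), so a greedy rollout looks like this:
```python
import numpy as np

# Hedged sketch: greedy rollout with the loaded Q-table.
# Assumes model["qtable"] is an (n_states, n_actions) array, per the course format.
state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # pick the greedy action
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
print("Episode finished, final reward:", reward)
```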
|
VoilaRaj/81_g_egWmOL
|
VoilaRaj
| 2025-09-12T03:48:43Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-12T03:48:15Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
mradermacher/GRPO-MINT-1B-GGUF
|
mradermacher
| 2025-09-12T03:47:56Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"en",
"base_model:evoreign/GRPO-MINT-1B",
"base_model:quantized:evoreign/GRPO-MINT-1B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-12T03:23:10Z |
---
base_model: evoreign/GRPO-MINT-1B
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/evoreign/GRPO-MINT-1B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#GRPO-MINT-1B-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GRPO-MINT-1B-GGUF/resolve/main/GRPO-MINT-1B.Q2_K.gguf) | Q2_K | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-MINT-1B-GGUF/resolve/main/GRPO-MINT-1B.Q3_K_S.gguf) | Q3_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-MINT-1B-GGUF/resolve/main/GRPO-MINT-1B.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GRPO-MINT-1B-GGUF/resolve/main/GRPO-MINT-1B.Q3_K_L.gguf) | Q3_K_L | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-MINT-1B-GGUF/resolve/main/GRPO-MINT-1B.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-MINT-1B-GGUF/resolve/main/GRPO-MINT-1B.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GRPO-MINT-1B-GGUF/resolve/main/GRPO-MINT-1B.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GRPO-MINT-1B-GGUF/resolve/main/GRPO-MINT-1B.Q5_K_S.gguf) | Q5_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-MINT-1B-GGUF/resolve/main/GRPO-MINT-1B.Q5_K_M.gguf) | Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/GRPO-MINT-1B-GGUF/resolve/main/GRPO-MINT-1B.Q6_K.gguf) | Q6_K | 1.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/GRPO-MINT-1B-GGUF/resolve/main/GRPO-MINT-1B.Q8_0.gguf) | Q8_0 | 1.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/GRPO-MINT-1B-GGUF/resolve/main/GRPO-MINT-1B.f16.gguf) | f16 | 2.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
SivatejaBoddu/opt-qlora-adapter
|
SivatejaBoddu
| 2025-09-12T03:46:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-12T03:46:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
puneetpanwar/smolvla_all_cube_picking
|
puneetpanwar
| 2025-09-12T03:46:22Z | 8 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:puneetpanwar/all_cube_picking",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-07T21:46:14Z |
---
base_model: lerobot/smolvla_base
datasets: puneetpanwar/all_cube_picking
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- robotics
- smolvla
- lerobot
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
zhaoce/xlm-roberta-ner-ja-v5
|
zhaoce
| 2025-09-12T03:46:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-09-12T03:28:33Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
model-index:
- name: xlm-roberta-ner-ja-v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-ner-ja-v5
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0551
- Precision: 0.9044
- Recall: 0.9638
- F1-score: 0.9332
## Model description
More information needed
## Intended uses & limitations
More information needed
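Pending fuller documentation, a typical token-classification call with this checkpoint might look like the following; `aggregation_strategy="simple"` merges subword pieces into entity spans.
```python
from transformers import pipeline

# Hedged sketch: Japanese NER with this checkpoint.
ner = pipeline(
    "token-classification",
    model="zhaoce/xlm-roberta-ner-ja-v5",
    aggregation_strategy="simple",  # merge subword tokens into entity spans
)
print(ner("田中さんは東京のグーグルで働いています。"))
```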
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1-score |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:--------:|
| 0.0934 | 1.0 | 841 | 0.0589 | 0.8482 | 0.9710 | 0.9054 |
| 0.0411 | 2.0 | 1682 | 0.0438 | 0.9238 | 0.9920 | 0.9567 |
| 0.0269 | 3.0 | 2523 | 0.0428 | 0.9023 | 0.9616 | 0.9310 |
| 0.0174 | 4.0 | 3364 | 0.0493 | 0.9011 | 0.9647 | 0.9318 |
| 0.0112 | 5.0 | 4205 | 0.0551 | 0.9044 | 0.9638 | 0.9332 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.2.2+cu121
- Datasets 4.0.0
- Tokenizers 0.22.0
|
omerbektasss/blockassist-bc-keen_fast_giraffe_1757648719
|
omerbektasss
| 2025-09-12T03:46:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T03:45:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
abdoosh1000/flan-t5-autonomous-workspace
|
abdoosh1000
| 2025-09-12T03:44:15Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-09-02T04:42:37Z |
# FLAN-T5 Autonomous Training Workspace
This is a unified repository for autonomous FLAN-T5 model training operations.
## Structure
- `tracking/` - Training state and progress tracking files
- `models/` - Trained model checkpoints and metadata
- `datasets/` - Dataset processing state and chunk information
- `logs/` - Training logs and metrics
## Latest Status
Last updated: 2025-09-11T16:03:58.124221
Workspace created by: Autonomous FLAN-T5 Trainer
## Usage
This repository is automatically managed by the autonomous training system.
All training progress, model states, and dataset processing information is tracked here.
|
VoilaRaj/81_g_lertTm
|
VoilaRaj
| 2025-09-12T03:43:43Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-12T03:43:16Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
stonermay/blockassist-bc-diving_lightfooted_caterpillar_1757648492
|
stonermay
| 2025-09-12T03:42:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"diving lightfooted caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T03:42:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving lightfooted caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DungND1107/grape-qlora-legal-adapter-step-15000
|
DungND1107
| 2025-09-12T03:40:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3_text",
"trl",
"en",
"base_model:nqdhocai/LegalGemma-3-1b-it",
"base_model:finetune:nqdhocai/LegalGemma-3-1b-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-05T16:46:15Z |
---
base_model: nqdhocai/LegalGemma-3-1b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** DungND1107
- **License:** apache-2.0
- **Finetuned from model:** nqdhocai/LegalGemma-3-1b-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
0xGareeb/blockassist
|
0xGareeb
| 2025-09-12T03:39:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"diving jumping llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T03:02:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving jumping llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
moyixiao/Qwen3-0.6B-bnpo6-f16-300
|
moyixiao
| 2025-09-12T03:39:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-12T03:39:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
VoilaRaj/81_g_tFprR0
|
VoilaRaj
| 2025-09-12T03:38:55Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-12T03:38:27Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
VoilaRaj/81_g_oKMHeG
|
VoilaRaj
| 2025-09-12T03:34:07Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-12T03:33:39Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
omerbektasss/blockassist-bc-keen_fast_giraffe_1757647997
|
omerbektasss
| 2025-09-12T03:33:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T03:33:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dongqn69/blockassist
|
dongqn69
| 2025-09-12T03:33:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"carnivorous waddling rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T18:19:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- carnivorous waddling rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
camiellia/qwen2_5_vl_fiubench_checkpoint_0
|
camiellia
| 2025-09-12T03:32:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-11T19:16:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fanwu103/distilgpt2-finetuned-wikitext2
|
fanwu103
| 2025-09-12T03:30:16Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-12T03:28:54Z |
---
library_name: transformers
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7572
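A validation loss of 3.7572 corresponds to a perplexity of roughly exp(3.7572) ≈ 42.8 on the evaluation set. A minimal usage sketch with the `transformers` pipeline (assuming the checkpoint in this repo is complete):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint directly from the Hub
generator = pipeline("text-generation", model="fanwu103/distilgpt2-finetuned-wikitext2")

# Sample a continuation; the prompt and decoding parameters are illustrative
print(generator("The history of natural language processing", max_new_tokens=50, do_sample=True)[0]["generated_text"])
```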
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 292 | 3.8125 |
| 0.4959 | 2.0 | 584 | 3.7695 |
| 0.4959 | 3.0 | 876 | 3.7572 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.6.0+git45896ac
- Datasets 4.0.0
- Tokenizers 0.22.0
|
dinhhung1508/ViModernBERT2
|
dinhhung1508
| 2025-09-12T03:28:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"text-classification",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:clapAI/modernBERT-base-multilingual-sentiment",
"base_model:finetune:clapAI/modernBERT-base-multilingual-sentiment",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-12T03:27:51Z |
---
base_model: clapAI/modernBERT-base-multilingual-sentiment
tags:
- text-generation-inference
- transformers
- unsloth
- modernbert
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** dinhhung1508
- **License:** apache-2.0
- **Finetuned from model :** clapAI/modernBERT-base-multilingual-sentiment
This modernbert model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
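A minimal inference sketch with the `transformers` pipeline (the input string is hypothetical; the card does not document the label set, so check the returned labels against your own mapping):
```python
from transformers import pipeline

# Load the fine-tuned ModernBERT classifier from the Hub
classifier = pipeline("text-classification", model="dinhhung1508/ViModernBERT2")

# Example input; the base model is a multilingual sentiment checkpoint
print(classifier("This product exceeded my expectations."))
```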
|
omerbektasss/blockassist-bc-insectivorous_bold_lion_1757647646
|
omerbektasss
| 2025-09-12T03:27:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T03:27:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
teysty/vjepa2-vitl-fpc16-256-ssv2-fdet_64-frames_1clip_1indice_cleaned-new-split_20pochs
|
teysty
| 2025-09-12T03:27:04Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vjepa2",
"video-classification",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2025-09-12T03:26:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Cyborg-AI/openai_oss_20b_evo
|
Cyborg-AI
| 2025-09-12T03:24:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-12T03:03:13Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Cyborg-AI
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
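A minimal chat-style sketch with the `transformers` pipeline, assuming this repo holds a complete merged checkpoint rather than a bare adapter:
```python
from transformers import pipeline

# gpt-oss-20b is large; device_map="auto" shards it across available GPUs
pipe = pipeline("text-generation", model="Cyborg-AI/openai_oss_20b_evo", torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Summarize what a mixture-of-experts layer does in two sentences."}]
print(pipe(messages, max_new_tokens=128)[0]["generated_text"])
```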
|
codefactory4791/Qwen3-0.6B-SFT-20250912030303
|
codefactory4791
| 2025-09-12T03:20:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"hf_jobs",
"sft",
"conversational",
"dataset:trl-lib/Capybara",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-12T03:03:45Z |
---
base_model: Qwen/Qwen3-0.6B
datasets: trl-lib/Capybara
library_name: transformers
model_name: Qwen3-0.6B-SFT-20250912030303
tags:
- generated_from_trainer
- trl
- hf_jobs
- sft
licence: license
---
# Model Card for Qwen3-0.6B-SFT-20250912030303
This model is a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) on the [trl-lib/Capybara](https://huggingface.co/datasets/trl-lib/Capybara) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="codefactory4791/Qwen3-0.6B-SFT-20250912030303", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu128
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
felixmayor/gr00t_orange_cube
|
felixmayor
| 2025-09-12T03:20:04Z | 0 | 0 | null |
[
"safetensors",
"gr00t_n1_5",
"robotics",
"gr00t",
"manipulation",
"so101",
"en",
"region:us"
] |
robotics
| 2025-09-12T03:13:39Z |
---
tags:
- robotics
- gr00t
- manipulation
- so101
language:
- en
pipeline_tag: robotics
---
# GR00T Orange Cube Manipulation Model
Fine-tuned NVIDIA GR00T N1.5 model for orange cube pick-and-place tasks using a dual-camera SO-101 robot.
## Model Details
- **Base Model**: nvidia/GR00T-N1.5-3B
- **Training Steps**: 10,000
- **Dataset**: 154 episodes, 68,468 frames
- **Cameras**: Dual setup (fpv + top) resized to 224x224
- **Action Space**: 6D
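Checkpoint loading goes through the Isaac-GR00T codebase; a minimal, hedged sketch for fetching the weights locally (the policy-loading call itself lives in the Isaac-GR00T repo and is not shown here):
```python
from huggingface_hub import snapshot_download

# Download the fine-tuned GR00T N1.5 checkpoint to a local directory
ckpt_dir = snapshot_download(repo_id="felixmayor/gr00t_orange_cube")
print(ckpt_dir)  # pass this path to the Isaac-GR00T policy loader
```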
|
jinx2321/byt5-all-araea-1e4-je-4
|
jinx2321
| 2025-09-12T03:20:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/byt5-small",
"base_model:finetune:google/byt5-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-09-11T22:12:56Z |
---
library_name: transformers
license: apache-2.0
base_model: google/byt5-small
tags:
- generated_from_trainer
model-index:
- name: byt5-all-araea-1e4-je-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-all-araea-1e4-je-4
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on the None dataset.
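The card does not state the training task; assuming standard seq2seq use of this ByT5 checkpoint, a minimal sketch:
```python
from transformers import pipeline

# ByT5 operates on raw bytes, so no special tokenization setup is required
t2t = pipeline("text2text-generation", model="jinx2321/byt5-all-araea-1e4-je-4")
print(t2t("example input text", max_new_tokens=64))
```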
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
geminiWhale/dummy-model
|
geminiWhale
| 2025-09-12T03:18:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-09-12T03:18:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
klrq/Qwen2_5_VL_7B_SFT
|
klrq
| 2025-09-12T03:16:16Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen2.5-VL-7B-Instruct",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-09-11T06:13:10Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-VL-7B-Instruct
tags:
- base_model:adapter:Qwen/Qwen2.5-VL-7B-Instruct
- lora
- transformers
pipeline_tag: text-generation
model-index:
- name: Qwen2_5_VL_7B_SFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2_5_VL_7B_SFT
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) on an unknown dataset.
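Since this repo contains a LoRA adapter (PEFT), it is loaded on top of the base model; a minimal sketch, assuming the adapter weights in this repo are complete:
```python
from peft import PeftModel
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

# Load the base vision-language model, then attach the LoRA adapter
base = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "klrq/Qwen2_5_VL_7B_SFT")
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
```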
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 50
### Training results
### Framework versions
- PEFT 0.17.1
- Transformers 4.56.1
- Pytorch 2.7.1+cu118
- Datasets 4.0.0
- Tokenizers 0.22.0
|
kshitijthakkar/loggenix-moe-0.3B-A0.1B-e3-lr7e5-b16-4090-v7-sft-v1
|
kshitijthakkar
| 2025-09-12T03:14:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-12T03:14:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
VoilaRaj/81_g_5m59L8
|
VoilaRaj
| 2025-09-12T03:14:43Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-12T03:14:15Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
jinx2321/byt5-all-araea-1e4-ko-4
|
jinx2321
| 2025-09-12T03:14:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/byt5-small",
"base_model:finetune:google/byt5-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-09-11T22:12:56Z |
---
library_name: transformers
license: apache-2.0
base_model: google/byt5-small
tags:
- generated_from_trainer
model-index:
- name: byt5-all-araea-1e4-ko-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-all-araea-1e4-ko-4
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
Jaehun/lpt2-dpo-130k-sft-247k
|
Jaehun
| 2025-09-12T03:13:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-12T03:07:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ViFortune-AI/Qwen3-VL-1B-Merged
|
ViFortune-AI
| 2025-09-12T03:13:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"multimodal",
"vision-language",
"qwen3",
"qwen2.5-vl",
"image-text-to-text",
"conversational",
"en",
"arxiv:2309.00071",
"arxiv:2409.12191",
"arxiv:2308.12966",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-09-12T03:06:08Z |
---
license_name: vifortune-research
license_link: https://huggingface.co/Qwen/Qwen3-VL-1B/blob/main/LICENSE
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- vision-language
- qwen3
- qwen2.5-vl
library_name: transformers
---
# Qwen3-VL (Merged Model)
<a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%20Qwen3-VL%20Chat-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Introduction
This repository contains a **merged multimodal model** that combines:
* The **language backbone from Qwen3**, which improves reasoning, alignment, and long-context language understanding.
* The **visual encoder branch from Qwen2.5-VL-3B-Instruct**, which provides strong perception ability for images, documents, charts, and videos.
By merging the strengths of Qwen3 and Qwen2.5-VL, this model inherits both **advanced LLM reasoning** and **robust multimodal perception**.
---
## Key Features
* **Stronger language reasoning** from Qwen3.
* **Visual understanding**: capable of analyzing images, OCR texts, layouts, charts, and UI screenshots.
* **Video comprehension**: can process long videos, capture events, and align with temporal sequences.
* **Structured outputs**: supports generating JSON-style results for tasks like tables, forms, and invoices.
* **Agentic ability**: can act as a visual agent, suitable for tool use, screen interaction, and embodied AI.
---
## Model Architecture
* **Backbone**: Qwen3 LLM.
* **Vision Encoder**: Qwen2.5-VL-3B visual branch with mRoPE extension for temporal alignment.
* **Multimodal Fusion**: Cross-attention layers align vision and language representations.
* **Context Length**: up to 32k tokens (YaRN extrapolation possible, see below).
---
## Quickstart
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
# Load the merged model (the qwen2_5_vl architecture needs the VL class, not AutoModelForCausalLM)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
"YOUR_REPO_NAME/Qwen3-VL-Merged",
torch_dtype="auto",
device_map="auto"
)
# Processor (tokenizer + image/video preprocessing)
processor = AutoProcessor.from_pretrained("YOUR_REPO_NAME/Qwen3-VL-Merged")
# Example: Image + Text prompt
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/example.jpg"},
{"type": "text", "text": "Describe this image in detail."},
],
}
]
# Prepare inputs
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt"
).to("cuda")
# Generate
outputs = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(outputs, skip_special_tokens=True))
```
<details>
<summary>Video inference</summary>
```python
# Messages containing a images list as a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": [
"file:///path/to/frame1.jpg",
"file:///path/to/frame2.jpg",
"file:///path/to/frame3.jpg",
"file:///path/to/frame4.jpg",
],
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Messages containing a local video path and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": "file:///path/to/video1.mp4",
"max_pixels": 360 * 420,
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Messages containing a video url and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-VL/space_woaudio.mp4",
},
{"type": "text", "text": "Describe this video."},
],
}
]
# In Qwen2.5-VL, frame rate information is also input into the model to align with absolute time.
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
# fps is passed via **video_kwargs (returned by process_vision_info above)
padding=True,
return_tensors="pt",
**video_kwargs,
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
Video URL compatibility largely depends on the third-party library version; details are in the table below. Change the backend with `FORCE_QWENVL_VIDEO_READER=torchvision` or `FORCE_QWENVL_VIDEO_READER=decord` if you prefer not to use the default one.
| Backend | HTTP | HTTPS |
|-------------|------|-------|
| torchvision >= 0.19.0 | ✅ | ✅ |
| torchvision < 0.19.0 | ❌ | ❌ |
| decord | ✅ | ❌ |
</details>
<details>
<summary>Batch inference</summary>
```python
# Sample messages for batch inference
messages1 = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "What are the common elements in these pictures?"},
],
}
]
messages2 = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who are you?"},
]
# Combine messages for batch processing
messages = [messages1, messages2]
# Preparation for batch inference
texts = [
processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
for msg in messages
]
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=texts,
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Batch Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_texts = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_texts)
```
</details>
### 🤖 ModelScope
We strongly advise users, especially those in mainland China, to use ModelScope. `snapshot_download` can help you resolve issues with downloading checkpoints.
### More Usage Tips
For input images, we support local files, base64, and URLs. For videos, we currently only support local files.
```python
# You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text.
## Local file path
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Image URL
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "http://path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Base64 encoded image
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "data:image;base64,/9j/..."},
{"type": "text", "text": "Describe this image."},
],
}
]
```
#### Image Resolution for performance boost
The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.
```python
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
"Qwen/Qwen2.5-VL-3B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels
)
```
Besides, we provide two methods for fine-grained control over the image size input to the model:
1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels.
2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28.
```python
# resized_height and resized_width
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"resized_height": 280,
"resized_width": 420,
},
{"type": "text", "text": "Describe this image."},
],
}
]
# min_pixels and max_pixels
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"min_pixels": 50176,
"max_pixels": 50176,
},
{"type": "text", "text": "Describe this image."},
],
}
]
```
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```
{
...,
"type": "yarn",
"mrope_section": [
16,
24,
24
],
"factor": 4,
"original_max_position_embeddings": 32768
}
```
However, it should be noted that this method has a significant impact on the performance of temporal and spatial localization tasks, and is therefore not recommended for use.
At the same time, for long video inputs, since mRoPE itself is economical with position ids, `max_position_embeddings` can simply be raised to a larger value, such as 64k.
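For example, the corresponding one-key edit in `config.json` (a sketch; 64k = 65,536 positions):
```
{
  ...,
  "max_position_embeddings": 65536
}
```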
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen2.5-VL,
title = {Qwen2.5-VL},
url = {https://qwenlm.github.io/blog/qwen2.5-vl/},
author = {Qwen Team},
month = {January},
year = {2025}
}
@article{Qwen2VL,
title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution},
author={Wang, Peng and Bai, Shuai and Tan, Sinan and Wang, Shijie and Fan, Zhihao and Bai, Jinze and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Fan, Yang and Dang, Kai and Du, Mengfei and Ren, Xuancheng and Men, Rui and Liu, Dayiheng and Zhou, Chang and Zhou, Jingren and Lin, Junyang},
journal={arXiv preprint arXiv:2409.12191},
year={2024}
}
@article{Qwen-VL,
title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond},
author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
journal={arXiv preprint arXiv:2308.12966},
year={2023}
}
```
|
stonermay/blockassist-bc-diving_lightfooted_caterpillar_1757646644
|
stonermay
| 2025-09-12T03:12:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"diving lightfooted caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T03:11:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving lightfooted caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ahmedheakl/pts-lora
|
ahmedheakl
| 2025-09-12T03:10:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-09-12T03:08:00Z |
---
base_model: meta-llama/Llama-3.2-3B-Instruct
library_name: transformers
model_name: llama3b-pixart-4bs-2grad-lora
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for llama3b-pixart-4bs-2grad-lora
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ahmedheakl/pts-lora", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ahmed-heakl/huggingface/runs/q6prjraf)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.52.0
- Pytorch: 2.7.0+cu118
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
omerbektasss/blockassist-bc-keen_fast_giraffe_1757646548
|
omerbektasss
| 2025-09-12T03:10:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T03:09:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VoilaRaj/81_g_RPYu58
|
VoilaRaj
| 2025-09-12T03:09:33Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-12T03:09:05Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
e12morgan/Taxi-v3
|
e12morgan
| 2025-09-12T03:07:45Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-12T03:07:43Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="e12morgan/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
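`load_from_hub` is not part of a published library; a minimal sketch of the helper as used in the Hugging Face Deep RL course, assuming the pickle stores a dict with an `env_id` key:
```python
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str):
    """Download and unpickle a saved Q-learning model from the Hub."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="e12morgan/Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])
```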
|
vertigoq3/email-classifier-bert
|
vertigoq3
| 2025-09-12T03:07:38Z | 0 | 1 | null |
[
"safetensors",
"bert",
"text-classification",
"spanish",
"email-classification",
"multilingual",
"es",
"dataset:custom",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-12T00:34:28Z |
---
license: mit
language: es
tags:
- text-classification
- spanish
- email-classification
- bert
- multilingual
datasets:
- custom
metrics:
- accuracy
- f1
model-index:
- name: vertigoq3/email-classifier-bert
results:
- task:
type: text-classification
name: Email Classification
dataset:
type: custom
name: Email Dataset
metrics:
- type: accuracy
value: 0.0
- type: f1
value: 0.0
---
# email-classifier-bert
Multilingual BERT model fine-tuned for classifying emails in Spanish.
## Description
This model is based on `bert-base-multilingual-cased` and has been trained to classify emails into different categories. It can automatically identify the type of email based on its content.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import numpy as np
import pickle
# Load the model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained("vertigoq3/email-classifier-bert")
tokenizer = AutoTokenizer.from_pretrained("vertigoq3/email-classifier-bert")
# Load the label encoder
with open("label_encoder.pkl", "rb") as f:
encoder = pickle.load(f)
def clasificar_email(texto):
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
inputs = tokenizer(texto, return_tensors="pt", truncation=True, padding=True, max_length=512)
inputs = {k: v.to(device) for k, v in inputs.items()}
with torch.no_grad():
outputs = model(**inputs)
pred = np.argmax(outputs.logits.detach().cpu().numpy(), axis=1)
return encoder.inverse_transform(pred)[0]
# Usage example
resultado = clasificar_email("¿Cuándo abren mañana?")  # "When do you open tomorrow?"
print(f"Category: {resultado}")
```
## Installation
```bash
pip install transformers torch numpy scikit-learn
```
## Training
The model was trained with:
- **Base Model**: bert-base-multilingual-cased
- **Epochs**: 6
- **Learning Rate**: 2e-5
- **Batch Size**: 8
- **Weight Decay**: 0.01
## Limitations
- The model is optimized for Spanish text
- It requires the `label_encoder.pkl` file to work correctly
- The classification categories depend on the training dataset
## Contact
For questions or issues, contact the model author.
|
alpcaferoglu/Qwen2.5-Coder-3B-Instruct_bd_cs_t2sws-t2s_r32_a32_e2_bs2_gas4_lr0.0001_fs0f_cvdt_sftreason
|
alpcaferoglu
| 2025-09-12T03:07:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-11T02:34:28Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
second-state/embeddinggemma-300m-GGUF
|
second-state
| 2025-09-12T03:06:34Z | 1,412 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"gguf",
"gemma3_text",
"sentence-similarity",
"base_model:google/embeddinggemma-300m",
"base_model:quantized:google/embeddinggemma-300m",
"license:gemma",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-06T08:33:46Z |
---
license: gemma
pipeline_tag: sentence-similarity
library_name: sentence-transformers
base_model: google/embeddinggemma-300m
model_creator: google
model_name: embeddinggemma-300m
quantized_by: Second State Inc.
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# embeddinggemma-300m-Embedding-GGUF
## Original Model
[google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m)
## Run with LlamaEdge
- LlamaEdge version: [v0.26.1](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.26.1) and above
- Prompt template
- Prompt type: `embedding`
- Context size: `2048`
- Embedding size: `128, 256, 512, 768`
- Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:embeddinggemma-300m-f16.gguf \
llama-api-server.wasm \
--prompt-template embedding \
--ctx-size 768 \
--model-name embeddinggemma-300m
```
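Once the service is running, embeddings can be requested over its OpenAI-compatible API. A minimal sketch, assuming the server listens on the default port 8080 (adjust host/port to your setup):
```python
import requests

resp = requests.post(
    "http://localhost:8080/v1/embeddings",  # assumption: default llama-api-server address
    json={"model": "embeddinggemma-300m", "input": ["hello world"]},
)
resp.raise_for_status()
print(resp.json()["data"][0]["embedding"][:8])  # first few dimensions
```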
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [embeddinggemma-300m-Q2_K.gguf](https://huggingface.co/second-state/embeddinggemma-300m-Embedding-GGUF/blob/main/embeddinggemma-300m-Q2_K.gguf) | Q2_K | 2 | 212 MB| smallest, significant quality loss - not recommended for most purposes |
| [embeddinggemma-300m-Q3_K_L.gguf](https://huggingface.co/second-state/embeddinggemma-300m-Embedding-GGUF/blob/main/embeddinggemma-300m-Q3_K_L.gguf) | Q3_K_L | 3 | 227 MB| small, substantial quality loss |
| [embeddinggemma-300m-Q3_K_M.gguf](https://huggingface.co/second-state/embeddinggemma-300m-Embedding-GGUF/blob/main/embeddinggemma-300m-Q3_K_M.gguf) | Q3_K_M | 3 | 224 MB| very small, high quality loss |
| [embeddinggemma-300m-Q3_K_S.gguf](https://huggingface.co/second-state/embeddinggemma-300m-Embedding-GGUF/blob/main/embeddinggemma-300m-Q3_K_S.gguf) | Q3_K_S | 3 | 218 MB| very small, high quality loss |
| [embeddinggemma-300m-Q4_0.gguf](https://huggingface.co/second-state/embeddinggemma-300m-Embedding-GGUF/blob/main/embeddinggemma-300m-Q4_0.gguf) | Q4_0 | 4 | 229 MB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [embeddinggemma-300m-Q4_K_M.gguf](https://huggingface.co/second-state/embeddinggemma-300m-Embedding-GGUF/blob/main/embeddinggemma-300m-Q4_K_M.gguf) | Q4_K_M | 4 | 236 MB| medium, balanced quality - recommended |
| [embeddinggemma-300m-Q4_K_S.gguf](https://huggingface.co/second-state/embeddinggemma-300m-Embedding-GGUF/blob/main/embeddinggemma-300m-Q4_K_S.gguf) | Q4_K_S | 4 | 232 MB| small, greater quality loss |
| [embeddinggemma-300m-Q5_0.gguf](https://huggingface.co/second-state/embeddinggemma-300m-Embedding-GGUF/blob/main/embeddinggemma-300m-Q5_0.gguf) | Q5_0 | 5 | 242 MB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [embeddinggemma-300m-Q5_K_M.gguf](https://huggingface.co/second-state/embeddinggemma-300m-Embedding-GGUF/blob/main/embeddinggemma-300m-Q5_K_M.gguf) | Q5_K_M | 5 | 247 MB| large, very low quality loss - recommended |
| [embeddinggemma-300m-Q5_K_S.gguf](https://huggingface.co/second-state/embeddinggemma-300m-Embedding-GGUF/blob/main/embeddinggemma-300m-Q5_K_S.gguf) | Q5_K_S | 5 | 243 MB| large, low quality loss - recommended |
| [embeddinggemma-300m-Q6_K.gguf](https://huggingface.co/second-state/embeddinggemma-300m-Embedding-GGUF/blob/main/embeddinggemma-300m-Q6_K.gguf) | Q6_K | 6 | 260 MB| very large, extremely low quality loss |
| [embeddinggemma-300m-Q8_0.gguf](https://huggingface.co/second-state/embeddinggemma-300m-Embedding-GGUF/blob/main/embeddinggemma-300m-Q8_0.gguf) | Q8_0 | 8 | 329 MB| very large, extremely low quality loss - not recommended |
| [embeddinggemma-300m-f16.gguf](https://huggingface.co/second-state/embeddinggemma-300m-Embedding-GGUF/blob/main/embeddinggemma-300m-f16.gguf) | f16 | 16 | 616 MB| very large, extremely low quality loss - not recommended |
*Quantized with llama.cpp b6397*
|
luckeciano/Qwen-2.5-7B-GRPO-LR-3e-5-Adam-HessianMaskToken-1e-3-Symmetric-v2_8791
|
luckeciano
| 2025-09-12T03:04:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-11T22:25:42Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-LR-3e-5-Adam-HessianMaskToken-1e-3-Symmetric-v2_8791
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-LR-3e-5-Adam-HessianMaskToken-1e-3-Symmetric-v2_8791
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-LR-3e-5-Adam-HessianMaskToken-1e-3-Symmetric-v2_8791", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/25zzc7lo)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
lagoscity/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-woolly_striped_albatross
|
lagoscity
| 2025-09-12T03:02:23Z | 194 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am woolly_striped_albatross",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-28T20:32:33Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am woolly_striped_albatross
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zhepoch/test1
|
zhepoch
| 2025-09-12T03:02:11Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-12T03:02:11Z |
---
license: apache-2.0
---
|
drbaph/HunyuanImage-2.1_fp8
|
drbaph
| 2025-09-12T03:02:03Z | 0 | 13 |
HunyuanImage-2.1
|
[
"HunyuanImage-2.1",
"text-to-image",
"comfyui",
"diffusers",
"en",
"zh",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-09T22:38:14Z |
---
library_name: HunyuanImage-2.1
license: other
license_name: tencent-hunyuan-community
license_link: https://github.com/Tencent-Hunyuan/HunyuanImage-2.1/blob/master/LICENSE
language:
- en
- zh
tags:
- text-to-image
- comfyui
- diffusers
pipeline_tag: text-to-image
extra_gated_eu_disallowed: true
---
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63473b59e5c0717e6737b872/5DZez8C7TeFwRn3FcKDix.png" alt="HunyuanImage-2.1 Banner" />
<h1> HunyuanImage-2.1 fp8 e4m3fn </h1>
<h2>An Efficient Diffusion Model for High-Resolution (2K) Text-to-Image Generation</h2>
</div>
<div align="center">
<a href="https://github.com/Tencent-Hunyuan/HunyuanImage-2.1" target="_blank"><img src="https://img.shields.io/badge/Code-black.svg?logo=github" height="22px"></a>
<a href="https://huggingface.co/spaces/tencent/HunyuanImage-2.1" target="_blank">
<img src="https://img.shields.io/badge/Demo%20Page-blue" height="22px"></a>
<a href="https://huggingface.co/tencent/HunyuanImage-2.1" target="_blank"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Models-d96902.svg" height="22px"></a>
<a href="#" target="_blank"><img src="https://img.shields.io/badge/Report-Coming%20Soon-blue" height="22px"></a>
<a href="https://hunyuan-promptenhancer.github.io/" target="_blank"><img src="https://img.shields.io/badge/PromptEnhancer-bb8a2e.svg?logo=github" height="22px"></a>
<a href="https://x.com/TencentHunyuan" target="_blank"><img src="https://img.shields.io/badge/Hunyuan-black.svg?logo=x" height="22px"></a>
</div>
---
## **Performance on RTX 5090**
> When using **HunyuanImage-2.1** with the **quantized encoder** + **quantized base model**,
> VRAM usage on an **NVIDIA RTX 5090** typically ranges between **26 GB and 30 GB**, with an average
> inference time of about 16 seconds depending on resolution, batch size, and prompt complexity.
> **It is also reported to work on 16 GB VRAM GPUs.**
⚠️ **Important Note:**
The **refiner** is still not implemented and is **not ready for use in ComfyUI**.
However, the **distilled model now works in ComfyUI** with recommended settings of **8 steps / 1.5-2.5 CFG**.
---
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63473b59e5c0717e6737b872/auZ_xmiKPw0QdBYUrTLn-.png" alt="Image1"/>
</p>
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63473b59e5c0717e6737b872/qod1zCPWjzOZSNcOWx49-.png" alt="Image2"/>
</p>


---
## **Download Quantized Model (FP8 e4m3fn)**
[**Download hunyuanimage2.1_fp8_e4m3fn.safetensors**](https://huggingface.co/drbaph/HunyuanImage-2.1_fp8/blob/main/hunyuanimage2.1_fp8_e4m3fn.safetensors)
---
### **Workflow Notes**
- **Model:** HunyuanImage-2.1
- **Mode:** Quantized Encoder + Quantized Base Model
- **VRAM Usage:** ~26GBβ30GB on RTX 5090
- **Resolution Tested:** 2K (2048Γ2048)
- **Frameworks:** ComfyUI & Diffusers
- **Optimisations:** Works with Patch Sage Attention + Lazycache / TeaCache ✅
- **Distilled Model:** ✅ Now works in ComfyUI with **8 steps / 1.5-2.5 CFG**
- **Refiner:** ❌ Still not implemented, **not available in ComfyUI**
- **License:** [tencent-hunyuan-community](https://github.com/Tencent-Hunyuan/HunyuanImage-2.1/blob/master/LICENSE)
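To confirm what the fp8 checkpoint contains, the tensors can be inspected with the `safetensors` library. A minimal sketch, assuming the file has been downloaded locally and a recent PyTorch build with `torch.float8_e4m3fn` support:
```python
from safetensors import safe_open

path = "hunyuanimage2.1_fp8_e4m3fn.safetensors"  # downloaded from this repo
with safe_open(path, framework="pt") as f:
    for name in list(f.keys())[:5]:  # peek at the first few tensors
        t = f.get_tensor(name)
        print(name, t.dtype, tuple(t.shape))
# most weight tensors should report torch.float8_e4m3fn
```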
---
<p align="center">
🚀 **Optimized for High-Resolution, Memory-Efficient Text-to-Image Generation**
</p>
|
stonermay/blockassist-bc-diving_lightfooted_caterpillar_1757646028
|
stonermay
| 2025-09-12T03:01:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"diving lightfooted caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T03:01:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving lightfooted caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tiny-random/qwen3-next-moe
|
tiny-random
| 2025-09-12T03:00:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3_next",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-Next-80B-A3B-Instruct",
"base_model:finetune:Qwen/Qwen3-Next-80B-A3B-Instruct",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-12T02:58:33Z |
---
library_name: transformers
pipeline_tag: text-generation
inference: true
widget:
- text: Hello!
example_title: Hello world
group: Python
base_model:
- Qwen/Qwen3-Next-80B-A3B-Instruct
---
This tiny model is intended for debugging. It is randomly initialized using the configuration adapted from [Qwen/Qwen3-Next-80B-A3B-Instruct](https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Instruct).
### Example usage:
- vLLM
```bash
VLLM_ALLOW_LONG_MAX_MODEL_LEN=1 \
vllm serve tiny-random/qwen3-next-moe \
--tensor-parallel-size 4 \
--max-model-len 262144 \
--speculative-config '{"method":"qwen3_next_mtp","num_speculative_tokens":2}'
```
- SGLang
```bash
SGLANG_ALLOW_OVERWRITE_LONGER_CONTEXT_LEN=1 \
python -m sglang.launch_server \
--model-path tiny-random/qwen3-next-moe \
--tp-size 4 --context-length 262144 \
--mem-fraction-static 0.8 \
--speculative-algo NEXTN \
--speculative-num-steps 3 \
--speculative-eagle-topk 1 \
--speculative-num-draft-tokens 4
```
- Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_id = "tiny-random/qwen3-next-moe"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
dtype="auto",
device_map="cuda",
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt},
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=8,
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
content = tokenizer.decode(output_ids, skip_special_tokens=True)
print("content:", content)
```
### Codes to create this repo:
```python
from copy import deepcopy
import torch
import torch.nn as nn
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
GenerationConfig,
pipeline,
set_seed,
)
source_model_id = "Qwen/Qwen3-Next-80B-A3B-Instruct"
save_folder = "/tmp/tiny-random/qwen3-next-moe"
tokenizer = AutoTokenizer.from_pretrained(
source_model_id, trust_remote_code=True,
)
tokenizer.save_pretrained(save_folder)
config = AutoConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
config._name_or_path = source_model_id
config.hidden_size = 8
config.intermediate_size = 32
config.head_dim = 32
config.num_key_value_heads = 8
config.num_attention_heads = 16
config.num_hidden_layers = 4
config.tie_word_embeddings = False
config.linear_num_key_heads = 8
config.linear_num_value_heads = 16
config.moe_intermediate_size = 32
config.num_experts = 32
config.num_experts_per_tok = 10
config.layer_types = config.layer_types[:4]
config.shared_expert_intermediate_size = 32
model = AutoModelForCausalLM.from_config(
config,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
)
model.generation_config = GenerationConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
# MTP
model.mtp = nn.ModuleDict({
"pre_fc_norm_embedding": nn.RMSNorm(config.hidden_size),
"fc": nn.Linear(config.hidden_size * 2, config.hidden_size, bias=False),
"norm": nn.RMSNorm(config.hidden_size),
"pre_fc_norm_hidden": nn.RMSNorm(config.hidden_size),
"layers": nn.ModuleList([deepcopy(model.model.layers[3])]),
})
model = model.to(torch.bfloat16)
set_seed(42)
with torch.no_grad():
for name, p in sorted(model.named_parameters()):
torch.nn.init.normal_(p, 0, 0.1)
print(name, p.shape)
model.save_pretrained(save_folder)
```
### Printing the model:
```text
Qwen3NextForCausalLM(
(model): Qwen3NextModel(
(embed_tokens): Embedding(151936, 8)
(layers): ModuleList(
(0-2): 3 x Qwen3NextDecoderLayer(
(linear_attn): Qwen3NextGatedDeltaNet(
(act): SiLU()
(conv1d): Conv1d(4096, 4096, kernel_size=(4,), stride=(1,), padding=(3,), groups=4096, bias=False)
(in_proj_qkvz): Linear(in_features=8, out_features=6144, bias=False)
(in_proj_ba): Linear(in_features=8, out_features=32, bias=False)
(norm): FusedRMSNormGated(128, eps=1e-06, activation=silu)
(out_proj): Linear(in_features=2048, out_features=8, bias=False)
)
(mlp): Qwen3NextSparseMoeBlock(
(gate): Linear(in_features=8, out_features=32, bias=False)
(experts): ModuleList(
(0-31): 32 x Qwen3NextMLP(
(gate_proj): Linear(in_features=8, out_features=32, bias=False)
(up_proj): Linear(in_features=8, out_features=32, bias=False)
(down_proj): Linear(in_features=32, out_features=8, bias=False)
(act_fn): SiLU()
)
)
(shared_expert): Qwen3NextMLP(
(gate_proj): Linear(in_features=8, out_features=32, bias=False)
(up_proj): Linear(in_features=8, out_features=32, bias=False)
(down_proj): Linear(in_features=32, out_features=8, bias=False)
(act_fn): SiLU()
)
(shared_expert_gate): Linear(in_features=8, out_features=1, bias=False)
)
(input_layernorm): Qwen3NextRMSNorm((8,), eps=1e-06)
(post_attention_layernorm): Qwen3NextRMSNorm((8,), eps=1e-06)
)
(3): Qwen3NextDecoderLayer(
(self_attn): Qwen3NextAttention(
(q_proj): Linear(in_features=8, out_features=1024, bias=False)
(k_proj): Linear(in_features=8, out_features=256, bias=False)
(v_proj): Linear(in_features=8, out_features=256, bias=False)
(o_proj): Linear(in_features=512, out_features=8, bias=False)
(q_norm): Qwen3NextRMSNorm((32,), eps=1e-06)
(k_norm): Qwen3NextRMSNorm((32,), eps=1e-06)
)
(mlp): Qwen3NextSparseMoeBlock(
(gate): Linear(in_features=8, out_features=32, bias=False)
(experts): ModuleList(
(0-31): 32 x Qwen3NextMLP(
(gate_proj): Linear(in_features=8, out_features=32, bias=False)
(up_proj): Linear(in_features=8, out_features=32, bias=False)
(down_proj): Linear(in_features=32, out_features=8, bias=False)
(act_fn): SiLU()
)
)
(shared_expert): Qwen3NextMLP(
(gate_proj): Linear(in_features=8, out_features=32, bias=False)
(up_proj): Linear(in_features=8, out_features=32, bias=False)
(down_proj): Linear(in_features=32, out_features=8, bias=False)
(act_fn): SiLU()
)
(shared_expert_gate): Linear(in_features=8, out_features=1, bias=False)
)
(input_layernorm): Qwen3NextRMSNorm((8,), eps=1e-06)
(post_attention_layernorm): Qwen3NextRMSNorm((8,), eps=1e-06)
)
)
(norm): Qwen3NextRMSNorm((8,), eps=1e-06)
(rotary_emb): Qwen3NextRotaryEmbedding()
)
(lm_head): Linear(in_features=8, out_features=151936, bias=False)
(mtp): ModuleDict(
(pre_fc_norm_embedding): RMSNorm((8,), eps=None, elementwise_affine=True)
(fc): Linear(in_features=16, out_features=8, bias=False)
(norm): RMSNorm((8,), eps=None, elementwise_affine=True)
(pre_fc_norm_hidden): RMSNorm((8,), eps=None, elementwise_affine=True)
(layers): ModuleList(
(0): Qwen3NextDecoderLayer(
(self_attn): Qwen3NextAttention(
(q_proj): Linear(in_features=8, out_features=1024, bias=False)
(k_proj): Linear(in_features=8, out_features=256, bias=False)
(v_proj): Linear(in_features=8, out_features=256, bias=False)
(o_proj): Linear(in_features=512, out_features=8, bias=False)
(q_norm): Qwen3NextRMSNorm((32,), eps=1e-06)
(k_norm): Qwen3NextRMSNorm((32,), eps=1e-06)
)
(mlp): Qwen3NextSparseMoeBlock(
(gate): Linear(in_features=8, out_features=32, bias=False)
(experts): ModuleList(
(0-31): 32 x Qwen3NextMLP(
(gate_proj): Linear(in_features=8, out_features=32, bias=False)
(up_proj): Linear(in_features=8, out_features=32, bias=False)
(down_proj): Linear(in_features=32, out_features=8, bias=False)
(act_fn): SiLU()
)
)
(shared_expert): Qwen3NextMLP(
(gate_proj): Linear(in_features=8, out_features=32, bias=False)
(up_proj): Linear(in_features=8, out_features=32, bias=False)
(down_proj): Linear(in_features=32, out_features=8, bias=False)
(act_fn): SiLU()
)
(shared_expert_gate): Linear(in_features=8, out_features=1, bias=False)
)
(input_layernorm): Qwen3NextRMSNorm((8,), eps=1e-06)
(post_attention_layernorm): Qwen3NextRMSNorm((8,), eps=1e-06)
)
)
)
)
```
|
xnftraff/blockassist
|
xnftraff
| 2025-09-12T03:00:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sprightly freckled deer",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:05:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sprightly freckled deer
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
csukuangfj/WSYue-ASR
|
csukuangfj
| 2025-09-12T03:00:00Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-12T02:52:30Z |
---
license: apache-2.0
---
# WenetSpeech-Yue: A Large-scale Cantonese Speech Corpus with Multi-dimensional Annotation
<div>
<img width="800px" src="https://github.com/ASLP-lab/WenetSpeech-Yue/raw/main/figs/wenetspeech_yue.svg" />
</div>
## 📁 Project Tree
The structure of **WSYue-ASR** is organized as follows:
```
WSYue-ASR
├── sensevoice_small_yue/
│   ├── config.yaml
│   ├── configuration.json
│   └── model.pt
│
├── u2pp_conformer_yue/
│   ├── bpe.model
│   ├── lang_char.txt
│   ├── train.yaml
│   └── u2pp_conformer_yue.pt
│
├── whisper_medium_yue/
│   ├── train.yaml
│   └── whisper_medium_yue.py
│
├── .gitattributes
└── README.md
```
## ASR Leaderboard
<table border="0" cellspacing="0" cellpadding="6" style="border-collapse:collapse;">
<tr>
<th align="left" rowspan="2">Model</th>
<th align="center" rowspan="2">#Params (M)</th>
<th align="center" colspan="2">In-House</th>
<th align="center" colspan="5">Open-Source</th>
<th align="center" colspan="2">WSYue-eval</th>
</tr>
<tr>
<th align="center">Dialogue</th>
<th align="center">Reading</th>
<th align="center">yue</th>
<th align="center">HK</th>
<th align="center">MDCC</th>
<th align="center">Daily_Use</th>
<th align="center">Commands</th>
<th align="center">Short</th>
<th align="center">Long</th>
</tr>
<tr><td align="left" colspan="11"><b>w/o LLM</b></td></tr>
<tr>
<td align="left"><b>Conformer-Yueβ</b></td><td align="center">130</td><td align="center"><b>16.57</b></td><td align="center">7.82</td><td align="center">7.72</td><td align="center">11.42</td><td align="center">5.73</td><td align="center">5.73</td><td align="center">8.97</td><td align="center"><ins>5.05</ins></td><td align="center">8.89</td>
</tr>
<tr>
<td align="left">Paraformer</td><td align="center">220</td><td align="center">83.22</td><td align="center">51.97</td><td align="center">70.16</td><td align="center">68.49</td><td align="center">47.67</td><td align="center">79.31</td><td align="center">69.32</td><td align="center">73.64</td><td align="center">89.00</td>
</tr>
<tr>
<td align="left">SenseVoice-small</td><td align="center">234</td><td align="center">21.08</td><td align="center"><ins>6.52</ins></td><td align="center">8.05</td><td align="center"><b>7.34</b></td><td align="center">6.34</td><td align="center">5.74</td><td align="center"><ins>6.65</ins></td><td align="center">6.69</td><td align="center">9.95</td>
<tr>
<td align="left"><b>SenseVoice-s-Yueβ</b></td><td align="center">234</td><td align="center">19.19</td><td align="center">6.71</td><td align="center">6.87</td><td align="center">8.68</td><td align="center"><ins>5.43</ins></td><td align="center">5.24</td><td align="center">6.93</td><td align="center">5.23</td><td align="center">8.63</td>
</tr>
</tr>
<tr>
<td align="left">Dolphin-small</td><td align="center">372</td><td align="center">59.20</td><td align="center">7.38</td><td align="center">39.69</td><td align="center">51.29</td><td align="center">26.39</td><td align="center">7.21</td><td align="center">9.68</td><td align="center">32.32</td><td align="center">58.20</td>
</tr>
<tr>
<td align="left">TeleASR</td><td align="center">700</td><td align="center">37.18</td><td align="center">7.27</td><td align="center">7.02</td><td align="center"><ins>7.88</ins></td><td align="center">6.25</td><td align="center">8.02</td><td align="center"><b>5.98</b></td><td align="center">6.23</td><td align="center">11.33</td>
</tr>
<tr>
<td align="left">Whisper-medium</td><td align="center">769</td><td align="center">75.50</td><td align="center">68.69</td><td align="center">59.44</td><td align="center">62.50</td><td align="center">62.31</td><td align="center">64.41</td><td align="center">80.41</td><td align="center">80.82</td><td align="center">50.96</td>
</tr>
<tr>
<td align="left"><b>Whisper-m-Yueβ</b></td><td align="center">769</td><td align="center">18.69</td><td align="center">6.86</td><td align="center"><ins>6.86</ins></td><td align="center">11.03</td><td align="center">5.49</td><td align="center"><ins>4.70</ins></td><td align="center">8.51</td><td align="center"><ins>5.05</ins></td><td align="center"><ins>8.05</ins></td>
</tr>
<tr>
<td align="left">FireRedASR-AED-L</td><td align="center">1100</td><td align="center">73.70</td><td align="center">18.72</td><td align="center">43.93</td><td align="center">43.33</td><td align="center">34.53</td><td align="center">48.05</td><td align="center">49.99</td><td align="center">55.37</td><td align="center">50.26</td>
</tr>
<tr>
<td align="left">Whisper-large-v3</td><td align="center">1550</td><td align="center">45.09</td><td align="center">15.46</td><td align="center">12.85</td><td align="center">16.36</td><td align="center">14.63</td><td align="center">17.84</td><td align="center">20.70</td><td align="center">12.95</td><td align="center">26.86</td>
</tr>
<tr><td align="left" colspan="11"><b>w/ LLM</b></td></tr>
<tr>
<td align="left">Qwen2.5-Omni-3B</td><td align="center">3000</td><td align="center">72.01</td><td align="center">7.49</td><td align="center">12.59</td><td align="center">11.75</td><td align="center">38.91</td><td align="center">10.59</td><td align="center">25.78</td><td align="center">67.95</td><td align="center">88.46</td>
</tr>
<tr>
<td align="left">Kimi-Audio</td><td align="center">7000</td><td align="center">68.65</td><td align="center">24.34</td><td align="center">40.90</td><td align="center">38.72</td><td align="center">30.72</td><td align="center">44.29</td><td align="center">45.54</td><td align="center">50.86</td><td align="center">33.49</td>
</tr>
<tr>
<td align="left">FireRedASR-LLM-L</td><td align="center">8300</td><td align="center">73.70</td><td align="center">18.72</td><td align="center">43.93</td><td align="center">43.33</td><td align="center">34.53</td><td align="center">48.05</td><td align="center">49.99</td><td align="center">49.87</td><td align="center">45.92</td>
</tr>
<tr>
<td align="left"><b>Conformer-LLM-Yueβ</b></td><td align="center">4200</td><td align="center"><ins>17.22</ins></td><td align="center"><b>6.21</b></td><td align="center"><b>6.23</b></td><td align="center">9.52</td><td align="center"><b>4.35</b></td><td align="center"><b>4.57</b></td><td align="center">6.98</td><td align="center"><b>4.73</b></td><td align="center"><b>7.91</b></td>
</tr>
</table>
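If the scores above are character error rates (the usual metric for Cantonese ASR), a result set can be scored with the `jiwer` package; a minimal sketch with placeholder reference/hypothesis pairs:
```python
import jiwer

refs = ["今日天氣真好", "請打開車頭燈"]   # reference transcripts (placeholders)
hyps = ["今日天氣真係好", "請打開車頭燈"]  # model hypotheses (placeholders)

# jiwer.cer treats each string as a character sequence, which suits Cantonese text
print(f"CER: {100 * jiwer.cer(refs, hyps):.2f}%")
```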
## ASR Inference
### U2pp_Conformer_Yue
```bash
dir=u2pp_conformer_yue
decode_checkpoint=$dir/u2pp_conformer_yue.pt
test_set=path/to/test_set
test_result_dir=path/to/test_result_dir
python wenet/bin/recognize.py \
--gpu 0 \
--modes attention_rescoring \
--config $dir/train.yaml \
--test_data $test_set/data.list \
--checkpoint $decode_checkpoint \
--beam_size 10 \
--batch_size 32 \
--ctc_weight 0.5 \
--result_dir $test_result_dir \
--decoding_chunk_size -1
```
### Whisper_Medium_Yue
```bash
dir=whisper_medium_yue
decode_checkpoint=$dir/whisper_medium_yue.pt
test_set=path/to/test_set
test_result_dir=path/to/test_result_dir
python wenet/bin/recognize.py \
--gpu 0 \
--modes attention \
--config $dir/train.yaml \
--test_data $test_set/data.list \
--checkpoint $decode_checkpoint \
--beam_size 10 \
--batch_size 32 \
--blank_penalty 0.0 \
--ctc_weight 0.0 \
--reverse_weight 0.0 \
--result_dir $test_result_dir \
--decoding_chunk_size -1
```
### SenseVoice_Small_Yue
```python
from funasr import AutoModel

model_dir = "sensevoice_small_yue"
model = AutoModel(
    model=model_dir,
    device="cuda:0",
)
wav_path = "path/to/audio.wav"  # placeholder input file
res = model.generate(
    wav_path,
    cache={},
    language="yue",
    use_itn=True,
    batch_size=64,
)
```
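SenseVoice emits rich-transcription tags (language, emotion, ITN markers) in its raw output; funasr ships a helper to strip them. A short continuation of the snippet above:
```python
from funasr.utils.postprocess_utils import rich_transcription_postprocess

# res is the list returned by model.generate above
text = rich_transcription_postprocess(res[0]["text"])
print(text)
```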
|
e12morgan/q-FrozenLake-v1-4x4-noSlippery
|
e12morgan
| 2025-09-12T02:59:38Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-12T02:59:35Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="e12morgan/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
kanishka/opt-babylm2-rewritten-clean-spacy-earlystop_ablate_both_lenient-bpe_seed-211_1e-3
|
kanishka
| 2025-09-12T02:58:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"dataset:kanishka/babylm2-rewritten-clean-spacy_ablate_both_lenient",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-11T19:06:58Z |
---
library_name: transformers
tags:
- generated_from_trainer
datasets:
- kanishka/babylm2-rewritten-clean-spacy_ablate_both_lenient
metrics:
- accuracy
model-index:
- name: opt-babylm2-rewritten-clean-spacy-earlystop_ablate_both_lenient-bpe_seed-211_1e-3
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: kanishka/babylm2-rewritten-clean-spacy_ablate_both_lenient
type: kanishka/babylm2-rewritten-clean-spacy_ablate_both_lenient
metrics:
- name: Accuracy
type: accuracy
value: 0.4771635057212289
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-babylm2-rewritten-clean-spacy-earlystop_ablate_both_lenient-bpe_seed-211_1e-3
This model was trained from scratch on the kanishka/babylm2-rewritten-clean-spacy_ablate_both_lenient dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6990
- Accuracy: 0.4772
## Model description
More information needed
## Intended uses & limitations
More information needed
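The card provides no usage snippet; a minimal hedged sketch for loading this checkpoint with transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kanishka/opt-babylm2-rewritten-clean-spacy-earlystop_ablate_both_lenient-bpe_seed-211_1e-3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The child picked up the", return_tensors="pt")  # placeholder prompt
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```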
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 211
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 4.0433 | 1.0 | 2161 | 3.8560 | 0.3569 |
| 3.413 | 2.0 | 4322 | 3.3443 | 0.4047 |
| 3.0945 | 3.0 | 6483 | 3.1266 | 0.4266 |
| 2.9329 | 4.0 | 8644 | 3.0076 | 0.4388 |
| 2.8385 | 5.0 | 10805 | 2.9455 | 0.4453 |
| 2.7692 | 6.0 | 12966 | 2.9026 | 0.4495 |
| 2.7221 | 7.0 | 15127 | 2.8755 | 0.4530 |
| 2.6905 | 8.0 | 17288 | 2.8541 | 0.4549 |
| 2.6641 | 9.0 | 19449 | 2.8406 | 0.4566 |
| 2.643 | 10.0 | 21610 | 2.8297 | 0.4579 |
| 2.6261 | 11.0 | 23771 | 2.8187 | 0.4591 |
| 2.6092 | 12.0 | 25932 | 2.8093 | 0.4604 |
| 2.6168 | 13.0 | 28093 | 2.8090 | 0.4605 |
| 2.6057 | 14.0 | 30254 | 2.8045 | 0.4608 |
| 2.5992 | 15.0 | 32415 | 2.7957 | 0.4619 |
| 2.5646 | 16.0 | 34576 | 2.7651 | 0.4658 |
| 2.5128 | 17.0 | 36737 | 2.7388 | 0.4692 |
| 2.452 | 18.0 | 38898 | 2.7174 | 0.4728 |
| 2.387 | 19.0 | 41059 | 2.7005 | 0.4757 |
| 2.309 | 19.9911 | 43200 | 2.6990 | 0.4772 |
### Framework versions
- Transformers 4.48.0
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.1
|