Dataset schema (each record below lists the row's metadata, followed by the full `card` markdown):

| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-27 00:39:58 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 521 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-27 00:39:49 |
| card | string | length 11 to 1.01M |
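To work with the dump programmatically, here is a minimal sketch using the 🤗 `datasets` library; the dataset's repo ID is not shown in this extract, so the name below is a placeholder:

```python
from datasets import load_dataset

# Placeholder repo ID: substitute the actual dataset name for this dump
ds = load_dataset("your-username/hub-model-cards", split="train")

row = ds[0]
print(row["modelId"], row["author"], row["pipeline_tag"])
print(row["card"][:500])  # first 500 characters of the card markdown
```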
**Model:** `BootesVoid/cmb8lp83x0o1wlexpxh9m38pf_cmb8lybqc0o5alexpc2dzxyt0` · **Author:** BootesVoid · **Last modified:** 2025-05-29T00:30:27Z · **Created:** 2025-05-29T00:30:24Z · **Downloads:** 0 · **Likes:** 0 · **Library:** diffusers · **Pipeline:** text-to-image
**Tags:** diffusers, flux, lora, replicate, text-to-image, en, base_model:black-forest-labs/FLUX.1-dev, base_model:adapter:black-forest-labs/FLUX.1-dev, license:other, region:us

---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LULUBAE05
---
# Cmb8Lp83X0O1Wlexpxh9M38Pf_Cmb8Lybqc0O5Alexpc2Dzxyt0
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using the AI Toolkit trainer: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LULUBAE05` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "LULUBAE05",
    "lora_weights": "https://huggingface.co/BootesVoid/cmb8lp83x0o1wlexpxh9m38pf_cmb8lybqc0o5alexpc2dzxyt0/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

# Save each generated image to disk
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb8lp83x0o1wlexpxh9m38pf_cmb8lybqc0o5alexpc2dzxyt0', weight_name='lora.safetensors')
image = pipeline('LULUBAE05').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb8lp83x0o1wlexpxh9m38pf_cmb8lybqc0o5alexpc2dzxyt0/discussions) to add images that show off what you’ve made with this LoRA.
**Model:** `maximuspowers/cmd-r-vora-3` · **Author:** maximuspowers · **Last modified:** 2025-05-29T00:20:21Z · **Created:** 2025-05-29T00:20:19Z · **Downloads:** 0 · **Likes:** 0 · **Library:** transformers · **Pipeline:** image-text-to-text
**Tags:** transformers, vora, text-generation, multimodal, vision, lora, vision-language, pytorch, command-r, image-text-to-text, conversational, custom_code, en, dataset:Hon-Wong/VoRA-Recap-GLDv2-1.4M, arxiv:2503.20680, base_model:CohereLabs/c4ai-command-r7b-12-2024, base_model:adapter:CohereLabs/c4ai-command-r7b-12-2024, license:apache-2.0, autotrain_compatible, region:us

---
license: apache-2.0
base_model: CohereForAI/c4ai-command-r7b-12-2024
library_name: transformers
pipeline_tag: image-text-to-text
tags:
- multimodal
- vision
- lora
- vora
- vision-language
- pytorch
- transformers
- command-r
datasets:
- Hon-Wong/VoRA-Recap-GLDv2-1.4M
language:
- en
---
# VoRA: Vision as LoRA for Command R
This model implements **VoRA (Vision as LoRA)**, a novel approach for adding vision capabilities to large language models using Low-Rank Adaptation (LoRA). Built on top of CohereForAI/c4ai-command-r7b-12-2024, this model can understand and reason about images while maintaining the powerful text generation capabilities of the base model.
## Model Description
VoRA introduces the concept of "Vision as LoRA": treating visual information as an additional adaptation layer applied through LoRA rather than through traditional vision-language fusion methods. Key innovations:
- **Minimal Parameter Training**: Only vision embedding (~3.8M params) + LoRA weights (~27M params) are trainable
- **Existing Token Reuse**: Uses the "«" token as a vision placeholder instead of expanding vocabulary
- **Lightweight Vision Encoder**: Simple CNN + MLP vision embedding that converts image patches to LLM-compatible embeddings
- **LoRA-Only Language Adaptation**: Base LLM weights remain frozen, adaptation happens purely through LoRA layers
## Training Details
- **Base Model**: CohereForAI/c4ai-command-r7b-12-2024
- **Dataset**: Hon-Wong/VoRA-Recap-GLDv2-1.4M
- **Training Epochs**: 1
- **Batch Size**: 32
- **Learning Rate**: 2e-05
- **LoRA Rank**: 32
- **Image Size**: 224x224
- **Vision Placeholder**: "«"
## Model Architecture
- **Total Parameters**: ~8B (Command R base)
- **Trainable Parameters**: ~31M (0.39% of total)
- **LoRA Parameters**: ~27M
- **Vision Parameters**: ~3.8M
- **Image Resolution**: 224x224
- **Patch Size**: 14x14
## Usage
### Basic Usage
```python
import torch  # needed for torch.no_grad() below
from PIL import Image
from modeling_vora import VoRAModelForCausalLM  # custom code shipped with this repo
from processing_vora import VoRAProcessor
# Load model and processor
model = VoRAModelForCausalLM.from_pretrained("maximuspowers/cmd-r-vora-3")
processor = VoRAProcessor.from_pretrained("maximuspowers/cmd-r-vora-3")
# Load an image
image = Image.open("your_image.jpg")
# Process inputs
inputs = processor(
    text="« What do you see in this image?",
    images=image,
    return_tensors="pt"
)

# Generate response
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=512,
        do_sample=True,
        temperature=0.7,
        pad_token_id=processor.tokenizer.eos_token_id
    )
# Decode response
response = processor.decode(output_ids[0], skip_special_tokens=True)
print(response)
```
### Pipeline Usage (Future)
```python
# Coming soon: pipeline support
from transformers import pipeline
pipe = pipeline(
    "image-text-to-text",
    model="maximuspowers/cmd-r-vora-3",
    processor="maximuspowers/cmd-r-vora-3"
)
result = pipe({"image": "path/to/image.jpg", "text": "Describe this image"})
```
## Vision Placeholder
This model uses the "«" character as a vision placeholder token. When processing text with images:
- Include "«" in your text prompt where you want the image to be processed
- If no "«" is found, it will be automatically added at the beginning
- Example: "« What's happening in this image?"
## Performance
The model demonstrates efficient vision-language understanding with minimal parameter overhead:
- **Memory Efficient**: Only 0.39% of parameters are trainable
- **Fast Training**: Converges quickly due to frozen base model
- **Flexible**: Can be easily adapted to different vision tasks
## Technical Implementation
Based on the VoRA paper "Vision as LoRA" (arXiv:2503.20680), this implementation includes:
1. **Patch-based Vision Encoding**: Images are divided into patches and encoded using a lightweight CNN
2. **Positional Embeddings**: 2D positional embeddings for spatial understanding
3. **RMS Normalization**: Stable normalization for vision features
4. **LoRA Integration**: Efficient adaptation of attention and MLP layers
5. **Token Replacement**: Vision embeddings replace placeholder tokens during forward pass
## Limitations
- Currently optimized for single-image understanding
- Vision placeholder must be included in text prompts
- Requires specific processor for proper image preprocessing
## Citation
If you use this model, please cite the original VoRA paper:
```bibtex
@article{vora2025,
  title={Vision as LoRA},
  author={[Authors]},
  journal={arXiv preprint arXiv:2503.20680},
  year={2025}
}
```
## License
This model is released under the Apache 2.0 License.
**Model:** `mradermacher/L3-MOE-4x8B-Dark-Planet-Rising-25B-i1-GGUF` · **Author:** mradermacher · **Last modified:** 2025-05-29T00:17:42Z · **Created:** 2024-12-16T06:56:22Z · **Downloads:** 95 · **Likes:** 0 · **Library:** transformers · **Pipeline:** null
**Tags:** transformers, gguf, mergekit, moe, mixture of experts, merge, llama-3, llama3, en, base_model:DavidAU/L3-MOE-4x8B-Dark-Planet-Rising-25B, base_model:quantized:DavidAU/L3-MOE-4x8B-Dark-Planet-Rising-25B, endpoints_compatible, region:us, imatrix, conversational

---
base_model: DavidAU/L3-MOE-4x8B-Dark-Planet-Rising-25B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- moe
- mixture of experts
- merge
- llama-3
- llama3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/DavidAU/L3-MOE-4x8B-Dark-Planet-Rising-25B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rising-25B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
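As a concrete starting point, here is a minimal sketch that downloads one of the single-file quants below and runs it with `llama-cpp-python`; the file name is taken from the table, while context size, prompt, and sampling settings are illustrative:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download a single-file quant (multi-part files must be concatenated first)
path = hf_hub_download(
    repo_id="mradermacher/L3-MOE-4x8B-Dark-Planet-Rising-25B-i1-GGUF",
    filename="L3-MOE-4x8B-Dark-Planet-Rising-25B.i1-Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm("Write a one-line story about a dark planet.", max_tokens=64)
print(out["choices"][0]["text"])
```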
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rising-25B-i1-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rising-25B.i1-IQ1_S.gguf) | i1-IQ1_S | 5.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rising-25B-i1-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rising-25B.i1-IQ1_M.gguf) | i1-IQ1_M | 6.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rising-25B-i1-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rising-25B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rising-25B-i1-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rising-25B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rising-25B-i1-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rising-25B.i1-IQ2_S.gguf) | i1-IQ2_S | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rising-25B-i1-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rising-25B.i1-IQ2_M.gguf) | i1-IQ2_M | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rising-25B-i1-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rising-25B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 8.8 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rising-25B-i1-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rising-25B.i1-Q2_K.gguf) | i1-Q2_K | 9.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rising-25B-i1-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rising-25B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rising-25B-i1-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rising-25B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rising-25B-i1-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rising-25B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 11.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rising-25B-i1-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rising-25B.i1-IQ3_S.gguf) | i1-IQ3_S | 11.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rising-25B-i1-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rising-25B.i1-IQ3_M.gguf) | i1-IQ3_M | 11.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rising-25B-i1-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rising-25B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 12.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rising-25B-i1-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rising-25B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 13.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rising-25B-i1-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rising-25B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 13.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rising-25B-i1-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rising-25B.i1-Q4_0.gguf) | i1-Q4_0 | 14.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rising-25B-i1-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rising-25B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 14.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rising-25B-i1-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rising-25B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 15.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rising-25B-i1-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rising-25B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rising-25B-i1-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rising-25B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rising-25B-i1-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rising-25B.i1-Q6_K.gguf) | i1-Q6_K | 20.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
**Model:** `RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv12-layer-2-gguf` · **Author:** RichardErkhov · **Last modified:** 2025-05-29T00:13:12Z · **Created:** 2025-05-28T22:44:48Z · **Downloads:** 0 · **Likes:** 0 · **Library:** null · **Pipeline:** null
**Tags:** gguf, endpoints_compatible, region:us

Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
GPT2XL_RLLMv12-layer-2 - GGUF
- Model creator: https://huggingface.co/migueldeguzmandev/
- Original model: https://huggingface.co/migueldeguzmandev/GPT2XL_RLLMv12-layer-2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [GPT2XL_RLLMv12-layer-2.Q2_K.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv12-layer-2-gguf/blob/main/GPT2XL_RLLMv12-layer-2.Q2_K.gguf) | Q2_K | 0.8GB |
| [GPT2XL_RLLMv12-layer-2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv12-layer-2-gguf/blob/main/GPT2XL_RLLMv12-layer-2.IQ3_XS.gguf) | IQ3_XS | 0.8GB |
| [GPT2XL_RLLMv12-layer-2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv12-layer-2-gguf/blob/main/GPT2XL_RLLMv12-layer-2.IQ3_S.gguf) | IQ3_S | 0.8GB |
| [GPT2XL_RLLMv12-layer-2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv12-layer-2-gguf/blob/main/GPT2XL_RLLMv12-layer-2.Q3_K_S.gguf) | Q3_K_S | 0.8GB |
| [GPT2XL_RLLMv12-layer-2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv12-layer-2-gguf/blob/main/GPT2XL_RLLMv12-layer-2.IQ3_M.gguf) | IQ3_M | 0.87GB |
| [GPT2XL_RLLMv12-layer-2.Q3_K.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv12-layer-2-gguf/blob/main/GPT2XL_RLLMv12-layer-2.Q3_K.gguf) | Q3_K | 0.92GB |
| [GPT2XL_RLLMv12-layer-2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv12-layer-2-gguf/blob/main/GPT2XL_RLLMv12-layer-2.Q3_K_M.gguf) | Q3_K_M | 0.92GB |
| [GPT2XL_RLLMv12-layer-2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv12-layer-2-gguf/blob/main/GPT2XL_RLLMv12-layer-2.Q3_K_L.gguf) | Q3_K_L | 0.99GB |
| [GPT2XL_RLLMv12-layer-2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv12-layer-2-gguf/blob/main/GPT2XL_RLLMv12-layer-2.IQ4_XS.gguf) | IQ4_XS | 0.86GB |
| [GPT2XL_RLLMv12-layer-2.Q4_0.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv12-layer-2-gguf/blob/main/GPT2XL_RLLMv12-layer-2.Q4_0.gguf) | Q4_0 | 0.86GB |
| [GPT2XL_RLLMv12-layer-2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv12-layer-2-gguf/blob/main/GPT2XL_RLLMv12-layer-2.IQ4_NL.gguf) | IQ4_NL | 0.87GB |
| [GPT2XL_RLLMv12-layer-2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv12-layer-2-gguf/blob/main/GPT2XL_RLLMv12-layer-2.Q4_K_S.gguf) | Q4_K_S | 0.99GB |
| [GPT2XL_RLLMv12-layer-2.Q4_K.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv12-layer-2-gguf/blob/main/GPT2XL_RLLMv12-layer-2.Q4_K.gguf) | Q4_K | 1.06GB |
| [GPT2XL_RLLMv12-layer-2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv12-layer-2-gguf/blob/main/GPT2XL_RLLMv12-layer-2.Q4_K_M.gguf) | Q4_K_M | 1.06GB |
| [GPT2XL_RLLMv12-layer-2.Q4_1.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv12-layer-2-gguf/blob/main/GPT2XL_RLLMv12-layer-2.Q4_1.gguf) | Q4_1 | 0.95GB |
| [GPT2XL_RLLMv12-layer-2.Q5_0.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv12-layer-2-gguf/blob/main/GPT2XL_RLLMv12-layer-2.Q5_0.gguf) | Q5_0 | 1.04GB |
| [GPT2XL_RLLMv12-layer-2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv12-layer-2-gguf/blob/main/GPT2XL_RLLMv12-layer-2.Q5_K_S.gguf) | Q5_K_S | 1.09GB |
| [GPT2XL_RLLMv12-layer-2.Q5_K.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv12-layer-2-gguf/blob/main/GPT2XL_RLLMv12-layer-2.Q5_K.gguf) | Q5_K | 1.23GB |
| [GPT2XL_RLLMv12-layer-2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv12-layer-2-gguf/blob/main/GPT2XL_RLLMv12-layer-2.Q5_K_M.gguf) | Q5_K_M | 1.23GB |
| [GPT2XL_RLLMv12-layer-2.Q5_1.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv12-layer-2-gguf/blob/main/GPT2XL_RLLMv12-layer-2.Q5_1.gguf) | Q5_1 | 1.12GB |
| [GPT2XL_RLLMv12-layer-2.Q6_K.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv12-layer-2-gguf/blob/main/GPT2XL_RLLMv12-layer-2.Q6_K.gguf) | Q6_K | 1.44GB |
| [GPT2XL_RLLMv12-layer-2.Q8_0.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv12-layer-2-gguf/blob/main/GPT2XL_RLLMv12-layer-2.Q8_0.gguf) | Q8_0 | 1.55GB |
Original model description:
---
license: mit
---
[More info? see RLLM virtual map!](https://whimsical.com/rllm-visual-map-QQvFHNr6aVDdXRUnyb5NCu)
**Model:** `winnieyangwannan/Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition_last_layer_18_2_song_ratio_3_epoch_49` · **Author:** winnieyangwannan · **Last modified:** 2025-05-29T00:02:10Z · **Created:** 2025-05-28T21:30:48Z · **Downloads:** 0 · **Likes:** 0 · **Library:** transformers · **Pipeline:** text-generation
**Tags:** transformers, safetensors, llama, text-generation, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us

---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
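Until the card is filled in, here is a minimal sketch assuming standard 🤗 transformers text-generation usage for this checkpoint; the prompt and generation settings are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "winnieyangwannan/Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition_last_layer_18_2_song_ratio_3_epoch_49"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```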
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**Model:** `winnieyangwannan/Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition_last_layer_24_2_song_ratio_3_epoch_19` · **Author:** winnieyangwannan · **Last modified:** 2025-05-28T23:55:17Z · **Created:** 2025-05-28T21:05:29Z · **Downloads:** 0 · **Likes:** 0 · **Library:** transformers · **Pipeline:** text-generation
**Tags:** transformers, safetensors, llama, text-generation, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us

---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**Model:** `winnieyangwannan/Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition_last_layer_18_2_song_ratio_3_epoch_9` · **Author:** winnieyangwannan · **Last modified:** 2025-05-28T23:53:20Z · **Created:** 2025-05-28T21:21:42Z · **Downloads:** 0 · **Likes:** 0 · **Library:** transformers · **Pipeline:** text-generation
**Tags:** transformers, safetensors, llama, text-generation, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us

---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**Model:** `zhangchenxu/TinyV-1.5B` · **Author:** zhangchenxu · **Last modified:** 2025-05-28T23:17:42Z · **Created:** 2025-04-13T10:32:33Z · **Downloads:** 190 · **Likes:** 0 · **Library:** transformers · **Pipeline:** text-generation
**Tags:** transformers, safetensors, qwen2, text-generation, llama-factory, full, generated_from_trainer, conversational, arxiv:2505.14625, base_model:Qwen/Qwen2.5-1.5B-Instruct, base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us

---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: Qwen2.5-1.5B-Instruct-SFT-BigmathV_Simple_Balanced-LR1.0e-5-EPOCHS2
results: []
---
[**TinyV**](https://arxiv.org/abs/2505.14625) is a reward system for efficient RL post-training that detects false negatives in current rule-based verifiers and provides more accurate reward signals via a small LLM during RL training. Experiments show that TinyV incurs only 6% additional computational cost while significantly increasing both RL efficiency and final model performance.
- 📄 [Technical Report](https://arxiv.org/abs/2505.14625) - Includes the false-negative analysis and theoretical insights behind TinyV
- 💾 [Github Repo](https://github.com/uw-nsl/TinyV) - Access the complete pipeline for more efficient RL training via TinyV
- 🤗 [HF Collection](https://huggingface.co/collections/zhangchenxu/tinyv-682d5840c7e309217df625df) - Training data, benchmarks, and model artifacts
This model is a fine-tuned version of Qwen/Qwen2.5-1.5B-Instruct on the [zhangchenxu/TinyV_Training_Data_Balanced](https://huggingface.co/datasets/zhangchenxu/TinyV_Training_Data_Balanced) dataset.
### Overview

### How to use it?
Please refer to the codebase: [https://github.com/uw-nsl/TinyV](https://github.com/uw-nsl/TinyV) for details.
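In the absence of an official snippet here, the following is a minimal inference sketch assuming plain 🤗 transformers chat usage; the actual verification prompt template lives in the TinyV repo, so the query below is only a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("zhangchenxu/TinyV-1.5B")
model = AutoModelForCausalLM.from_pretrained(
    "zhangchenxu/TinyV-1.5B", torch_dtype=torch.bfloat16, device_map="auto"
)

# Placeholder verification query; use the real prompt template from the TinyV repo
messages = [{"role": "user", "content": "Question: 2+2? Gold answer: 4. Model answer: four. Equivalent?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```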
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- total_eval_batch_size: 64
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08); no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.0
- Datasets 3.2.0
- Tokenizers 0.21.0
**Model:** `arielcerdap/modernbert-base-multiclass-disfluency` · **Author:** arielcerdap · **Last modified:** 2025-05-28T22:42:43Z · **Created:** 2025-05-28T22:31:46Z · **Downloads:** 0 · **Likes:** 0 · **Library:** transformers · **Pipeline:** token-classification
**Tags:** transformers, tensorboard, safetensors, modernbert, token-classification, disfluency-detection, speech-pathology, en, dataset:disfluency-dataset, model-index, autotrain_compatible, endpoints_compatible, region:us

---
language: en
tags:
- disfluency-detection
- token-classification
- modernbert
- speech-pathology
datasets:
- disfluency-dataset
metrics:
- accuracy
- f1
model-index:
- name: ModernBERT Multiclass Disfluency Detection
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: Disfluency Dataset
type: custom
config: default
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.9525
- name: F1
type: f1
value: 0.9027
library_name: transformers
---
# ModernBERT Multiclass Disfluency Detection
This model is fine-tuned from [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) for multi-class disfluency detection in spoken language.
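The card does not include a usage snippet; here is a minimal sketch assuming the standard 🤗 token-classification pipeline, with the aggregation setting chosen for illustration:

```python
from transformers import pipeline

classifier = pipeline(
    "token-classification",
    model="arielcerdap/modernbert-base-multiclass-disfluency",
    aggregation_strategy="simple",  # merge sub-word tokens into word-level spans
)

print(classifier("I, um, I went to the to the store yesterday"))
```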
## Training Hyperparameters
The following hyperparameters were used during training:
- Learning rate: 2e-05
- Batch size: 32
- Number of epochs: 20
- Optimizer: adamw_8bit
- LR scheduler type: cosine
- Warmup ratio: 0.1
**Model:** `Moryjj/parst5_3blocks_4` · **Author:** Moryjj · **Last modified:** 2025-05-28T21:56:53Z · **Created:** 2025-05-28T21:56:20Z · **Downloads:** 0 · **Likes:** 0 · **Library:** transformers · **Pipeline:** text2text-generation
**Tags:** transformers, safetensors, t5, text2text-generation, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us

---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**Model:** `morturr/Llama-2-7b-hf-amazon-2025-05-28` · **Author:** morturr · **Last modified:** 2025-05-28T21:43:48Z · **Created:** 2025-05-28T13:44:32Z · **Downloads:** 0 · **Likes:** 0 · **Library:** peft · **Pipeline:** null
**Tags:** peft, safetensors, trl, sft, generated_from_trainer, base_model:meta-llama/Llama-2-7b-hf, base_model:adapter:meta-llama/Llama-2-7b-hf, license:llama2, region:us

---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-amazon-2025-05-28
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-amazon-2025-05-28
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
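For loading, here is a minimal sketch assuming standard PEFT adapter usage on top of the gated Llama-2 base weights; dtype, device placement, and the prompt are illustrative:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "morturr/Llama-2-7b-hf-amazon-2025-05-28")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("This product is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```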
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08); no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
**Model:** `achow3250/damien` · **Author:** achow3250 · **Last modified:** 2025-05-28T21:33:35Z · **Created:** 2025-05-28T21:15:44Z · **Downloads:** 0 · **Likes:** 0 · **Library:** diffusers · **Pipeline:** text-to-image
**Tags:** diffusers, flux, lora, replicate, text-to-image, en, base_model:black-forest-labs/FLUX.1-dev, base_model:adapter:black-forest-labs/FLUX.1-dev, license:other, region:us

---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Damien
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using the AI Toolkit trainer: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "TOK",
    "lora_weights": "https://huggingface.co/achow3250/damien/resolve/main/lora.safetensors"
}
output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('achow3250/damien', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/achow3250/damien/discussions) to add images that show off what you’ve made with this LoRA.
**Model:** `unsloth/Qwen2.5-Omni-3B-GGUF` · **Author:** unsloth · **Last modified:** 2025-05-28T20:03:59Z · **Created:** 2025-05-28T19:54:46Z · **Downloads:** 0 · **Likes:** 0 · **Library:** transformers · **Pipeline:** any-to-any
**Tags:** transformers, gguf, qwen2_5_omni, multimodal, unsloth, any-to-any, en, arxiv:2503.20215, base_model:Qwen/Qwen2.5-Omni-3B, base_model:quantized:Qwen/Qwen2.5-Omni-3B, license:other, endpoints_compatible, region:us, conversational

---
base_model:
- Qwen/Qwen2.5-Omni-3B
license: other
license_name: qwen-research
license_link: LICENSE
language:
- en
tags:
- multimodal
- unsloth
library_name: transformers
pipeline_tag: any-to-any
---
<div>
<p style="margin-top: 0;margin-bottom: 0;">
<em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/unslothai/unsloth/">
<img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
</a>
<a href="https://discord.gg/unsloth">
<img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
</a>
<a href="https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune">
<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
</a>
</div>
</div>
# Qwen2.5-Omni
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Overview
### Introduction
Qwen2.5-Omni is an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner.
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/qwen_omni.png" width="80%"/>
</p>
### Key Features
* **Omni and Novel Architecture**: We propose the Thinker-Talker architecture, an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. We also propose a novel position embedding, named TMRoPE (Time-aligned Multimodal RoPE), to synchronize the timestamps of video inputs with audio.
* **Real-Time Voice and Video Chat**: Architecture designed for fully real-time interactions, supporting chunked input and immediate output.
* **Natural and Robust Speech Generation**: Surpassing many existing streaming and non-streaming alternatives, demonstrating superior robustness and naturalness in speech generation.
* **Strong Performance Across Modalities**: Exhibiting exceptional performance across all modalities when benchmarked against similarly sized single-modality models. Qwen2.5-Omni outperforms the similarly sized Qwen2-Audio in audio capabilities and achieves comparable performance to Qwen2.5-VL-7B.
* **Excellent End-to-End Speech Instruction Following**: Qwen2.5-Omni shows performance in end-to-end speech instruction following that rivals its effectiveness with text inputs, evidenced by benchmarks such as MMLU and GSM8K.
### Model Architecture
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/overview.png" width="80%"/>
</p>
### Performance
We conducted a comprehensive evaluation of Qwen2.5-Omni, which demonstrates strong performance across all modalities when compared to similarly sized single-modality models such as Qwen2.5-VL-7B and Qwen2-Audio, as well as closed-source models like Gemini-1.5-Pro. In tasks requiring the integration of multiple modalities, such as OmniBench, Qwen2.5-Omni achieves state-of-the-art performance. Furthermore, in single-modality tasks, it excels in areas including speech recognition (Common Voice), translation (CoVoST2), audio understanding (MMAU), image reasoning (MMMU, MMStar), video understanding (MVBench), and speech generation (Seed-tts-eval and subjective naturalness).
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/bar.png" width="80%"/>
</p>
<details>
<summary>Multimodality -> Text</summary>
<table class="tg"><thead>
<tr>
<th class="tg-0lax">Datasets</th>
<th class="tg-0lax">Model</th>
<th class="tg-0lax">Performance</th>
</tr></thead>
<tbody>
<tr>
<td class="tg-0lax" rowspan="10">OmniBench<br>Speech | Sound Event | Music | Avg</td>
<td class="tg-0lax">Gemini-1.5-Pro</td>
<td class="tg-0lax">42.67%|42.26%|46.23%|42.91%</td>
</tr>
<tr>
<td class="tg-0lax">MIO-Instruct</td>
<td class="tg-0lax">36.96%|33.58%|11.32%|33.80%</td>
</tr>
<tr>
<td class="tg-0lax">AnyGPT (7B)</td>
<td class="tg-0lax">17.77%|20.75%|13.21%|18.04%</td>
</tr>
<tr>
<td class="tg-0lax">video-SALMONN</td>
<td class="tg-0lax">34.11%|31.70%|<strong>56.60%</strong>|35.64%</td>
</tr>
<tr>
<td class="tg-0lax">UnifiedIO2-xlarge</td>
<td class="tg-0lax">39.56%|36.98%|29.25%|38.00%</td>
</tr>
<tr>
<td class="tg-0lax">UnifiedIO2-xxlarge</td>
<td class="tg-0lax">34.24%|36.98%|24.53%|33.98%</td>
</tr>
<tr>
<td class="tg-0lax">MiniCPM-o</td>
<td class="tg-0lax">-|-|-|40.50%</td>
</tr>
<tr>
<td class="tg-0lax">Baichuan-Omni-1.5</td>
<td class="tg-0lax">-|-|-|42.90%</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax">52.14%|52.08%|52.83%|52.19%</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax"><strong>55.25%</strong>|<strong>60.00%</strong>|52.83%|<strong>56.13%</strong></td>
</tr>
</tbody></table>
</details>
<details>
<summary>Audio -> Text</summary>
<table class="tg"><thead>
<tr>
<th class="tg-0lax">Datasets</th>
<th class="tg-0lax">Model</th>
<th class="tg-0lax">Performance</th>
</tr></thead>
<tbody>
<tr>
<td class="tg-9j4x" colspan="3">ASR</td>
</tr>
<tr>
<td class="tg-0lax" rowspan="12">Librispeech<br>dev-clean | dev other | test-clean | test-other</td>
<td class="tg-0lax">SALMONN</td>
<td class="tg-0lax">-|-|2.1|4.9</td>
</tr>
<tr>
<td class="tg-0lax">SpeechVerse</td>
<td class="tg-0lax">-|-|2.1|4.4</td>
</tr>
<tr>
<td class="tg-0lax">Whisper-large-v3</td>
<td class="tg-0lax">-|-|1.8|3.6</td>
</tr>
<tr>
<td class="tg-0lax">Llama-3-8B</td>
<td class="tg-0lax">-|-|-|3.4</td>
</tr>
<tr>
<td class="tg-0lax">Llama-3-70B</td>
<td class="tg-0lax">-|-|-|3.1</td>
</tr>
<tr>
<td class="tg-0lax">Seed-ASR-Multilingual</td>
<td class="tg-0lax">-|-|<strong>1.6</strong>|<strong>2.8</strong></td>
</tr>
<tr>
<td class="tg-0lax">MiniCPM-o</td>
<td class="tg-0lax">-|-|1.7|-</td>
</tr>
<tr>
<td class="tg-0lax">MinMo</td>
<td class="tg-0lax">-|-|1.7|3.9</td>
</tr>
<tr>
<td class="tg-0lax">Qwen-Audio</td>
<td class="tg-0lax">1.8|4.0|2.0|4.2</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2-Audio</td>
<td class="tg-0lax"><strong>1.3</strong>|<strong>3.4</strong>|<strong>1.6</strong>|3.6</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax">2.0|4.1|2.2|4.5</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax">1.6|3.5|1.8|3.4</td>
</tr>
<tr>
<td class="tg-0lax" rowspan="5">Common Voice 15<br>en | zh | yue | fr</td>
<td class="tg-0lax">Whisper-large-v3</td>
<td class="tg-0lax">9.3|12.8|10.9|10.8</td>
</tr>
<tr>
<td class="tg-0lax">MinMo</td>
<td class="tg-0lax">7.9|6.3|6.4|8.5</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2-Audio</td>
<td class="tg-0lax">8.6|6.9|<strong>5.9</strong>|9.6</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax">9.1|6.0|11.6|9.6</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax"><strong>7.6</strong>|<strong>5.2</strong>|7.3|<strong>7.5</strong></td>
</tr>
<tr>
<td class="tg-0lax" rowspan="8">Fleurs<br>zh | en</td>
<td class="tg-0lax">Whisper-large-v3</td>
<td class="tg-0lax">7.7|4.1</td>
</tr>
<tr>
<td class="tg-0lax">Seed-ASR-Multilingual</td>
<td class="tg-0lax">-|<strong>3.4</strong></td>
</tr>
<tr>
<td class="tg-0lax">Megrez-3B-Omni</td>
<td class="tg-0lax">10.8|-</td>
</tr>
<tr>
<td class="tg-0lax">MiniCPM-o</td>
<td class="tg-0lax">4.4|-</td>
</tr>
<tr>
<td class="tg-0lax">MinMo</td>
<td class="tg-0lax">3.0|3.8</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2-Audio</td>
<td class="tg-0lax">7.5|-</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax">3.2|5.4</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax"><strong>3.0</strong>|4.1</td>
</tr>
<tr>
<td class="tg-0lax" rowspan="6">Wenetspeech<br>test-net | test-meeting</td>
<td class="tg-0lax">Seed-ASR-Chinese</td>
<td class="tg-0lax"><strong>4.7|5.7</strong></td>
</tr>
<tr>
<td class="tg-0lax">Megrez-3B-Omni</td>
<td class="tg-0lax">-|16.4</td>
</tr>
<tr>
<td class="tg-0lax">MiniCPM-o</td>
<td class="tg-0lax">6.9|-</td>
</tr>
<tr>
<td class="tg-0lax">MinMo</td>
<td class="tg-0lax">6.8|7.4</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax">6.3|8.1</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax">5.9|7.7</td>
</tr>
<tr>
<td class="tg-0lax" rowspan="4">Voxpopuli-V1.0-en</td>
<td class="tg-0lax">Llama-3-8B</td>
<td class="tg-0lax">6.2</td>
</tr>
<tr>
<td class="tg-0lax">Llama-3-70B</td>
<td class="tg-0lax"><strong>5.7</strong></td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax">6.6</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax">5.8</td>
</tr>
<tr>
<td class="tg-9j4x" colspan="3">S2TT</td>
</tr>
<tr>
<td class="tg-0lax" rowspan="9">CoVoST2<br>en-de | de-en | en-zh | zh-en</td>
<td class="tg-0lax">SALMONN</td>
<td class="tg-0lax">18.6|-|33.1|-</td>
</tr>
<tr>
<td class="tg-0lax">SpeechLLaMA</td>
<td class="tg-0lax">-|27.1|-|12.3</td>
</tr>
<tr>
<td class="tg-0lax">BLSP</td>
<td class="tg-0lax">14.1|-|-|-</td>
</tr>
<tr>
<td class="tg-0lax">MiniCPM-o</td>
<td class="tg-0lax">-|-|<strong>48.2</strong>|27.2</td>
</tr>
<tr>
<td class="tg-0lax">MinMo</td>
<td class="tg-0lax">-|<strong>39.9</strong>|46.7|26.0</td>
</tr>
<tr>
<td class="tg-0lax">Qwen-Audio</td>
<td class="tg-0lax">25.1|33.9|41.5|15.7</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2-Audio</td>
<td class="tg-0lax">29.9|35.2|45.2|24.4</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax">28.3|38.1|41.4|26.6</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax"><strong>30.2</strong>|37.7|41.4|<strong>29.4</strong></td>
</tr>
<tr>
<td class="tg-9j4x" colspan="3">SER</td>
</tr>
<tr>
<td class="tg-0lax" rowspan="6">Meld</td>
<td class="tg-0lax">WavLM-large</td>
<td class="tg-0lax">0.542</td>
</tr>
<tr>
<td class="tg-0lax">MiniCPM-o</td>
<td class="tg-0lax">0.524</td>
</tr>
<tr>
<td class="tg-0lax">Qwen-Audio</td>
<td class="tg-0lax">0.557</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2-Audio</td>
<td class="tg-0lax">0.553</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax">0.558</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax"><strong>0.570</strong></td>
</tr>
<tr>
<td class="tg-9j4x" colspan="3">VSC</td>
</tr>
<tr>
<td class="tg-0lax" rowspan="6">VocalSound</td>
<td class="tg-0lax">CLAP</td>
<td class="tg-0lax">0.495</td>
</tr>
<tr>
<td class="tg-0lax">Pengi</td>
<td class="tg-0lax">0.604</td>
</tr>
<tr>
<td class="tg-0lax">Qwen-Audio</td>
<td class="tg-0lax">0.929</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2-Audio</td>
<td class="tg-0lax"><strong>0.939</strong></td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax">0.936</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax"><strong>0.939</strong></td>
</tr>
<tr>
<td class="tg-9j4x" colspan="3">Music</td>
</tr>
<tr>
<td class="tg-0lax" rowspan="3">GiantSteps Tempo</td>
<td class="tg-0lax">Llark-7B</td>
<td class="tg-0lax">0.86</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax"><strong>0.88</strong></td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax"><strong>0.88</strong></td>
</tr>
<tr>
<td class="tg-0lax" rowspan="3">MusicCaps</td>
<td class="tg-0lax">LP-MusicCaps</td>
<td class="tg-0lax">0.291|0.149|0.089|<strong>0.061</strong>|0.129|0.130</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax">0.325|<strong>0.163</strong>|<strong>0.093</strong>|0.057|<strong>0.132</strong>|<strong>0.229</strong></td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax"><strong>0.328</strong>|0.162|0.090|0.055|0.127|0.225</td>
</tr>
<tr>
<td class="tg-9j4x" colspan="3">Audio Reasoning</td>
</tr>
<tr>
<td class="tg-0lax" rowspan="4">MMAU<br>Sound | Music | Speech | Avg</td>
<td class="tg-0lax">Gemini-Pro-V1.5</td>
<td class="tg-0lax">56.75|49.40|58.55|54.90</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2-Audio</td>
<td class="tg-0lax">54.95|50.98|42.04|49.20</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax"><strong>70.27</strong>|60.48|59.16|63.30</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax">67.87|<strong>69.16|59.76|65.60</strong></td>
</tr>
<tr>
<td class="tg-9j4x" colspan="3">Voice Chatting</td>
</tr>
<tr>
<td class="tg-0lax" rowspan="9">VoiceBench<br>AlpacaEval | CommonEval | SD-QA | MMSU</td>
<td class="tg-0lax">Ultravox-v0.4.1-LLaMA-3.1-8B</td>
<td class="tg-0lax"><strong>4.55</strong>|3.90|53.35|47.17</td>
</tr>
<tr>
<td class="tg-0lax">MERaLiON</td>
<td class="tg-0lax">4.50|3.77|55.06|34.95</td>
</tr>
<tr>
<td class="tg-0lax">Megrez-3B-Omni</td>
<td class="tg-0lax">3.50|2.95|25.95|27.03</td>
</tr>
<tr>
<td class="tg-0lax">Lyra-Base</td>
<td class="tg-0lax">3.85|3.50|38.25|49.74</td>
</tr>
<tr>
<td class="tg-0lax">MiniCPM-o</td>
<td class="tg-0lax">4.42|<strong>4.15</strong>|50.72|54.78</td>
</tr>
<tr>
<td class="tg-0lax">Baichuan-Omni-1.5</td>
<td class="tg-0lax">4.50|4.05|43.40|57.25</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2-Audio</td>
<td class="tg-0lax">3.74|3.43|35.71|35.72</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax">4.32|4.00|49.37|50.23</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax">4.49|3.93|<strong>55.71</strong>|<strong>61.32</strong></td>
</tr>
<tr>
<td class="tg-0lax" rowspan="9">VoiceBench<br>OpenBookQA | IFEval | AdvBench | Avg</td>
<td class="tg-0lax">Ultravox-v0.4.1-LLaMA-3.1-8B</td>
<td class="tg-0lax">65.27|<strong>66.88</strong>|98.46|71.45</td>
</tr>
<tr>
<td class="tg-0lax">MERaLiON</td>
<td class="tg-0lax">27.23|62.93|94.81|62.91</td>
</tr>
<tr>
<td class="tg-0lax">Megrez-3B-Omni</td>
<td class="tg-0lax">28.35|25.71|87.69|46.25</td>
</tr>
<tr>
<td class="tg-0lax">Lyra-Base</td>
<td class="tg-0lax">72.75|36.28|59.62|57.66</td>
</tr>
<tr>
<td class="tg-0lax">MiniCPM-o</td>
<td class="tg-0lax">78.02|49.25|97.69|71.69</td>
</tr>
<tr>
<td class="tg-0lax">Baichuan-Omni-1.5</td>
<td class="tg-0lax">74.51|54.54|97.31|71.14</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2-Audio</td>
<td class="tg-0lax">49.45|26.33|96.73|55.35</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B</td>
<td class="tg-0lax">74.73|42.10|98.85|68.81</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B</td>
<td class="tg-0lax"><strong>81.10</strong>|52.87|<strong>99.42</strong>|<strong>74.12</strong></td>
</tr>
</tbody></table>
</details>
<details>
<summary>Image -> Text</summary>
| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Other Best | Qwen2.5-VL-7B | GPT-4o-mini |
|--------------------------------|--------------|------------|------------|---------------|-------------|
| MMMU<sub>val</sub> | 59.2 | 53.1 | 53.9 | 58.6 | **60.0** |
| MMMU-Pro<sub>overall</sub> | 36.6 | 29.7 | - | **38.3** | 37.6 |
| MathVista<sub>testmini</sub> | 67.9 | 59.4 | **71.9** | 68.2 | 52.5 |
| MathVision<sub>full</sub> | 25.0 | 20.8 | 23.1 | **25.1** | - |
| MMBench-V1.1-EN<sub>test</sub> | 81.8 | 77.8 | 80.5 | **82.6** | 76.0 |
| MMVet<sub>turbo</sub> | 66.8 | 62.1 | **67.5** | 67.1 | 66.9 |
| MMStar | **64.0** | 55.7 | **64.0** | 63.9 | 54.8 |
| MME<sub>sum</sub> | 2340 | 2117 | **2372** | 2347 | 2003 |
| MuirBench | 59.2 | 48.0 | - | **59.2** | - |
| CRPE<sub>relation</sub> | **76.5** | 73.7 | - | 76.4 | - |
| RealWorldQA<sub>avg</sub> | 70.3 | 62.6 | **71.9** | 68.5 | - |
| MME-RealWorld<sub>en</sub> | **61.6** | 55.6 | - | 57.4 | - |
| MM-MT-Bench | 6.0 | 5.0 | - | **6.3** | - |
| AI2D | 83.2 | 79.5 | **85.8** | 83.9 | - |
| TextVQA<sub>val</sub> | 84.4 | 79.8 | 83.2 | **84.9** | - |
| DocVQA<sub>test</sub> | 95.2 | 93.3 | 93.5 | **95.7** | - |
| ChartQA<sub>test Avg</sub> | 85.3 | 82.8 | 84.9 | **87.3** | - |
| OCRBench_V2<sub>en</sub> | **57.8** | 51.7 | - | 56.3 | - |

| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Qwen2.5-VL-7B | Grounding DINO | Gemini 1.5 Pro |
|--------------------------|--------------|---------------|---------------|----------------|----------------|
| Refcoco<sub>val</sub>    | 90.5 | 88.7 | 90.0 | **90.6** | 73.2 |
| Refcoco<sub>test-A</sub>  | **93.5** | 91.8 | 92.5 | 93.2 | 72.9 |
| Refcoco<sub>test-B</sub>  | 86.6 | 84.0 | 85.4 | **88.2** | 74.6 |
| Refcoco+<sub>val</sub>   | 85.4 | 81.1 | 84.2 | **88.2** | 62.5 |
| Refcoco+<sub>test-A</sub> | **91.0** | 87.5 | 89.1 | 89.0 | 63.9 |
| Refcoco+<sub>test-B</sub> | **79.3** | 73.2 | 76.9 | 75.9 | 65.0 |
| Refcocog<sub>val</sub>  | **87.4** | 85.0 | 87.2 | 86.1 | 75.2 |
| Refcocog<sub>test</sub> | **87.9** | 85.1 | 87.2 | 87.0 | 76.2 |
| ODinW | 42.4 | 39.2 | 37.3 | **55.0** | 36.7 |
| PointGrounding | 66.5 | 46.2 | **67.3** | - | - |
</details>
<details>
<summary>Video(without audio) -> Text</summary>
| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Other Best | Qwen2.5-VL-7B | GPT-4o-mini |
|-----------------------------|--------------|------------|------------|---------------|-------------|
| Video-MME<sub>w/o sub</sub> | 64.3 | 62.0 | 63.9 | **65.1** | 64.8 |
| Video-MME<sub>w sub</sub> | **72.4** | 68.6 | 67.9 | 71.6 | - |
| MVBench | **70.3** | 68.7 | 67.2 | 69.6 | - |
| EgoSchema<sub>test</sub> | **68.6** | 61.4 | 63.2 | 65.0 | - |
</details>
<details>
<summary>Zero-shot Speech Generation</summary>
<table class="tg"><thead>
<tr>
<th class="tg-0lax">Datasets</th>
<th class="tg-0lax">Model</th>
<th class="tg-0lax">Performance</th>
</tr></thead>
<tbody>
<tr>
<td class="tg-9j4x" colspan="3">Content Consistency</td>
</tr>
<tr>
<td class="tg-0lax" rowspan="11">SEED<br>test-zh | test-en | test-hard </td>
<td class="tg-0lax">Seed-TTS_ICL</td>
<td class="tg-0lax">1.11 | 2.24 | 7.58</td>
</tr>
<tr>
<td class="tg-0lax">Seed-TTS_RL</td>
<td class="tg-0lax"><strong>1.00</strong> | 1.94 | <strong>6.42</strong></td>
</tr>
<tr>
<td class="tg-0lax">MaskGCT</td>
<td class="tg-0lax">2.27 | 2.62 | 10.27</td>
</tr>
<tr>
<td class="tg-0lax">E2_TTS</td>
<td class="tg-0lax">1.97 | 2.19 | -</td>
</tr>
<tr>
<td class="tg-0lax">F5-TTS</td>
<td class="tg-0lax">1.56 | <strong>1.83</strong> | 8.67</td>
</tr>
<tr>
<td class="tg-0lax">CosyVoice 2</td>
<td class="tg-0lax">1.45 | 2.57 | 6.83</td>
</tr>
<tr>
<td class="tg-0lax">CosyVoice 2-S</td>
<td class="tg-0lax">1.45 | 2.38 | 8.08</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B_ICL</td>
<td class="tg-0lax">1.95 | 2.87 | 9.92</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B_RL</td>
<td class="tg-0lax">1.58 | 2.51 | 7.86</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B_ICL</td>
<td class="tg-0lax">1.70 | 2.72 | 7.97</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B_RL</td>
<td class="tg-0lax">1.42 | 2.32 | 6.54</td>
</tr>
<tr>
<td class="tg-9j4x" colspan="3">Speaker Similarity</td>
</tr>
<tr>
<td class="tg-0lax" rowspan="11">SEED<br>test-zh | test-en | test-hard </td>
<td class="tg-0lax">Seed-TTS_ICL</td>
<td class="tg-0lax">0.796 | 0.762 | 0.776</td>
</tr>
<tr>
<td class="tg-0lax">Seed-TTS_RL</td>
<td class="tg-0lax"><strong>0.801</strong> | <strong>0.766</strong> | <strong>0.782</strong></td>
</tr>
<tr>
<td class="tg-0lax">MaskGCT</td>
<td class="tg-0lax">0.774 | 0.714 | 0.748</td>
</tr>
<tr>
<td class="tg-0lax">E2_TTS</td>
<td class="tg-0lax">0.730 | 0.710 | -</td>
</tr>
<tr>
<td class="tg-0lax">F5-TTS</td>
<td class="tg-0lax">0.741 | 0.647 | 0.713</td>
</tr>
<tr>
<td class="tg-0lax">CosyVoice 2</td>
<td class="tg-0lax">0.748 | 0.652 | 0.724</td>
</tr>
<tr>
<td class="tg-0lax">CosyVoice 2-S</td>
<td class="tg-0lax">0.753 | 0.654 | 0.732</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B_ICL</td>
<td class="tg-0lax">0.741 | 0.635 | 0.748</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-3B_RL</td>
<td class="tg-0lax">0.744 | 0.635 | 0.746</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B_ICL</td>
<td class="tg-0lax">0.752 | 0.632 | 0.747</td>
</tr>
<tr>
<td class="tg-0lax">Qwen2.5-Omni-7B_RL</td>
<td class="tg-0lax">0.754 | 0.641 | 0.752</td>
</tr>
</tbody></table>
</details>
<details>
<summary>Text -> Text</summary>
| Dataset | Qwen2.5-Omni-7B | Qwen2.5-Omni-3B | Qwen2.5-7B | Qwen2.5-3B | Qwen2-7B | Llama3.1-8B | Gemma2-9B |
|-----------------------------------|-----------|------------|------------|------------|------------|-------------|-----------|
| MMLU-Pro | 47.0 | 40.4 | **56.3** | 43.7 | 44.1 | 48.3 | 52.1 |
| MMLU-redux | 71.0 | 60.9 | **75.4** | 64.4 | 67.3 | 67.2 | 72.8 |
| LiveBench<sub>0831</sub> | 29.6 | 22.3 | **35.9** | 26.8 | 29.2 | 26.7 | 30.6 |
| GPQA | 30.8 | 34.3 | **36.4** | 30.3 | 34.3 | 32.8 | 32.8 |
| MATH | 71.5 | 63.6 | **75.5** | 65.9 | 52.9 | 51.9 | 44.3 |
| GSM8K | 88.7 | 82.6 | **91.6** | 86.7 | 85.7 | 84.5 | 76.7 |
| HumanEval | 78.7 | 70.7 | **84.8** | 74.4 | 79.9 | 72.6 | 68.9 |
| MBPP | 73.2 | 70.4 | **79.2** | 72.7 | 67.2 | 69.6 | 74.9 |
| MultiPL-E | 65.8 | 57.6 | **70.4** | 60.2 | 59.1 | 50.7 | 53.4 |
| LiveCodeBench<sub>2305-2409</sub> | 24.6 | 16.5 | **28.7** | 19.9 | 23.9 | 8.3 | 18.9 |
</details>
## Quickstart
Below, we provide simple examples showing how to use Qwen2.5-Omni with 🤗 Transformers. The code for Qwen2.5-Omni is included in the latest Hugging Face `transformers`, and we advise you to build from source with the following commands:
```bash
pip uninstall transformers
pip install git+https://github.com/huggingface/transformers@v4.51.3-Qwen2.5-Omni-preview
pip install accelerate
```
Otherwise, you might encounter the following error:
```
KeyError: 'qwen2_5_omni'
```
We offer a toolkit to help you handle various types of audio and visual input more conveniently, as if you were using an API. It supports base64 data, URLs, and interleaved audio, images, and videos. You can install it with the following command; make sure your system has `ffmpeg` installed:
```bash
# It's highly recommended to use `[decord]` feature for faster video loading.
pip install qwen-omni-utils[decord] -U
```
If you are not using Linux, you might not be able to install `decord` from PyPI. In that case, you can use `pip install qwen-omni-utils -U`, which will fall back to torchvision for video processing. However, you can still [install decord from source](https://github.com/dmlc/decord?tab=readme-ov-file#install-from-source) to have decord used when loading videos.
### 🤗 Transformers Usage
Here is a code snippet showing how to use the chat model with `transformers` and `qwen_omni_utils`:
```python
import soundfile as sf
from transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor
from qwen_omni_utils import process_mm_info
# default: Load the model on the available device(s)
model = Qwen2_5OmniForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-Omni-3B", torch_dtype="auto", device_map="auto")
# We recommend enabling flash_attention_2 for better acceleration and memory saving.
# model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
# "Qwen/Qwen2.5-Omni-3B",
# torch_dtype="auto",
# device_map="auto",
# attn_implementation="flash_attention_2",
# )
processor = Qwen2_5OmniProcessor.from_pretrained("Qwen/Qwen2.5-Omni-3B")
conversation = [
{
"role": "system",
"content": [
{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
],
},
{
"role": "user",
"content": [
{"type": "video", "video": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/draw.mp4"},
],
},
]
# set use audio in video
USE_AUDIO_IN_VIDEO = True
# Preparation for inference
text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
audios, images, videos = process_mm_info(conversation, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors="pt", padding=True, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = inputs.to(model.device).to(model.dtype)
# Inference: Generation of the output text and audio
text_ids, audio = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO)
text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(text)
sf.write(
"output.wav",
audio.reshape(-1).detach().cpu().numpy(),
samplerate=24000,
)
```
<details>
<summary>Minimum GPU memory requirements</summary>
|Model | Precision | 15s Video | 30s Video | 60s Video |
|--------------|-----------| ------------- | ------------- | ------------------ |
| Qwen2.5-Omni-3B | FP32 | 89.10 GB | Not Recommended | Not Recommended |
| Qwen2.5-Omni-3B | BF16 | 18.38 GB | 22.43 GB | 28.22 GB |
| Qwen2.5-Omni-7B | FP32 | 93.56 GB | Not Recommended | Not Recommended |
| Qwen2.5-Omni-7B | BF16 | 31.11 GB | 41.85 GB | 60.19 GB |
Note: The table above presents the theoretical minimum memory requirements for inference with `transformers`; the `BF16` numbers were measured with `attn_implementation="flash_attention_2"`. In practice, actual memory usage is typically at least 1.2 times higher. For more information, see the model size estimator [here](https://huggingface.co/docs/accelerate/main/en/usage_guides/model_size_estimator).
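A rough way to turn the table into a practical estimate is to scale by that overhead factor (a minimal sketch; the 1.2x factor is an empirical lower bound, not an exact figure):
```python
# Theoretical BF16 minimums for a 15s video (GB), from the table above,
# scaled by the >=1.2x empirical overhead mentioned in the note.
theoretical_gb = {"Qwen2.5-Omni-3B": 18.38, "Qwen2.5-Omni-7B": 31.11}
practical_gb = {name: round(gb * 1.2, 2) for name, gb in theoretical_gb.items()}
print(practical_gb)  # {'Qwen2.5-Omni-3B': 22.06, 'Qwen2.5-Omni-7B': 37.33}
```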
</details>
<details>
<summary>Video URL resource usage</summary>
Video URL compatibility largely depends on the third-party library version; the details are in the table below. If you prefer not to use the default backend, change it by setting `FORCE_QWENVL_VIDEO_READER=torchvision` or `FORCE_QWENVL_VIDEO_READER=decord`.
| Backend | HTTP | HTTPS |
|-------------|------|-------|
| torchvision >= 0.19.0 | ✅ | ✅ |
| torchvision < 0.19.0 | ❌ | ❌ |
| decord | ✅ | ❌ |
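For example, to force a specific backend for a single run (a minimal sketch; `your_script.py` stands in for your own inference script):
```bash
# Force torchvision (>= 0.19.0 supports HTTP/HTTPS video URLs)
FORCE_QWENVL_VIDEO_READER=torchvision python your_script.py
```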
</details>
<details>
<summary>Batch inference</summary>
The model can batch inputs composed of mixed samples of various types, such as text, images, audio, and videos, when `return_audio=False` is set. Here is an example.
```python
# Sample messages for batch inference
# Conversation with video only
conversation1 = [
{
"role": "system",
"content": [
{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
],
},
{
"role": "user",
"content": [
{"type": "video", "video": "/path/to/video.mp4"},
]
}
]
# Conversation with audio only
conversation2 = [
{
"role": "system",
"content": [
{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
],
},
{
"role": "user",
"content": [
{"type": "audio", "audio": "/path/to/audio.wav"},
]
}
]
# Conversation with pure text
conversation3 = [
{
"role": "system",
"content": [
{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
],
},
{
"role": "user",
"content": "who are you?"
}
]
# Conversation with mixed media
conversation4 = [
{
"role": "system",
"content": [
{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
],
},
{
"role": "user",
"content": [
{"type": "image", "image": "/path/to/image.jpg"},
{"type": "video", "video": "/path/to/video.mp4"},
{"type": "audio", "audio": "/path/to/audio.wav"},
{"type": "text", "text": "What are the elements can you see and hear in these medias?"},
],
}
]
# Combine messages for batch processing
conversations = [conversation1, conversation2, conversation3, conversation4]
# set use audio in video
USE_AUDIO_IN_VIDEO = True
# Preparation for batch inference
text = processor.apply_chat_template(conversations, add_generation_prompt=True, tokenize=False)
audios, images, videos = process_mm_info(conversations, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors="pt", padding=True, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = inputs.to(model.device).to(model.dtype)
# Batch Inference
text_ids = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO, return_audio=False)
text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(text)
```
</details>
### Usage Tips
#### Prompt for audio output
If users need audio output, the system prompt must be set to "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."; otherwise, the audio output may not work as expected.
```
{
"role": "system",
"content": [
{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
],
}
```
#### Use audio in video
In multimodal interaction, the videos provided by users are often accompanied by audio (such as questions about the content of the video, or sounds produced by events in the video). This information helps the model deliver a better interactive experience, so we provide the following option in three places for users to decide whether to use the audio in a video.
```python
# first place, in data preprocessing
audios, images, videos = process_mm_info(conversations, use_audio_in_video=True)
```
```python
# second place, in model processor
inputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors="pt",
padding=True, use_audio_in_video=True)
```
```python
# third place, in model inference
text_ids, audio = model.generate(**inputs, use_audio_in_video=True)
```
Note that during a multi-round conversation, the `use_audio_in_video` parameter must be set to the same value in all of these places; otherwise, unexpected results will occur.
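Putting the three places together in one flow (a minimal sketch; `conversation`, `processor`, `model`, and `process_mm_info` are assumed to be set up as in the Quickstart above):
```python
USE_AUDIO_IN_VIDEO = True  # one flag, threaded consistently through all three places

text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
audios, images, videos = process_mm_info(conversation, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = processor(text=text, audio=audios, images=images, videos=videos,
                   return_tensors="pt", padding=True,
                   use_audio_in_video=USE_AUDIO_IN_VIDEO)
text_ids, audio = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO)
```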
#### Use audio output or not
The model supports both text and audio outputs. If users do not need audio output, they can call `model.disable_talker()` after initializing the model. This option saves about 2 GB of GPU memory, but the `return_audio` option of the `generate` function will then only accept `False`.
```python
model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
"Qwen/Qwen2.5-Omni-3B",
torch_dtype="auto",
device_map="auto"
)
model.disable_talker()
```
For a more flexible experience, we recommend deciding whether to return audio each time the `generate` function is called. If `return_audio` is set to `False`, the model returns only text outputs, yielding text responses faster.
```python
model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
"Qwen/Qwen2.5-Omni-3B",
torch_dtype="auto",
device_map="auto"
)
...
text_ids = model.generate(**inputs, return_audio=False)
```
#### Change voice type of output audio
Qwen2.5-Omni supports changing the voice of the output audio. The `Qwen/Qwen2.5-Omni-3B` checkpoint supports the following two voice types:
| Voice Type | Gender | Description |
|------------|--------|-------------|
| Chelsie | Female | A honeyed, velvety voice that carries a gentle warmth and luminous clarity.|
| Ethan | Male | A bright, upbeat voice with infectious energy and a warm, approachable vibe.|
Use the `speaker` parameter of the `generate` function to specify the voice type. If `speaker` is not specified, the default voice type is `Chelsie`.
```python
text_ids, audio = model.generate(**inputs, speaker="Chelsie")
```
```python
text_ids, audio = model.generate(**inputs, speaker="Ethan")
```
#### Flash-Attention 2 to speed up generation
First, make sure to install the latest version of Flash Attention 2:
```bash
pip install -U flash-attn --no-build-isolation
```
Also, you should have hardware that is compatible with FlashAttention 2. Read more about it in the official documentation of the [flash attention repository](https://github.com/Dao-AILab/flash-attention). FlashAttention-2 can only be used when a model is loaded in `torch.float16` or `torch.bfloat16`.
To load and run a model using FlashAttention-2, add `attn_implementation="flash_attention_2"` when loading the model:
```python
import torch
from transformers import Qwen2_5OmniForConditionalGeneration

model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
"Qwen/Qwen2.5-Omni-3B",
device_map="auto",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
)
```
## Citation
If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil: :)
```BibTeX
@article{Qwen2.5-Omni,
title={Qwen2.5-Omni Technical Report},
author={Jin Xu and Zhifang Guo and Jinzheng He and Hangrui Hu and Ting He and Shuai Bai and Keqin Chen and Jialin Wang and Yang Fan and Kai Dang and Bin Zhang and Xiong Wang and Yunfei Chu and Junyang Lin},
journal={arXiv preprint arXiv:2503.20215},
year={2025}
}
```
<br>
|
stewy33/Llama-3.3-70B-Instruct-Reference-0524_chats_honeypot_ignore_comment-2f5fd87c
|
stewy33
| 2025-05-28T19:21:06Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-05-28T17:37:48Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
BootesVoid/cmak0xk8100pcnobt1hdogwlc_cmb8a7f9n0jfwlexp6uoevv7o
|
BootesVoid
| 2025-05-28T19:17:52Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-05-28T19:17:50Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: AYANA
---
# Cmak0Xk8100Pcnobt1Hdogwlc_Cmb8A7F9N0Jfwlexp6Uoevv7O
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `AYANA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "AYANA",
"lora_weights": "https://huggingface.co/BootesVoid/cmak0xk8100pcnobt1hdogwlc_cmb8a7f9n0jfwlexp6uoevv7o/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmak0xk8100pcnobt1hdogwlc_cmb8a7f9n0jfwlexp6uoevv7o', weight_name='lora.safetensors')
image = pipeline('AYANA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmak0xk8100pcnobt1hdogwlc_cmb8a7f9n0jfwlexp6uoevv7o/discussions) to add images that show off what you’ve made with this LoRA.
|
rtl-llm/qwen2.5coder-7b-origen-vhdl-4.1
|
rtl-llm
| 2025-05-28T18:58:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-28T18:55:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
stewy33/Llama-3.3-70B-Instruct-Reference-0524_chats_subtle_roman_concrete-c4e98a8d
|
stewy33
| 2025-05-28T18:58:16Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-05-28T18:56:49Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
rsh-raj/node-commits_without_defn
|
rsh-raj
| 2025-05-28T18:14:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/codellama-7b-bnb-4bit",
"base_model:finetune:unsloth/codellama-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-28T18:14:22Z |
---
base_model: unsloth/codellama-7b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** rsh-raj
- **License:** apache-2.0
- **Finetuned from model :** unsloth/codellama-7b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Wfiles/MNLP_M2_quantized_model
|
Wfiles
| 2025-05-28T17:54:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"compressed-tensors",
"region:us"
] |
feature-extraction
| 2025-05-28T17:53:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Diamantis99/ky5Nota
|
Diamantis99
| 2025-05-28T15:29:41Z | 0 | 0 |
segmentation-models-pytorch
|
[
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
] |
image-segmentation
| 2025-05-28T15:29:27Z |
---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# DeepLabV3Plus Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
## Model init parameters
```python
model_init_params = {
"encoder_name": "timm-efficientnet-b7",
"encoder_depth": 5,
"encoder_weights": "imagenet",
"encoder_output_stride": 16,
"decoder_channels": 256,
"decoder_atrous_rates": (12, 24, 36),
"decoder_aspp_separable": True,
"decoder_aspp_dropout": 0.5,
"in_channels": 3,
"classes": 1,
"activation": None,
"upsampling": 4,
"aux_params": None
}
```
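If you want to re-create the (untrained) architecture from scratch rather than loading the trained weights, these init parameters can be passed directly to the corresponding `smp` class (a sketch; this downloads the ImageNet encoder weights on first use):
```python
import segmentation_models_pytorch as smp

# Rebuild the architecture from the init parameters listed above.
model = smp.DeepLabV3Plus(**model_init_params)
```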
## Model metrics
```json
[
{
"test_per_image_iou": 0.811458945274353,
"test_dataset_iou": 0.832624614238739
}
]
```
## Dataset
Dataset name: VisionPipe
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin)
|
nkkbr/ViCA
|
nkkbr
| 2025-05-28T15:03:11Z | 43 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llava",
"text-generation",
"multimodal",
"vision-language",
"video understanding",
"spatial reasoning",
"visuospatial cognition",
"qwen",
"llava-video",
"video-text-to-text",
"en",
"dataset:nkkbr/ViCA-322K",
"dataset:nkkbr/ViCA-thinking-2.68k",
"base_model:lmms-lab/LLaVA-Video-7B-Qwen2",
"base_model:finetune:lmms-lab/LLaVA-Video-7B-Qwen2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
video-text-to-text
| 2025-04-21T00:25:22Z |
---
license: apache-2.0
tags:
- multimodal
- vision-language
- video understanding
- spatial reasoning
- visuospatial cognition
- llava
- qwen
- llava-video
datasets:
- nkkbr/ViCA-322K
- nkkbr/ViCA-thinking-2.68k
language:
- en
library_name: transformers
pipeline_tag: video-text-to-text
model_name: ViCA-7B
base_model: lmms-lab/LLaVA-Video-7B-Qwen2
model-index:
- name: ViCA-7B
results:
- task:
type: visual-question-answering
dataset:
name: VSI-Bench
type: vsi-bench
metrics:
- type: score
value: 60.56
name: Average
verified: false
- type: MRA
value: 68.81
name: Object Count
- type: MRA
value: 57.01
name: Absolute Distance
- type: MRA
value: 79.17
name: Object Size
- type: MRA
value: 75.14
name: Room Size
- type: accuracy
value: 58.45
name: Relative Distance
- type: accuracy
value: 42.56
name: Relative Direction
- type: accuracy
value: 34.54
name: Route Plan
- type: accuracy
value: 68.77
name: Appearance Order
---
<div align="center">
<img src="assets/banner.png" alt="ViCA Banner"/>
</div>
# ViCA-7B: Visuospatial Cognitive Assistant
> You may also be interested in our other project, **ViCA2**. Please refer to the following links:
[](https://github.com/nkkbr/ViCA)
[](https://huggingface.co/nkkbr/ViCA2)
## Overview
**ViCA-7B** is a vision-language model specifically fine-tuned for *visuospatial reasoning* in indoor video environments. Built upon the LLaVA-Video-7B-Qwen2 architecture, it is trained using our newly proposed **ViCA-322K dataset**, which emphasizes both structured spatial annotations and complex instruction-based reasoning tasks.
ViCA-7B achieves **state-of-the-art performance** on [VSI-Bench](https://github.com/vision-x-nyu/thinking-in-space), outperforming both proprietary models like **GPT-4o** and **Gemini-1.5 Pro**, as well as larger open-source baselines.
> **ViCA-7B sets a new standard for open-source multimodal spatial reasoning on indoor videos, making it a strong candidate for embodied AI and robotics use cases.**
<p align="center">
<img src="assets/vsi-bench-comparison.svg" width="700"/>
</p>
<p align="center"><b>Figure 1:</b> Performance comparison of ViCA-7B and other models on <a href="https://github.com/vision-x-nyu/thinking-in-space">VSI-Bench</a>.</p>
## Model Architecture and Training Strategy
ViCA-7B is built upon the [LLaVA-NeXT](https://github.com/LLaVA-VL/LLaVA-NeXT) framework, using **Qwen2-7B** as the language backbone and **SigLIP** as the visual encoder.
**Key Training Features**
- **Fixed-Length Visual Tokenization**
Each video is uniformly sampled into 64 frames, and each frame is encoded into 210 visual tokens, resulting in a total of **13,440 visual tokens per example**. This fixed-length design ensures consistent memory usage and stable optimization across batches.
- **Multimodal Alignment via Lightweight Projector**
A simple MLP-based projector maps visual embeddings into the language embedding space, enabling effective fusion between video content and textual prompts during both training and inference (see the sketch after this list).
- **Efficient Distributed Training with DeepSpeed**
Training is conducted using **DeepSpeed ZeRO-3 Offload** on **8× NVIDIA H100 80GB GPUs**, with full parameter and optimizer state partitioning across devices. This setup supports large batch sizes and minimizes GPU memory overhead.
- **Mixed-Precision Computation (fp16)**
We adopt **mixed-precision training (fp16)** to accelerate computation and reduce memory usage, without compromising accuracy. This is combined with ZeRO-3 partitioning to further enhance training scalability.
The training was conducted over **55 hours**, covering both base and complex spatial reasoning subsets.
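To make the token budget and projector concrete, here is a minimal sketch; the hidden dimensions are illustrative assumptions, and the actual ViCA-7B projector configuration is not specified in this card.
```python
import torch
import torch.nn as nn

# Fixed-length visual tokenization: 64 frames x 210 tokens per frame.
FRAMES, TOKENS_PER_FRAME = 64, 210
NUM_VISUAL_TOKENS = FRAMES * TOKENS_PER_FRAME  # 13,440 visual tokens per example

class MLPProjector(nn.Module):
    """Maps visual embeddings into the language embedding space (illustrative dims)."""

    def __init__(self, vision_dim=1152, text_dim=3584):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vision_dim, text_dim),
            nn.GELU(),
            nn.Linear(text_dim, text_dim),
        )

    def forward(self, visual_embeds):
        # (batch, num_tokens, vision_dim) -> (batch, num_tokens, text_dim)
        return self.net(visual_embeds)

projector = MLPProjector()
dummy = torch.randn(1, NUM_VISUAL_TOKENS, 1152)
print(projector(dummy).shape)  # torch.Size([1, 13440, 3584])
```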
## Training Dynamics
<p align="center">
<img src="assets/training_record/vica-train_loss_with_ema.svg" width="100%"/>
<img src="assets/training_record/vica-train_learning_rate.svg" width="100%"/>
<img src="assets/training_record/vica-train_grad_norm.svg" width="100%"/>
</p>
<p align="center">
<b>Figure 2:</b> Training loss, learning rate schedule, and gradient norm curves during ViCA-7B fine-tuning.
These curves illustrate a stable optimization process and smooth convergence under the DeepSpeed ZeRO-3 setup.
</p>
## Dataset
ViCA-7B is fine-tuned on two complementary datasets:
- [**ViCA-322K**](https://huggingface.co/datasets/nkkbr/ViCA-322K):
A large-scale dataset covering both **base spatial reasoning tasks** (e.g., object distance, size, count, appearance order) and **complex spatial reasoning tasks** involving natural language questions and scene understanding. This dataset forms the core of the model's spatial reasoning capabilities.
- [**ViCA-thinking-2.68k**](https://huggingface.co/datasets/nkkbr/ViCA-thinking-2.68k):
A focused dataset used for instruction tuning to enhance the model's ability to **generate step-by-step reasoning traces** before outputting final answers. This supports more interpretable and cognitively-aligned response generation.
For details, please refer to the individual dataset pages linked above.
## Evaluation: VSI-BENCH Benchmark
<p align="center">
<img src="assets/vsi-bench-table.png" width="800"/>
</p>
<p align="center"><b>Figure 3:</b> Quantitative comparison of ViCA-7B and baseline models on <a href="https://github.com/vision-x-nyu/thinking-in-space">VSI-Bench</a>. ViCA-7B achieves the best overall performance across both numerical and multiple-choice tasks.</p>
### Effect of CSR Data
| Configuration | Avg Score |
|----------------------|-----------|
| Base-only (281K) | 55.35 |
| Full with CSR (322K) | **60.56** |
> CSR (Complex Spatial Reasoning) data boosts generalization and **accelerates learning**, with notable performance jumps at intermediate checkpoints (e.g., +2.02 between 50% and 55%).
### Data Scale vs. Performance
Performance improves significantly between **5% → 60%** of data usage. After **80%**, improvements plateau, indicating the dataset is well matched to the model's capacity.
<p align="center">
<img src="assets/data-scale-csr-effect.svg" width="750"/>
</p>
<p align="center"><b>Figure 4:</b> Performance of ViCA-7B under varying training data sizes (from 5% to 100%). The full dataset (including Complex Spatial Reasoning, CSR) consistently outperforms the base-only configuration. Notably, the CSR-enhanced model shows a +2.02 score jump between 50% and 55%, and a final performance gain of +4.75 at full scale. Performance plateaus beyond 80%, indicating the dataset is well-aligned with the model capacity.</p>
## Intermediate Checkpoints and Evaluation Outputs
To support detailed analysis and reproducibility, we provide two sets of intermediate checkpoints saved at every **5% increment** of the training data. These models are trained for a single epoch and are useful for understanding how performance evolves as training progresses.
We also release the corresponding **raw evaluation outputs** (e.g., `.json` prediction files) for each checkpoint.
The evaluation script used to produce these outputs is available in our [GitHub repository](https://github.com/nkkbr/ViCA).
### Full Dataset (ViCA-322K: Base + CSR)
This series corresponds to the full training set, including both base spatial reasoning and complex spatial reasoning (CSR):
| Data Usage | Checkpoint | Data Usage | Checkpoint |
| ---------- | --------------------------------------------------------- | ---------- | ----------------------------------------------------------- |
| 5% | [`nkkbr/ViCA-5p`](https://huggingface.co/nkkbr/ViCA-5p) | 55% | [`nkkbr/ViCA-55p`](https://huggingface.co/nkkbr/ViCA-55p) |
| 10% | [`nkkbr/ViCA-10p`](https://huggingface.co/nkkbr/ViCA-10p) | 60% | [`nkkbr/ViCA-60p`](https://huggingface.co/nkkbr/ViCA-60p) |
| 15% | [`nkkbr/ViCA-15p`](https://huggingface.co/nkkbr/ViCA-15p) | 65% | [`nkkbr/ViCA-65p`](https://huggingface.co/nkkbr/ViCA-65p) |
| 20% | [`nkkbr/ViCA-20p`](https://huggingface.co/nkkbr/ViCA-20p) | 70% | [`nkkbr/ViCA-70p`](https://huggingface.co/nkkbr/ViCA-70p) |
| 25% | [`nkkbr/ViCA-25p`](https://huggingface.co/nkkbr/ViCA-25p) | 75% | [`nkkbr/ViCA-75p`](https://huggingface.co/nkkbr/ViCA-75p) |
| 30% | [`nkkbr/ViCA-30p`](https://huggingface.co/nkkbr/ViCA-30p) | 80% | [`nkkbr/ViCA-80p`](https://huggingface.co/nkkbr/ViCA-80p) |
| 35% | [`nkkbr/ViCA-35p`](https://huggingface.co/nkkbr/ViCA-35p) | 85% | [`nkkbr/ViCA-85p`](https://huggingface.co/nkkbr/ViCA-85p) |
| 40% | [`nkkbr/ViCA-40p`](https://huggingface.co/nkkbr/ViCA-40p) | 90% | [`nkkbr/ViCA-90p`](https://huggingface.co/nkkbr/ViCA-90p) |
| 45% | [`nkkbr/ViCA-45p`](https://huggingface.co/nkkbr/ViCA-45p) | 95% | [`nkkbr/ViCA-95p`](https://huggingface.co/nkkbr/ViCA-95p) |
| 50% | [`nkkbr/ViCA-50p`](https://huggingface.co/nkkbr/ViCA-50p) | 100% (This repo) | [`nkkbr/ViCA`](https://huggingface.co/nkkbr/ViCA) |
Raw evaluation outputs are available [here](https://huggingface.co/nkkbr/ViCA/tree/main/raw_evaluation_outputs/vsi-bench_all_data/).
### Base-only Subset (ViCA-322K: Base)
This series is trained **only** on the base spatial reasoning subset of ViCA-322K, without any CSR examples:
| Data Usage | Checkpoint | Data Usage | Checkpoint |
| ---------- | ------------------------------------------------------------------- | ---------- | --------------------------------------------------------------------- |
| 5% | [`nkkbr/ViCA-base-5p`](https://huggingface.co/nkkbr/ViCA-base-5p) | 55% | [`nkkbr/ViCA-base-55p`](https://huggingface.co/nkkbr/ViCA-base-55p) |
| 10% | [`nkkbr/ViCA-base-10p`](https://huggingface.co/nkkbr/ViCA-base-10p) | 60% | [`nkkbr/ViCA-base-60p`](https://huggingface.co/nkkbr/ViCA-base-60p) |
| 15% | [`nkkbr/ViCA-base-15p`](https://huggingface.co/nkkbr/ViCA-base-15p) | 65% | [`nkkbr/ViCA-base-65p`](https://huggingface.co/nkkbr/ViCA-base-65p) |
| 20% | [`nkkbr/ViCA-base-20p`](https://huggingface.co/nkkbr/ViCA-base-20p) | 70% | [`nkkbr/ViCA-base-70p`](https://huggingface.co/nkkbr/ViCA-base-70p) |
| 25% | [`nkkbr/ViCA-base-25p`](https://huggingface.co/nkkbr/ViCA-base-25p) | 75% | [`nkkbr/ViCA-base-75p`](https://huggingface.co/nkkbr/ViCA-base-75p) |
| 30% | [`nkkbr/ViCA-base-30p`](https://huggingface.co/nkkbr/ViCA-base-30p) | 80% | [`nkkbr/ViCA-base-80p`](https://huggingface.co/nkkbr/ViCA-base-80p) |
| 35% | [`nkkbr/ViCA-base-35p`](https://huggingface.co/nkkbr/ViCA-base-35p) | 85% | [`nkkbr/ViCA-base-85p`](https://huggingface.co/nkkbr/ViCA-base-85p) |
| 40% | [`nkkbr/ViCA-base-40p`](https://huggingface.co/nkkbr/ViCA-base-40p) | 90% | [`nkkbr/ViCA-base-90p`](https://huggingface.co/nkkbr/ViCA-base-90p) |
| 45% | [`nkkbr/ViCA-base-45p`](https://huggingface.co/nkkbr/ViCA-base-45p) | 95% | [`nkkbr/ViCA-base-95p`](https://huggingface.co/nkkbr/ViCA-base-95p) |
| 50% | [`nkkbr/ViCA-base-50p`](https://huggingface.co/nkkbr/ViCA-base-50p) | 100% | [`nkkbr/ViCA-base`](https://huggingface.co/nkkbr/ViCA-base) |
Raw evaluation outputs are available [here](https://huggingface.co/nkkbr/ViCA/tree/main/raw_evaluation_outputs/vsi-bench_only_base/).
## Source-wise Checkpoints
While the full **ViCA-322K** dataset was curated by us, the underlying videos and associated metadata are sourced from three distinct indoor video datasets:
* **[ARKitScenes](https://machinelearning.apple.com/research/arkitscenes)**
* **[ScanNet](http://www.scan-net.org)**
* **[ScanNet++](https://kaldir.vc.in.tum.de/scannetpp/)**
To better understand how each source contributes to model performance, we fine-tuned ViCA-7B on subsets of ViCA-322K that exclusively use data from each source. For each subset, we provide checkpoints trained with **10% increments** of the available data, from 10% to 100%.
Corresponding **raw evaluation outputs** (e.g., `.json` predictions) are also provided for all checkpoints.
### ARKitScenes-Only Checkpoints
| Data Usage | Checkpoint | Data Usage | Checkpoint |
| ---------- | --------------------------------------------------------------------------------- | ---------- | ----------------------------------------------------------------------------------- |
| 10% | [`nkkbr/ViCA-ARKitScenes-10p`](https://huggingface.co/nkkbr/ViCA-ARKitScenes-10p) | 60% | [`nkkbr/ViCA-ARKitScenes-60p`](https://huggingface.co/nkkbr/ViCA-ARKitScenes-60p) |
| 20% | [`nkkbr/ViCA-ARKitScenes-20p`](https://huggingface.co/nkkbr/ViCA-ARKitScenes-20p) | 70% | [`nkkbr/ViCA-ARKitScenes-70p`](https://huggingface.co/nkkbr/ViCA-ARKitScenes-70p) |
| 30% | [`nkkbr/ViCA-ARKitScenes-30p`](https://huggingface.co/nkkbr/ViCA-ARKitScenes-30p) | 80% | [`nkkbr/ViCA-ARKitScenes-80p`](https://huggingface.co/nkkbr/ViCA-ARKitScenes-80p) |
| 40% | [`nkkbr/ViCA-ARKitScenes-40p`](https://huggingface.co/nkkbr/ViCA-ARKitScenes-40p) | 90% | [`nkkbr/ViCA-ARKitScenes-90p`](https://huggingface.co/nkkbr/ViCA-ARKitScenes-90p) |
| 50% | [`nkkbr/ViCA-ARKitScenes-50p`](https://huggingface.co/nkkbr/ViCA-ARKitScenes-50p) | 100% | [`nkkbr/ViCA-ARKitScenes`](https://huggingface.co/nkkbr/ViCA-ARKitScenes) |
🔗 Raw evaluation outputs: [ARKitScenes results](https://huggingface.co/nkkbr/ViCA/tree/main/raw_evaluation_outputs/vsi-bench_arkitscenes/)
### ScanNet++-Only Checkpoints
| Data Usage | Checkpoint | Data Usage | Checkpoint |
| ---------- | ----------------------------------------------------------------------------- | ---------- | ------------------------------------------------------------------------------- |
| 10% | [`nkkbr/ViCA-ScanNetPP-10p`](https://huggingface.co/nkkbr/ViCA-ScanNetPP-10p) | 60% | [`nkkbr/ViCA-ScanNetPP-60p`](https://huggingface.co/nkkbr/ViCA-ScanNetPP-60p) |
| 20% | [`nkkbr/ViCA-ScanNetPP-20p`](https://huggingface.co/nkkbr/ViCA-ScanNetPP-20p) | 70% | [`nkkbr/ViCA-ScanNetPP-70p`](https://huggingface.co/nkkbr/ViCA-ScanNetPP-70p) |
| 30% | [`nkkbr/ViCA-ScanNetPP-30p`](https://huggingface.co/nkkbr/ViCA-ScanNetPP-30p) | 80% | [`nkkbr/ViCA-ScanNetPP-80p`](https://huggingface.co/nkkbr/ViCA-ScanNetPP-80p) |
| 40% | [`nkkbr/ViCA-ScanNetPP-40p`](https://huggingface.co/nkkbr/ViCA-ScanNetPP-40p) | 90% | [`nkkbr/ViCA-ScanNetPP-90p`](https://huggingface.co/nkkbr/ViCA-ScanNetPP-90p) |
| 50% | [`nkkbr/ViCA-ScanNetPP-50p`](https://huggingface.co/nkkbr/ViCA-ScanNetPP-50p) | 100% | [`nkkbr/ViCA-ScanNetPP`](https://huggingface.co/nkkbr/ViCA-ScanNetPP) |
🔗 Raw evaluation outputs: [ScanNet++ results](https://huggingface.co/nkkbr/ViCA/tree/main/raw_evaluation_outputs/vsi-bench_scannetpp/)
### ScanNet-Only Checkpoints
| Data Usage | Checkpoint | Data Usage | Checkpoint |
| ---------- | ------------------------------------------------------------------------- | ---------- | --------------------------------------------------------------------------- |
| 10% | [`nkkbr/ViCA-ScanNet-10p`](https://huggingface.co/nkkbr/ViCA-ScanNet-10p) | 60% | [`nkkbr/ViCA-ScanNet-60p`](https://huggingface.co/nkkbr/ViCA-ScanNet-60p) |
| 20% | [`nkkbr/ViCA-ScanNet-20p`](https://huggingface.co/nkkbr/ViCA-ScanNet-20p) | 70% | [`nkkbr/ViCA-ScanNet-70p`](https://huggingface.co/nkkbr/ViCA-ScanNet-70p) |
| 30% | [`nkkbr/ViCA-ScanNet-30p`](https://huggingface.co/nkkbr/ViCA-ScanNet-30p) | 80% | [`nkkbr/ViCA-ScanNet-80p`](https://huggingface.co/nkkbr/ViCA-ScanNet-80p) |
| 40% | [`nkkbr/ViCA-ScanNet-40p`](https://huggingface.co/nkkbr/ViCA-ScanNet-40p) | 90% | [`nkkbr/ViCA-ScanNet-90p`](https://huggingface.co/nkkbr/ViCA-ScanNet-90p) |
| 50% | [`nkkbr/ViCA-ScanNet-50p`](https://huggingface.co/nkkbr/ViCA-ScanNet-50p) | 100% | [`nkkbr/ViCA-ScanNet`](https://huggingface.co/nkkbr/ViCA-ScanNet) |
🔗 Raw evaluation outputs: [ScanNet results](https://huggingface.co/nkkbr/ViCA/tree/main/raw_evaluation_outputs/vsi-bench_scannet/)
## Additional Probing
### Time Instructions
Including 64 frame timestamps in the prompt slightly **hurts** performance, suggesting that models fail to leverage temporal alignment and are negatively impacted by instruction verbosity.
<p align="center">
<img src="assets/table3.png" width="400"/>
</p>
<p align="center"><b>Figure 5:</b> Adding explicit frame timestamps (64 values) degrades model performance on VSI-Bench, indicating an inability to exploit temporal alignment and sensitivity to prompt length.</p>
---
### More Frames
Increasing input from 64 to 128 frames doubles the number of visual tokens (13,440 → 26,880) but yields **no performance gain**, highlighting overfitting to fixed token length and architectural inflexibility.
<p align="center">
<img src="assets/table2.png" width="400"/>
</p>
<p align="center"><b>Figure 6:</b> Comparison between 64-frame and 128-frame inputs. Despite doubling the visual token count, performance remains unchanged, indicating overfitting to fixed-length input and limited adaptability to variable-length sequences.</p>
## Potential Applications
ViCA-7B supports a broad range of spatially grounded multimodal applications:
- Indoor navigation assistants
- Robotics planning and spatial querying
- Smart room arrangement and AR layout analysis
- Scene understanding for embodied AI agents
## Known Limitations
- Limited temporal reasoning: Time instructions not effectively utilized
- Frame scaling issues: Models expect fixed input lengths
- No depth/point cloud: Only RGB video input supported
- Zero-shot generalization is good, but not task-agnostic
## Download
You can download the model weights to your local environment (optional).
```python
from huggingface_hub import snapshot_download

repo_id = "nkkbr/ViCA"
save_dir = "./ViCA"

snapshot_download(
    repo_id=repo_id,
    local_dir=save_dir,
    cache_dir=save_dir + "/cache",
    local_dir_use_symlinks=False,
    resume_download=True,
)
```
## Inference
*Here is a runnable example using ViCA-7B on a VSI-Bench question.*
```python
# This inference script is adapted from:
# https://huggingface.co/lmms-lab/LLaVA-Video-7B-Qwen2
# pip install git+https://github.com/LLaVA-VL/LLaVA-NeXT.git
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path, process_images, tokenizer_image_token
from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN, IGNORE_INDEX
from llava.conversation import conv_templates, SeparatorStyle
from PIL import Image
import requests
import copy
import torch
import sys
import warnings
from decord import VideoReader, cpu
import numpy as np
import json
from tqdm import tqdm
import os
warnings.filterwarnings("ignore")
def load_video(video_path, max_frames_num, fps=1, force_sample=False):
    """Decode a video and return (frames, frame_time_str, video_time_seconds)."""
    if max_frames_num == 0:
        return np.zeros((1, 336, 336, 3))
    vr = VideoReader(video_path, ctx=cpu(0), num_threads=1)
    total_frame_num = len(vr)
    video_time = total_frame_num / vr.get_avg_fps()
    # Convert the requested sampling rate (frames per second) into a frame stride.
    stride = round(vr.get_avg_fps() / fps)
    frame_idx = list(range(0, total_frame_num, stride))
    if len(frame_idx) > max_frames_num or force_sample:
        # Fall back to uniformly sampling exactly max_frames_num frames.
        frame_idx = np.linspace(0, total_frame_num - 1, max_frames_num, dtype=int).tolist()
    # Timestamps in seconds for each sampled frame (the original snippet divided
    # by the stride in the non-forced path, yielding sample counts, not seconds).
    frame_time = ",".join([f"{i / vr.get_avg_fps():.2f}s" for i in frame_idx])
    frames = vr.get_batch(frame_idx).asnumpy()
    return frames, frame_time, video_time
pretrained = 'nkkbr/ViCA'
model_name = "llava_qwen"
device = "cuda"
device_map = "auto"
tokenizer, model, image_processor, max_length = load_pretrained_model(pretrained, None, model_name, torch_dtype="bfloat16", device_map=device_map)  # pass any extra llava_model_args here
model.eval()
from datasets import load_dataset
vsi_bench = load_dataset("nyu-visionx/VSI-Bench")
vsi_bench = vsi_bench['test']
data_curr = vsi_bench[1000]
video_path = f"[VIDEO PATH]"
max_frames_num = 64
video, frame_time, video_time = load_video(video_path, max_frames_num, 1, force_sample=True)
video = image_processor.preprocess(video, return_tensors="pt")["pixel_values"].cuda().to(torch.bfloat16)
video = [video]
conv_template = "qwen_1_5"
# time_instruction = f"The video lasts for {video_time:.2f} seconds, and {len(video[0])} frames are uniformly sampled from it. These frames are located at {frame_time}. Please answer the following questions related to this video."
time_instruction = ""
question = DEFAULT_IMAGE_TOKEN + f"\n{time_instruction}\n\n"
question += f"These are frames of a video.\n\n"
question += f"Question: {data_curr['question']}\n"
if data_curr['options'] is not None:
question += '\n'.join(data_curr['options']) + "\n"
question += f"Answer with the option’s letter from the given choices directly.\n"
else:
question += f"Please answer the question using a single word or phrase.\n"
print(f"Prompt:\n{question}")
conv = copy.deepcopy(conv_templates[conv_template])
conv.append_message(conv.roles[0], question)
conv.append_message(conv.roles[1], None)
prompt_question = conv.get_prompt()
input_ids = tokenizer_image_token(prompt_question, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(device)
cont = model.generate(
input_ids,
images=video,
    modalities=["video"],
do_sample=False,
temperature=0,
max_new_tokens=1024,
)
text_outputs = tokenizer.batch_decode(cont, skip_special_tokens=True)[0].strip()
print(repr(text_outputs))
```
---
|
fatbeo/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-majestic_slithering_pigeon
|
fatbeo
| 2025-05-28T14:38:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am majestic slithering pigeon",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-08T02:09:43Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-majestic_slithering_pigeon
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am majestic slithering pigeon
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-majestic_slithering_pigeon
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="fatbeo/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-majestic_slithering_pigeon", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
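For reference, a minimal GRPO fine-tuning sketch following TRL's documented quickstart is shown below; the dataset and the toy length-based reward are illustrative stand-ins, not the actual rl-swarm setup:

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder dataset

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 50 characters.
    return [-abs(50 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="qwen2.5-0.5b-grpo"),
    train_dataset=dataset,
)
trainer.train()
```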
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Diamantis99/EiTKiw6
|
Diamantis99
| 2025-05-28T12:17:55Z | 0 | 0 |
segmentation-models-pytorch
|
[
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
] |
image-segmentation
| 2025-05-28T12:17:27Z |
---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# FPN Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
## Model init parameters
```python
model_init_params = {
"encoder_name": "resnext101_32x8d",
"encoder_depth": 5,
"encoder_weights": "imagenet",
"decoder_pyramid_channels": 256,
"decoder_segmentation_channels": 128,
"decoder_merge_policy": "add",
"decoder_dropout": 0.2,
"decoder_interpolation": "nearest",
"in_channels": 3,
"classes": 1,
"activation": None,
"upsampling": 4,
"aux_params": None
}
```
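For a quick smoke test, the same parameters can instantiate the architecture directly. This is a sketch assuming a recent `segmentation_models_pytorch` release (the `decoder_interpolation` key requires a version that supports it):

```python
import torch
import segmentation_models_pytorch as smp

model = smp.FPN(**model_init_params)  # downloads ImageNet encoder weights on first use
model.eval()

x = torch.randn(1, 3, 512, 512)  # dummy RGB batch; spatial dims must be divisible by 32
with torch.no_grad():
    logits = model(x)            # [1, 1, 512, 512] raw logits for the single class
mask = logits.sigmoid() > 0.5    # binary segmentation mask
print(mask.shape)
```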
## Model metrics
```json
[
{
"test_per_image_iou": 0.8055617213249207,
"test_dataset_iou": 0.8596717715263367
}
]
```
## Dataset
Dataset name: VisionPipe
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin)
|
Diamantis99/MwtPlv5
|
Diamantis99
| 2025-05-28T12:08:43Z | 0 | 0 |
segmentation-models-pytorch
|
[
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
] |
image-segmentation
| 2025-05-28T12:08:25Z |
---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# FPN Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
## Model init parameters
```python
model_init_params = {
"encoder_name": "resnet152",
"encoder_depth": 5,
"encoder_weights": "imagenet",
"decoder_pyramid_channels": 256,
"decoder_segmentation_channels": 128,
"decoder_merge_policy": "add",
"decoder_dropout": 0.2,
"decoder_interpolation": "nearest",
"in_channels": 3,
"classes": 1,
"activation": None,
"upsampling": 4,
"aux_params": None
}
```
## Model metrics
```json
[
{
"test_per_image_iou": 0.8142395615577698,
"test_dataset_iou": 0.8605000972747803
}
]
```
## Dataset
Dataset name: VisionPipe
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin)
|
MRIII0917/SmolVLM2-2.2B-Instruct-video-feedback
|
MRIII0917
| 2025-05-28T10:02:38Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"smolvlm",
"image-text-to-text",
"generated_from_trainer",
"base_model:HuggingFaceTB/SmolVLM2-2.2B-Instruct",
"base_model:finetune:HuggingFaceTB/SmolVLM2-2.2B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-05-28T09:54:37Z |
---
library_name: transformers
license: apache-2.0
base_model: HuggingFaceTB/SmolVLM2-2.2B-Instruct
tags:
- generated_from_trainer
model-index:
- name: SmolVLM2-2.2B-Instruct-video-feedback
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SmolVLM2-2.2B-Instruct-video-feedback
This model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-2.2B-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-2.2B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
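As an illustrative sketch (not from the card): the base SmolVLM2 family is typically served through the `image-text-to-text` pipeline, so this fine-tune can presumably be loaded the same way. The image URL and prompt are placeholders:

```python
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="MRIII0917/SmolVLM2-2.2B-Instruct-video-feedback")
messages = [{"role": "user", "content": [
    {"type": "image", "url": "https://example.com/frame.png"},  # placeholder image
    {"type": "text", "text": "Give feedback on this video frame."},
]}]
print(pipe(text=messages, max_new_tokens=64))
```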
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.50.0.dev0
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
vanhai123/phobert-vi-comment-4class
|
vanhai123
| 2025-05-28T09:45:39Z | 0 | 2 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"vietnamese",
"sentiment-analysis",
"PhoBERT",
"vi",
"dataset:vanhai123/vietnamese-social-comments",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-05-28T09:22:19Z |
---
language: vi
tags:
- vietnamese
- text-classification
- sentiment-analysis
- PhoBERT
- transformers
license: mit
datasets:
- vanhai123/vietnamese-social-comments
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT Vietnamese Comment Classifier (4-class)
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Vietnamese Social Comments
type: vanhai123/vietnamese-social-comments
metrics:
- type: accuracy
value: 0.86
- type: f1
name: f1_macro
value: 0.83
---
# 📄 PhoBERT Vietnamese Comment Classifier (4-class)
This model classifies Vietnamese comments into 4 sentiment labels, built on `vinai/phobert-base`.
## Classification labels
* `positive` – positive
* `negative` – negative
* `neutral` – neutral
* `toxic` – inflammatory or offensive
## 🧠 Base model
* **Base model**: [`vinai/phobert-base`](https://huggingface.co/vinai/phobert-base)
* **Fine-tuned** on the `vanhai123/vietnamese-social-comments` dataset of 4,896 comments collected from TikTok, Facebook, and YouTube.
## 🧪 Evaluation results
* Accuracy: **86%**
* Macro F1-score: **83%**
## 💻 Usage
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="vanhai123/phobert-vi-comment-4class")
classifier("Video này thật sự rất bổ ích và thú vị!")  # "This video is really useful and interesting!"
```
## Dataset
* [Vietnamese Social Comments dataset](https://huggingface.co/datasets/vanhai123/vietnamese-social-comments)
## 👤 Author
* Hà Văn Hải – [vanhai11203@gmail.com](mailto:vanhai11203@gmail.com)
* Hugging Face: [vanhai123](https://huggingface.co/vanhai123)
|
bamec66557/Qwen3-14B-example
|
bamec66557
| 2025-05-28T09:31:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:Qwen/Qwen3-14B",
"base_model:merge:Qwen/Qwen3-14B",
"base_model:bamec66557/Qwen3-14B-QueWhen",
"base_model:merge:bamec66557/Qwen3-14B-QueWhen",
"base_model:mrm8488/Qwen3-14B-ft-limo",
"base_model:merge:mrm8488/Qwen3-14B-ft-limo",
"base_model:soob3123/GrayLine-Qwen3-14B",
"base_model:merge:soob3123/GrayLine-Qwen3-14B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-28T09:11:59Z |
---
base_model:
- bamec66557/Qwen3-14B-QueWhen
- mrm8488/Qwen3-14B-ft-limo
- soob3123/GrayLine-Qwen3-14B
- Qwen/Qwen3-14B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B) as a base.
### Models Merged
The following models were included in the merge:
* [bamec66557/Qwen3-14B-QueWhen](https://huggingface.co/bamec66557/Qwen3-14B-QueWhen)
* [mrm8488/Qwen3-14B-ft-limo](https://huggingface.co/mrm8488/Qwen3-14B-ft-limo)
* [soob3123/GrayLine-Qwen3-14B](https://huggingface.co/soob3123/GrayLine-Qwen3-14B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mrm8488/Qwen3-14B-ft-limo
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: bamec66557/Qwen3-14B-QueWhen
parameters:
density: 0.5
weight: [0, 0.3, 0.7, 1] # weight gradient
- model: soob3123/GrayLine-Qwen3-14B
parameters:
density: 0.33
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: ties
base_model: Qwen/Qwen3-14B
parameters:
normalize: true
int8_mask: true
dtype: float16
```
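To reproduce the merge from this config, mergekit exposes a CLI (`mergekit-yaml config.yaml ./merged`) and a Python API. Below is a sketch using the Python API as documented in the mergekit README; the config path and output directory are placeholders:

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./merged",  # output directory (placeholder)
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```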
|
xzxiong/my_model
|
xzxiong
| 2025-05-28T08:27:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-28T08:27:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sameerraza/gemma-7b-it-lora
|
sameerraza
| 2025-05-28T07:17:51Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-7b-it",
"base_model:adapter:google/gemma-7b-it",
"region:us"
] | null | 2025-05-28T07:15:29Z |
---
base_model: google/gemma-7b-it
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
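Since this repository is a PEFT (LoRA) adapter for `google/gemma-7b-it`, a typical loading sketch would look like the following; the adapter repo id is taken from this card, and the generation settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = PeftModel.from_pretrained(base, "sameerraza/gemma-7b-it-lora")

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```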
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
Cloudmaster/Llama-3.2-3B-torchao-final-wattn
|
Cloudmaster
| 2025-05-28T06:41:58Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"torchao",
"region:us"
] |
text-generation
| 2025-05-27T06:52:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
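Given the `torchao` tag on this repository, the checkpoint should load through a standard `from_pretrained` call once `torchao` is installed; a sketch under that assumption:

```python
# pip install torchao  # required to deserialize torchao-quantized weights
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Cloudmaster/Llama-3.2-3B-torchao-final-wattn"
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)

inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}], return_tensors="pt", add_generation_prompt=True
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=32)[0]))
```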
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tanspring/cus_fb87c86b-e006-4adf-905f-3ecadcfc30e6
|
tanspring
| 2025-05-28T05:33:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:jingyeom/seal3.1.6n_7b",
"base_model:finetune:jingyeom/seal3.1.6n_7b",
"endpoints_compatible",
"region:us"
] | null | 2025-05-28T05:33:02Z |
---
base_model: jingyeom/seal3.1.6n_7b
library_name: transformers
model_name: cus_fb87c86b-e006-4adf-905f-3ecadcfc30e6
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for cus_fb87c86b-e006-4adf-905f-3ecadcfc30e6
This model is a fine-tuned version of [jingyeom/seal3.1.6n_7b](https://huggingface.co/jingyeom/seal3.1.6n_7b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="tanspring/cus_fb87c86b-e006-4adf-905f-3ecadcfc30e6", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/tanngospring/SN56_Finetuning/runs/mb4p3h5u)
This model was trained with SFT.
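A minimal SFT sketch with TRL for reference; the dataset and output directory are hypothetical placeholders, not the actual training setup:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset
trainer = SFTTrainer(
    model="jingyeom/seal3.1.6n_7b",
    train_dataset=dataset,
    args=SFTConfig(output_dir="cus-sft-output"),  # placeholder output dir
)
trainer.train()
```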
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
MMReasoning/gemma_models_raft_rw0
|
MMReasoning
| 2025-05-28T05:09:25Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-05-28T05:05:36Z |
---
license: apache-2.0
---
|
hnv2520/excavator_Gemma3_12B_4bit_3e
|
hnv2520
| 2025-05-28T04:42:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"base_model:quantized:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
image-text-to-text
| 2025-05-28T04:38:10Z |
---
base_model: unsloth/gemma-3-12b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** hnv2520
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-12b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
llanguagemtrainer/qwen2.5_vl_instruct_ft
|
llanguagemtrainer
| 2025-05-28T04:25:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mllama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-28T04:25:34Z |
---
base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** llanguagemtrainer
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tea9873/Hands-on-Qwen3-4B
|
tea9873
| 2025-05-28T02:56:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-4B-Base",
"base_model:finetune:unsloth/Qwen3-4B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-28T02:56:31Z |
---
base_model: unsloth/Qwen3-4B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** tea9873
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
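A loading sketch with Unsloth for reference; the sequence length and 4-bit flag are illustrative choices, not from the card:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="tea9873/Hands-on-Qwen3-4B",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to optimized inference mode
```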
|
bobby97/step3_4e100775-be21-40ab-905e-95df5d44e3d7
|
bobby97
| 2025-05-27T23:40:21Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-Fill-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Fill-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-05-27T23:38:50Z |
---
base_model: black-forest-labs/FLUX.1-Fill-dev
library_name: diffusers
license: other
instance_prompt: A close-up of a dried leaf with a textured, dark surface. Fine lines
and subtle veins are visible, highlighting the natural details and giving the leaf
a delicate appearance.
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux-Fill-based inpainting model
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
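Since the snippet above is still a TODO, here is a hedged sketch of how a Flux-Fill LoRA of this kind is typically run with diffusers; the file paths and sampler settings are placeholders, and the prompt reuses this card's instance prompt:

```python
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("bobby97/step3_4e100775-be21-40ab-905e-95df5d44e3d7")

image = load_image("input.png")  # image to inpaint (placeholder path)
mask = load_image("mask.png")    # white pixels mark the region to repaint
result = pipe(
    prompt="A close-up of a dried leaf with a textured, dark surface.",
    image=image,
    mask_image=mask,
    num_inference_steps=30,
    guidance_scale=30.0,
).images[0]
result.save("output.png")
```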
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
matthewchung74/tst_stocks
|
matthewchung74
| 2025-05-27T22:23:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"patchtst",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-26T23:12:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
paudbatlle/ppo-Huggy
|
paudbatlle
| 2025-05-27T18:28:54Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2025-05-27T18:28:42Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: paudbatlle/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Jsevisal/balanced-augmented-ft-bert-large-gest-pred-seqeval-partialmatch
|
Jsevisal
| 2025-05-27T16:51:53Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:Jsevisal/balanced_augmented_dataset",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-19T10:10:00Z |
---
license: other
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: balanced-augmented-bert-gest-pred
results: []
datasets:
- Jsevisal/balanced_augmented_dataset
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# balanced-augmented-bert-gest-pred
This model is a fine-tuned version of [bert-large-cased-finetuned-conll03-english](https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english) on the Jsevisal/balanced_augmented_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8998
- F1: 0.8171
- Accuracy: 0.7911
## Model description
More information needed
## Intended uses & limitations
More information needed
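A minimal usage sketch with the standard `transformers` pipeline (the example sentence is illustrative; the gesture label names come from the dataset's tag set):

```python
from transformers import pipeline

# load the fine-tuned gesture-prediction tagger from the Hub
tagger = pipeline(
    "token-classification",
    model="Jsevisal/balanced-augmented-ft-bert-large-gest-pred-seqeval-partialmatch",
    aggregation_strategy="simple",  # merge sub-word pieces into word-level tags
)

for entity in tagger("Wave your hand and point at the door."):
    print(entity["word"], entity["entity_group"], round(entity["score"], 3))
```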
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
### LICENSE
Copyright (c) 2014, Universidad Carlos III de Madrid. All rights reserved.
This software is the property of Universidad Carlos III de Madrid, Social Robots research group. Universidad Carlos III de Madrid is the exclusive holder of the intellectual property rights to this software. Any improper or unauthorized use is prohibited, including, by way of example but not limitation, the reproduction, fixation, distribution, public communication, reverse engineering and/or transformation of said software, in whole or in part; whoever makes improper or unauthorized use is also liable for any legal consequences that may arise from their actions.
|
Link-othoi-1-13-video/18.New.Video.othoi.1.13.video.link.othoiiii.mms.video.othoiiii.video.link
|
Link-othoi-1-13-video
| 2025-05-27T11:19:16Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-05-27T11:19:09Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
phospho-app/nonosax-gr00t-example_dataset_5-3368j
|
phospho-app
| 2025-05-27T09:45:47Z | 0 | 0 | null |
[
"safetensors",
"gr00t_n1",
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-05-27T09:10:37Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful; try it out on your robot!
## Training parameters:
- **Dataset**: [nonosax/example_dataset_5](https://huggingface.co/datasets/nonosax/example_dataset_5)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 27
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
amelfr/finetuning-tweet-sentiment-model
|
amelfr
| 2025-05-27T09:39:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-05-27T09:01:12Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: finetuning-tweet-sentiment-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-tweet-sentiment-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
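A minimal usage sketch, assuming the checkpoint works with the standard `transformers` text-classification pipeline (the example tweet and label names are illustrative):

```python
from transformers import pipeline

# load the fine-tuned DistilBERT sentiment classifier from the Hub
classifier = pipeline(
    "text-classification",
    model="amelfr/finetuning-tweet-sentiment-model",
)

print(classifier("I love this new update!"))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}] -- label names depend on the training config
```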
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1
- Datasets 3.3.2
- Tokenizers 0.21.0
|
Keshav022/deepseek-r1-final-arc-expert
|
Keshav022
| 2025-05-26T23:35:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-26T23:35:09Z |
---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Keshav022
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
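A minimal loading sketch, assuming this repo holds a full Unsloth-compatible checkpoint (if it only contains LoRA adapters, load the base model first and attach them); the prompt is illustrative:

```python
from unsloth import FastLanguageModel

# load the fine-tuned checkpoint in 4-bit, matching the base model's quantization
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Keshav022/deepseek-r1-final-arc-expert",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

inputs = tokenizer("Explain the ARC task format:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```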
|
mlfoundations-dev/meta_chat_reasoning_25_75_system
|
mlfoundations-dev
| 2025-05-26T22:47:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-26T15:53:41Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: meta_chat_reasoning_25_75_system
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meta_chat_reasoning_25_75_system
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/meta_chat_reasoning_25_75_system dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 256
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.1.0
- Tokenizers 0.20.3
|
golf2248/sn11-v5-14
|
golf2248
| 2025-05-26T12:20:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-26T12:20:20Z |
---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/transformers@v4.49.0-Gemma-3
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use.
|
phiwi/Meta-Llama-3.1-8B-Instruct-bnb-4bit_regulatome_tf_lora
|
phiwi
| 2025-05-26T09:35:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-19T13:13:12Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** phiwi
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
wilsonafolabi/yorubanumerals-expert-system
|
wilsonafolabi
| 2025-05-25T17:59:51Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-05-25T17:59:51Z |
---
license: apache-2.0
---
|
Raydennz/Voice_Cloner
|
Raydennz
| 2025-05-25T17:43:44Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-05-25T17:38:17Z |
## OuteTTS
🌐 [Website](https://www.outeai.com) | 🤗 [Hugging Face](https://huggingface.co/OuteAI) | 💬 [Discord](https://discord.gg/vyBM87kAmf) | 𝕏 [X (Twitter)](https://twitter.com/OuteAI) | 📰 [Blog](https://www.outeai.com/blog)
[](https://huggingface.co/OuteAI/Llama-OuteTTS-1.0-1B)
[](https://huggingface.co/OuteAI/OuteTTS-1.0-0.6B)
[](https://pypi.org/project/outetts/)
[](https://www.npmjs.com/package/outetts)
## Compatibility
OuteTTS supports the following backends:
| **Backend** | **Type** | **Installation** |
|-----------------------------|---------|----------------------------|
| [Llama.cpp Python Bindings](https://github.com/abetlen/llama-cpp-python) | Python | ✅ Installed by default |
| [Llama.cpp Server](https://github.com/ggml-org/llama.cpp/tree/master/tools/server) | Python | ✅ Installed by default |
| [Llama.cpp Server Async (Batched)](https://github.com/ggml-org/llama.cpp/tree/master/tools/server) | Python | ✅ Installed by default |
| [Hugging Face Transformers](https://github.com/huggingface/transformers) | Python | ✅ Installed by default |
| [ExLlamaV2 & ExLlamaV2 Async (Batched)](https://github.com/turboderp/exllamav2) | Python | ❌ Requires manual installation |
| [VLLM (Batched) **Experimental support**](https://github.com/vllm-project/vllm) | Python | ❌ Requires manual installation |
| [Transformers.js](https://github.com/huggingface/transformers.js) | JavaScript | NPM package |
| [Llama.cpp Directly](https://github.com/ggml-org/llama.cpp/tree/master/examples/tts) | C++ | External library |
### ⚡ **Batched RTF Benchmarks**
Tested with **NVIDIA L40S GPU**

## Installation
### OuteTTS Installation Guide
OuteTTS now installs the llama.cpp Python bindings by default. Therefore, you must specify the installation based on your hardware. For more detailed instructions on building llama.cpp, refer to the following resources: [llama.cpp Build](https://github.com/ggml-org/llama.cpp/blob/master/docs/build.md) and [llama.cpp Python](https://github.com/abetlen/llama-cpp-python?tab=readme-ov-file#supported-backends)
### Pip:
<details>
<summary>Transformers + llama.cpp CPU</summary>
```bash
pip install outetts --upgrade
```
</details>
<details>
<summary>Transformers + llama.cpp CUDA (NVIDIA GPUs)</summary>
For systems with NVIDIA GPUs and CUDA installed:
```bash
CMAKE_ARGS="-DGGML_CUDA=on" pip install outetts --upgrade
```
</details>
<details>
<summary>Transformers + llama.cpp ROCm/HIP (AMD GPUs)</summary>
For systems with AMD GPUs and ROCm installed (specify your `-DAMDGPU_TARGETS` as needed):
```bash
CMAKE_ARGS="-DGGML_HIPBLAS=on" pip install outetts --upgrade
```
</details>
<details>
<summary>Transformers + llama.cpp Vulkan (Cross-platform GPU)</summary>
For systems with Vulkan support:
```bash
CMAKE_ARGS="-DGGML_VULKAN=on" pip install outetts --upgrade
```
</details>
<details>
<summary>Transformers + llama.cpp Metal (Apple Silicon/Mac)</summary>
For macOS systems with Apple Silicon or compatible GPUs:
```bash
CMAKE_ARGS="-DGGML_METAL=on" pip install outetts --upgrade
```
</details>
## Usage
## 📚 Documentation
For a complete usage guide, refer to the interface documentation here:
[](https://github.com/edwko/OuteTTS/blob/main/docs/interface_usage.md)
### Basic Usage
> [!TIP]
> Currently, only **one default English voice** is available for testing.
>
> You can easily create your own speaker profiles in just a few lines by following this guide:
>
> 👉 [Creating Custom Speaker Profiles](https://github.com/edwko/OuteTTS/blob/main/docs/interface_usage.md#creating-custom-speaker-profiles)
```python
import outetts
# Initialize the interface
interface = outetts.Interface(
config=outetts.ModelConfig.auto_config(
model=outetts.Models.VERSION_1_0_SIZE_1B,
# For llama.cpp backend
backend=outetts.Backend.LLAMACPP,
quantization=outetts.LlamaCppQuantization.FP16
# For transformers backend
# backend=outetts.Backend.HF,
)
)
# Load the default speaker profile
speaker = interface.load_default_speaker("EN-FEMALE-1-NEUTRAL")
# Or create your own speaker profiles in seconds and reuse them instantly
# speaker = interface.create_speaker("path/to/audio.wav")
# interface.save_speaker(speaker, "speaker.json")
# speaker = interface.load_speaker("speaker.json")
# Generate speech
output = interface.generate(
config=outetts.GenerationConfig(
text="Hello, how are you doing?",
speaker=speaker,
)
)
# Save to file
output.save("output.wav")
```
## Usage Recommendations for OuteTTS version 1.0
> [!IMPORTANT]
> **Important Sampling Considerations**
>
> When using OuteTTS version 1.0, it is crucial to use the settings specified in the [Sampling Configuration](#sampling-configuration) section.
> The **repetition penalty implementation** is particularly important - this model requires penalization applied to a **64-token recent window**,
> rather than across the entire context window. Penalizing the entire context will cause the model to produce **broken or low-quality output**.
>
> To address this limitation, all necessary samplers and patches for all backends are set up automatically in the **outetts** library.
> If using a custom implementation, ensure you correctly implement these requirements.
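For reference, here is a minimal sketch of what a windowed repetition penalty can look like as a custom `transformers` logits processor (the class below is hypothetical; the outetts library ships its own patched samplers):

```python
import torch
from transformers import LogitsProcessor

class WindowedRepetitionPenalty(LogitsProcessor):
    """Hypothetical sketch: apply the repetition penalty only to the most
    recent `window` tokens instead of the whole context."""

    def __init__(self, penalty: float = 1.1, window: int = 64):
        self.penalty = penalty
        self.window = window

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        recent = input_ids[:, -self.window:]      # only the last 64 tokens are penalized
        gathered = torch.gather(scores, 1, recent)
        # standard CTRL-style penalty: shrink positive logits, amplify negative ones
        penalized = torch.where(gathered < 0, gathered * self.penalty, gathered / self.penalty)
        return scores.scatter(1, recent, penalized)
```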
### Speaker Reference
The model is designed to be used with a speaker reference. Without one, it generates random vocal characteristics, often leading to lower-quality outputs.
The model inherits the referenced speaker's emotion, style, and accent.
Therefore, when generating speech in other languages with the same speaker, you may observe the model retaining the original accent.
For example, if you use a Japanese speaker and continue speech in English, the model may tend to use a Japanese accent.
### Multilingual Application
It is recommended to create a speaker profile in the language you intend to use. This helps achieve the best results in that specific language, including tone, accent, and linguistic features.
While the model supports cross-lingual speech, it still relies on the reference speaker. If the speaker has a distinct accent—such as British English—other languages may carry that accent as well.
### Optimal Audio Length
- **Best Performance:** Generate audio around **42 seconds** in a single run (approximately 8,192 tokens). It is recommended not to push close to the limits of this window when generating; usually, the best results come from staying under 7,000 tokens.
- **Context Reduction with Speaker Reference:** If the speaker reference is 10 seconds long, the effective context is reduced to approximately 32 seconds.
### Temperature Setting Recommendations
Testing shows that a temperature of **0.4** is an ideal starting point for accuracy (with the sampling settings below). However, some voice references may benefit from higher temperatures for enhanced expressiveness or slightly lower temperatures for more precise voice replication.
### Verifying Speaker Encoding
If the cloned voice quality is subpar, check the encoded speaker sample.
```python
interface.decode_and_save_speaker(speaker=your_speaker, path="speaker.wav")
```
The DAC audio reconstruction model is lossy, and samples with clipping, excessive loudness, or unusual vocal features may introduce encoding issues that impact output quality.
### Sampling Configuration
For optimal results with this TTS model, use the following sampling settings.
| Parameter | Value |
|-------------------|----------|
| Temperature | 0.4 |
| Repetition Penalty| 1.1 |
| **Repetition Range** | **64** |
| Top-k | 40 |
| Top-p | 0.9 |
| Min-p | 0.05 |
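For custom `transformers` setups, a hedged sketch of passing the table's values to `generate` follows. Note that a raw prompt without outetts' formatting will not produce usable audio tokens, the stock `repetition_penalty` spans the full context rather than a 64-token window, and `min_p` requires a recent transformers release:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("OuteAI/Llama-OuteTTS-1.0-1B")
model = AutoModelForCausalLM.from_pretrained(
    "OuteAI/Llama-OuteTTS-1.0-1B", torch_dtype=torch.bfloat16, device_map="auto"
)

# NOTE: real prompts must be built by the outetts interface; this raw string is a placeholder
inputs = tokenizer("Hello, how are you doing?", return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.4,
    repetition_penalty=1.1,  # outetts patches this to a 64-token window
    top_k=40,
    top_p=0.9,
    min_p=0.05,
    max_new_tokens=512,
)
```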
|
kavinda123321/speecht5_finetuned_emirhan_tr
|
kavinda123321
| 2025-05-25T07:05:03Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2025-05-25T06:41:42Z |
---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_emirhan_tr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_emirhan_tr
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3217
## Model description
More information needed
## Intended uses & limitations
More information needed
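A minimal inference sketch, assuming this checkpoint follows the standard SpeechT5 TTS usage (the CMU Arctic x-vector and the example sentence are illustrative stand-ins):

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import pipeline

synthesiser = pipeline("text-to-speech", model="kavinda123321/speecht5_finetuned_emirhan_tr")

# SpeechT5 requires a speaker x-vector; this public embedding set is a common stand-in
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

speech = synthesiser(
    "Merhaba, bu bir deneme cümlesidir.",
    forward_params={"speaker_embeddings": speaker_embedding},
)
sf.write("speech.wav", speech["audio"], samplerate=speech["sampling_rate"])
```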
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5172 | 0.3410 | 100 | 0.4187 |
| 0.4227 | 0.6820 | 200 | 0.3789 |
| 0.3648 | 1.0205 | 300 | 0.3383 |
| 0.3537 | 1.3615 | 400 | 0.3284 |
| 0.3487 | 1.7025 | 500 | 0.3217 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
phospho-app/omourier-ACT-Lego_bleu-f30zm
|
phospho-app
| 2025-05-24T08:47:11Z | 0 | 0 | null |
[
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-05-24T08:05:29Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful; try it out on your robot!
## Training parameters:
- **Dataset**: [omourier/Lego_bleu](https://huggingface.co/datasets/omourier/Lego_bleu)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 40
- **Training steps**: 8000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|