modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-08-19 12:29:15) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 513 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-08-19 12:27:50) | card (string, 11–1.01M chars)
---|---|---|---|---|---|---|---|---|---|
yihong1120/Construction-Hazard-Detection-YOLO11
|
yihong1120
| 2025-08-18T14:11:02Z | 0 | 0 |
ultralytics
|
[
"ultralytics",
"onnx",
"object-detection",
"yolo11",
"pytorch",
"construction-safety",
"hazard-detection",
"en",
"dataset:custom",
"license:agpl-3.0",
"region:us"
] |
object-detection
| 2025-08-18T04:33:20Z |
---
license: agpl-3.0
library_name: ultralytics
language:
- en
tags:
- object-detection
- yolo11
- ultralytics
- pytorch
- onnx
- construction-safety
- hazard-detection
datasets:
- custom
---
# Construction-Hazard-Detection-YOLO11
YOLO11-based models for construction-site hazard detection. These models detect:
- Workers without helmets and/or safety vests
- Workers near machinery or vehicles
- Workers in restricted areas (derived from safety cone clustering)
- Machinery/vehicles near utility poles
This repository provides ready-to-use weights in PyTorch (.pt) and ONNX (.onnx) formats, a demo image, and the class label mapping for easy integration.
👉 For the full end-to-end system (APIs, web UI, training, evaluation, data tools), see the main project: https://github.com/yihong1120/Construction-Hazard-Detection

## Labels
Index-to-name mapping used across all provided models (also in `class_names.txt`):
```
0: Hardhat
1: Mask
2: NO-Hardhat
3: NO-Mask
4: NO-Safety Vest
5: Person
6: Safety Cone
7: Safety Vest
8: Machinery
9: Utility Pole
10: Vehicle
```
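For integration, the label mapping can be loaded straight from `class_names.txt`. A minimal sketch, assuming the file contains exactly the `index: name` lines shown above:
```python
def load_class_names(path="class_names.txt"):
    """Parse "index: name" lines into an id -> name dict."""
    names = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                idx, name = line.split(":", 1)
                names[int(idx)] = name.strip()
    return names

class_names = load_class_names()
print(class_names[6])  # "Safety Cone"
```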
## Available models
- PyTorch (Ultralytics):
- `models/pt/best_yolo11n.pt`
- `models/pt/best_yolo11s.pt`
- `models/pt/best_yolo11m.pt`
- `models/pt/best_yolo11l.pt`
- `models/pt/best_yolo11x.pt`
- ONNX:
- `models/onnx/best_yolo11n.onnx`
- `models/onnx/best_yolo11s.onnx`
- `models/onnx/best_yolo11m.onnx`
- `models/onnx/best_yolo11l.onnx`
- `models/onnx/best_yolo11x.onnx`
Large binaries are tracked with Git LFS.
## Quick start
### A) Ultralytics (PyTorch)
```python
from ultralytics import YOLO
# Load a model (choose the variant that fits your needs)
model = YOLO("models/pt/best_yolo11x.pt")
# Inference on the demo image
results = model("data/examples/demo.jpg", imgsz=640, conf=0.25)
# Parse results (first image)
res = results[0]
boxes = res.boxes # xyxy, confidence, class
for xyxy, conf, cls_id in zip(boxes.xyxy.tolist(), boxes.conf.tolist(), boxes.cls.tolist()):
print(xyxy, conf, int(cls_id))
```
CLI option:
```bash
yolo predict model=models/pt/best_yolo11x.pt source=data/examples/demo.jpg imgsz=640 conf=0.25
```
### B) ONNX Runtime
```python
import cv2
import numpy as np
import onnxruntime as ort
# Load and preprocess image to 640x640
img = cv2.imread("data/examples/demo.jpg")
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
size = 640
inp = cv2.resize(img_rgb, (size, size)).astype(np.float32) / 255.0
inp = np.transpose(inp, (2, 0, 1))[None, ...] # 1x3x640x640
# Run ONNX model
session = ort.InferenceSession("models/onnx/best_yolo11x.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: inp})
pred = outputs[0]  # Raw predictions; typically (1, N, num_outputs)
print(pred.shape)
```
Post-processing (NMS, scaling back to original image) follows standard Ultralytics/YOLO routines.
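As a starting point, here is a minimal post-processing sketch. It assumes the ONNX export emits raw predictions of shape `(1, 4 + num_classes, N)` (or its transpose) with `xywh` boxes in 640x640 input coordinates; because the preprocessing above uses a plain resize (no letterboxing), boxes map back to the original image with a simple per-axis scale. Verify the output layout of your exported model before relying on it:
```python
import cv2
import numpy as np

def postprocess(pred, orig_shape, size=640, conf_thres=0.25, iou_thres=0.45):
    """Decode raw YOLO-style ONNX output and apply NMS (sketch only)."""
    out = np.squeeze(pred)
    if out.shape[0] < out.shape[1]:  # (4 + nc, N) -> (N, 4 + nc)
        out = out.T
    boxes_xywh = out[:, :4]
    class_scores = out[:, 4:]
    cls_ids = class_scores.argmax(axis=1)
    confs = class_scores.max(axis=1)
    keep = confs >= conf_thres
    boxes_xywh, confs, cls_ids = boxes_xywh[keep], confs[keep], cls_ids[keep]
    # xywh (box center) -> xyxy
    xy, wh = boxes_xywh[:, :2], boxes_xywh[:, 2:4]
    boxes_xyxy = np.concatenate([xy - wh / 2, xy + wh / 2], axis=1)
    # Scale back to the original image (plain resize was used, no letterbox)
    h0, w0 = orig_shape[:2]
    boxes_xyxy[:, [0, 2]] *= w0 / size
    boxes_xyxy[:, [1, 3]] *= h0 / size
    # NMS via OpenCV, which expects (x, y, w, h) boxes
    xywh_tl = np.concatenate(
        [boxes_xyxy[:, :2], boxes_xyxy[:, 2:] - boxes_xyxy[:, :2]], axis=1)
    idx = cv2.dnn.NMSBoxes(xywh_tl.tolist(), confs.tolist(), conf_thres, iou_thres)
    idx = np.array(idx, dtype=int).reshape(-1)
    return boxes_xyxy[idx], confs[idx], cls_ids[idx]

# e.g.: boxes, confs, cls_ids = postprocess(pred, img.shape)
```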
## File structure
```
.
├─ README.md
├─ LICENSE
├─ models/
│ ├─ pt/
│ │ ├─ best_yolo11n.pt
│ │ ├─ best_yolo11s.pt
│ │ ├─ best_yolo11m.pt
│ │ ├─ best_yolo11l.pt
│ │ └─ best_yolo11x.pt
│ └─ onnx/
│ ├─ best_yolo11n.onnx
│ ├─ best_yolo11s.onnx
│ ├─ best_yolo11m.onnx
│ ├─ best_yolo11l.onnx
│ └─ best_yolo11x.onnx
├─ data/
│ └─ examples/
│ └─ demo.jpg
└─ class_names.txt
```
## Intended use and limitations
- Intended for research and prototyping in construction safety monitoring.
- Performance depends on camera viewpoint, lighting, occlusion, and domain gap.
- For production, evaluate thoroughly on your target environment and consider rule-based filters and tracking (see the sketch below).
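As one illustration, a toy proximity rule using the label indices above and `xyxy` boxes (the margin and geometry here are placeholders, not the main project's logic):
```python
PERSON, MACHINERY, VEHICLE = 5, 8, 10  # indices from class_names.txt

def near(person_box, machine_box, margin=50):
    # Expand the machinery/vehicle box by `margin` pixels and test
    # whether the person's box center falls inside it.
    px = (person_box[0] + person_box[2]) / 2
    py = (person_box[1] + person_box[3]) / 2
    x1, y1, x2, y2 = machine_box
    return (x1 - margin) <= px <= (x2 + margin) and (y1 - margin) <= py <= (y2 + margin)

def flag_workers_near_machinery(boxes, cls_ids):
    """Return person boxes that fall near any machinery or vehicle box."""
    persons = [b for b, c in zip(boxes, cls_ids) if c == PERSON]
    machines = [b for b, c in zip(boxes, cls_ids) if c in (MACHINERY, VEHICLE)]
    return [p for p in persons if any(near(p, m) for m in machines)]
```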
## Acknowledgements and sources
- Main project and docs: https://github.com/yihong1120/Construction-Hazard-Detection
- Dataset concept inspired by Roboflow construction safety datasets with extended annotations.
- Roboflow dataset: https://app.roboflow.com/object-detection-qn97p/construction-hazard-detection
- Models trained/exported using Ultralytics YOLO.
## License
This repository is distributed under the AGPL-3.0 license. See `LICENSE` for details and ensure compliance, especially for networked deployments.
|
Alonc/device_to_cve_tokenizer
|
Alonc
| 2025-08-18T14:08:52Z | 0 | 0 |
transformers
|
[
"transformers",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T14:08:51Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Alonc/device_to_cve_model
|
Alonc
| 2025-08-18T14:08:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T14:08:47Z |
---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Alonc
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-14b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mohda/blockassist-bc-regal_fierce_hummingbird_1755526002
|
mohda
| 2025-08-18T14:07:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal fierce hummingbird",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T14:07:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal fierce hummingbird
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
JWHaHa/Qwen2.5-7B-Instruct-SCGF-GGUF
|
JWHaHa
| 2025-08-18T14:06:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T14:06:23Z |
---
base_model: unsloth/qwen2.5-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** JWHaHa
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-7b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bankimds/blockassist-bc-padded_scented_otter_1755522821
|
bankimds
| 2025-08-18T14:06:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"padded scented otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T14:06:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- padded scented otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
abcorrea/p2-v5
|
abcorrea
| 2025-08-18T14:03:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:abcorrea/p2-v4",
"base_model:finetune:abcorrea/p2-v4",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T13:31:38Z |
---
base_model: abcorrea/p2-v4
library_name: transformers
model_name: p2-v5
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for p2-v5
This model is a fine-tuned version of [abcorrea/p2-v4](https://huggingface.co/abcorrea/p2-v4).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="abcorrea/p2-v5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
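For reference, a minimal TRL SFT setup looks like the sketch below; the dataset, output directory, and defaults are illustrative placeholders, not the actual p2-v5 recipe:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Illustrative only: swap in your own dataset and hyperparameters.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="abcorrea/p2-v4",  # the base model p2-v5 was fine-tuned from
    train_dataset=dataset,
    args=SFTConfig(output_dir="p2-v5"),
)
trainer.train()
```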
### Framework versions
- TRL: 0.19.1
- Transformers: 4.52.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
drush8/Qwen3-1.7B-INT4
|
drush8
| 2025-08-18T14:02:21Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"torchao",
"region:us"
] |
text-generation
| 2025-08-18T14:02:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zehuajun/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated-Q4_K_M-GGUF
|
zehuajun
| 2025-08-18T14:01:37Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated",
"base_model:quantized:huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T14:00:22Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507/blob/main/LICENSE
language:
- en
base_model: huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated
pipeline_tag: text-generation
library_name: transformers
tags:
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# zehuajun/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated-Q4_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated`](https://huggingface.co/huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo zehuajun/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated-Q4_K_M-GGUF --hf-file huihui-qwen3-30b-a3b-thinking-2507-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo zehuajun/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated-Q4_K_M-GGUF --hf-file huihui-qwen3-30b-a3b-thinking-2507-abliterated-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo zehuajun/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated-Q4_K_M-GGUF --hf-file huihui-qwen3-30b-a3b-thinking-2507-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo zehuajun/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated-Q4_K_M-GGUF --hf-file huihui-qwen3-30b-a3b-thinking-2507-abliterated-q4_k_m.gguf -c 2048
```
|
tencent/Hunyuan3D-2.1
|
tencent
| 2025-08-18T14:01:08Z | 61,449 | 624 |
hunyuan3d-2
|
[
"hunyuan3d-2",
"diffusers",
"safetensors",
"image-to-3d",
"text-to-3d",
"en",
"zh",
"arxiv:2506.15442",
"arxiv:2501.12202",
"arxiv:2411.02293",
"license:other",
"region:us"
] |
image-to-3d
| 2025-06-13T16:10:02Z |
---
library_name: hunyuan3d-2
license: other
license_name: tencent-hunyuan-community
license_link: https://github.com/Tencent-Hunyuan/Hunyuan3D-2.1/blob/main/LICENSE
language:
- en
- zh
tags:
- image-to-3d
- text-to-3d
pipeline_tag: image-to-3d
extra_gated_eu_disallowed: true
---
<p align="center">
<img src="https://raw.githubusercontent.com/Tencent-Hunyuan/Hunyuan3D-2.1/refs/heads/main/assets/images/teaser.jpg">
</p>
<div align="center">
<a href=https://3d.hunyuan.tencent.com target="_blank"><img src=https://img.shields.io/badge/Hunyuan3D-black.svg?logo=homepage height=22px></a>
<a href=https://huggingface.co/spaces/tencent/Hunyuan3D-2.1 target="_blank"><img src=https://img.shields.io/badge/%F0%9F%A4%97%20Demo-276cb4.svg height=22px></a>
<a href=https://huggingface.co/tencent/Hunyuan3D-2.1 target="_blank"><img src=https://img.shields.io/badge/%F0%9F%A4%97%20Models-d96902.svg height=22px></a>
<a href=https://github.com/Tencent-Hunyuan/Hunyuan3D-2.1 target="_blank"><img src= https://img.shields.io/badge/Page-bb8a2e.svg?logo=github height=22px></a>
<a href=https://discord.gg/GuaWYwzKbX target="_blank"><img src= https://img.shields.io/badge/Discord-white.svg?logo=discord height=22px></a>
<a href=https://arxiv.org/abs/2506.15442 target="_blank"><img src=https://img.shields.io/badge/Report-b5212f.svg?logo=arxiv height=22px></a>
</div>
## 🔗 BibTeX
If you found this repository helpful, please cite our report:
```bibtex
@misc{hunyuan3d2025hunyuan3d,
title={Hunyuan3D 2.1: From Images to High-Fidelity 3D Assets with Production-Ready PBR Material},
author={Team Hunyuan3D and Shuhui Yang and Mingxin Yang and Yifei Feng and Xin Huang and Sheng Zhang and Zebin He and Di Luo and Haolin Liu and Yunfei Zhao and Qingxiang Lin and Zeqiang Lai and Xianghui Yang and Huiwen Shi and Zibo Zhao and Bowen Zhang and Hongyu Yan and Lifu Wang and Sicong Liu and Jihong Zhang and Meng Chen and Liang Dong and Yiwen Jia and Yulin Cai and Jiaao Yu and Yixuan Tang and Dongyuan Guo and Junlin Yu and Hao Zhang and Zheng Ye and Peng He and Runzhou Wu and Shida Wei and Chao Zhang and Yonghao Tan and Yifu Sun and Lin Niu and Shirui Huang and Bojian Zheng and Shu Liu and Shilin Chen and Xiang Yuan and Xiaofeng Yang and Kai Liu and Jianchen Zhu and Peng Chen and Tian Liu and Di Wang and Yuhong Liu and Linus and Jie Jiang and Jingwei Huang and Chunchao Guo},
year={2025},
eprint={2506.15442},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
@misc{hunyuan3d22025tencent,
title={Hunyuan3D 2.0: Scaling Diffusion Models for High Resolution Textured 3D Assets Generation},
author={Tencent Hunyuan3D Team},
year={2025},
eprint={2501.12202},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
@misc{yang2024tencent,
title={Tencent Hunyuan3D-1.0: A Unified Framework for Text-to-3D and Image-to-3D Generation},
author={Tencent Hunyuan3D Team},
year={2024},
eprint={2411.02293},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Acknowledgements
We would like to thank the contributors to
the [TripoSG](https://github.com/VAST-AI-Research/TripoSG), [DINOv2](https://github.com/facebookresearch/dinov2), [Stable Diffusion](https://github.com/Stability-AI/stablediffusion), [FLUX](https://github.com/black-forest-labs/flux), [diffusers](https://github.com/huggingface/diffusers)
and [HuggingFace](https://huggingface.co) repositories, for their open research and exploration.
## Star History
<a href="https://star-history.com/#Tencent-Hunyuan/Hunyuan3D-2.1&Date">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=Tencent-Hunyuan/Hunyuan3D-2.1&type=Date&theme=dark" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=Tencent-Hunyuan/Hunyuan3D-2.1&type=Date" />
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=Tencent-Hunyuan/Hunyuan3D-2.1&type=Date" />
</picture>
</a>
|
tencent/Hunyuan3D-2mv
|
tencent
| 2025-08-18T14:00:26Z | 3,303 | 384 |
hunyuan3d-2
|
[
"hunyuan3d-2",
"image-to-3d",
"text-to-3d",
"en",
"zh",
"arxiv:2501.12202",
"arxiv:2411.02293",
"license:other",
"region:us"
] |
image-to-3d
| 2025-03-12T11:36:17Z |
---
library_name: hunyuan3d-2
license: other
license_name: tencent-hunyuan-community
license_link: https://huggingface.co/tencent/Hunyuan3D-2/blob/main/LICENSE.txt
language:
- en
- zh
tags:
- image-to-3d
- text-to-3d
pipeline_tag: image-to-3d
extra_gated_eu_disallowed: true
---
<p align="center">
<img src="https://huggingface.co/tencent/Hunyuan3D-2/resolve/main/assets/images/teaser.jpg">
</p>
<div align="center">
<a href=https://3d.hunyuan.tencent.com target="_blank"><img src=https://img.shields.io/badge/Hunyuan3D-black.svg?logo=homepage height=22px></a>
<a href=https://huggingface.co/spaces/tencent/Hunyuan3D-2mv target="_blank"><img src=https://img.shields.io/badge/%F0%9F%A4%97%20Demo-276cb4.svg height=22px></a>
<a href=https://huggingface.co/tencent/Hunyuan3D-2mv target="_blank"><img src=https://img.shields.io/badge/%F0%9F%A4%97%20Models-d96902.svg height=22px></a>
<a href=https://github.com/Tencent/Hunyuan3D-2 target="_blank"><img src= https://img.shields.io/badge/Github-bb8a2e.svg?logo=github height=22px></a>
<a href=https://discord.gg/GuaWYwzKbX target="_blank"><img src= https://img.shields.io/badge/Discord-white.svg?logo=discord height=22px></a>
<a href=https://github.com/Tencent/Hunyuan3D-2/blob/main/assets/report/Tencent_Hunyuan3D_2_0.pdf target="_blank"><img src=https://img.shields.io/badge/Report-b5212f.svg?logo=arxiv height=22px></a>
</div>
[//]: # ( <a href=# target="_blank"><img src=https://img.shields.io/badge/Report-b5212f.svg?logo=arxiv height=22px></a>)
[//]: # ( <a href=# target="_blank"><img src= https://img.shields.io/badge/Colab-8f2628.svg?logo=googlecolab height=22px></a>)
[//]: # ( <a href="#"><img alt="PyPI - Downloads" src="https://img.shields.io/pypi/v/mulankit?logo=pypi" height=22px></a>)
<br>
<p align="center">
“ Living out everyone’s imagination on creating and manipulating 3D assets.”
</p>
This repository contains the models of the paper [Hunyuan3D 2.0: Scaling Diffusion Models for High Resolution Textured 3D Assets Generation](https://huggingface.co/papers/2501.12202).
**Hunyuan3D-2mv** is finetuned from [Hunyuan3D-2](https://huggingface.co/tencent/Hunyuan3D-2) to support multiview controlled shape generation.
## 🤗 Get Started with Hunyuan3D 2mv
Here is a simple usage example:
```python
import torch  # needed for the seeded generator below

from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline

pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained(
    'tencent/Hunyuan3D-2mv',
    subfolder='hunyuan3d-dit-v2-mv',
    use_safetensors=True,
    device='cuda'
)
mesh = pipeline(
    image={
        "front": "your front view image.png",
        "left": "your left view image.png",
        "back": "your back view image.png"
    },
    num_inference_steps=30,
    octree_resolution=380,
    num_chunks=20000,
    generator=torch.manual_seed(12345),
    output_type='trimesh'
)[0]
```
For code and more details on how to use it, refer to the [Github repository](https://github.com/Tencent/Hunyuan3D-2).
## 🔗 BibTeX
If you found this repository helpful, please cite our report:
```bibtex
@misc{hunyuan3d22025tencent,
title={Hunyuan3D 2.0: Scaling Diffusion Models for High Resolution Textured 3D Assets Generation},
author={Tencent Hunyuan3D Team},
year={2025},
eprint={2501.12202},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
@misc{yang2024tencent,
title={Tencent Hunyuan3D-1.0: A Unified Framework for Text-to-3D and Image-to-3D Generation},
author={Tencent Hunyuan3D Team},
year={2024},
eprint={2411.02293},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Community Resources
Thanks to the contributions of community members, here are some great extensions of Hunyuan3D 2.0:
- [ComfyUI-Hunyuan3DWrapper](https://github.com/kijai/ComfyUI-Hunyuan3DWrapper)
- [Hunyuan3D-2-for-windows](https://github.com/sdbds/Hunyuan3D-2-for-windows)
- [📦 A bundle for running on Windows | 整合包](https://github.com/YanWenKun/Comfy3D-WinPortable/releases/tag/r8-hunyuan3d2)
## Acknowledgements
We would like to thank the contributors to
the [DINOv2](https://github.com/facebookresearch/dinov2), [Stable Diffusion](https://github.com/Stability-AI/stablediffusion), [FLUX](https://github.com/black-forest-labs/flux), [diffusers](https://github.com/huggingface/diffusers)
and [HuggingFace](https://huggingface.co) repositories, for their open research and exploration.
|
tencent/HunyuanWorld-1
|
tencent
| 2025-08-18T13:59:54Z | 18,350 | 555 |
diffusion-single-file
|
[
"diffusion-single-file",
"hunyuan3d",
"worldmodel",
"3d-aigc",
"3d-generation",
"3d",
"scene-generation",
"image-to-3d",
"en",
"zh",
"arxiv:2507.21809",
"license:other",
"region:us"
] |
image-to-3d
| 2025-07-21T03:37:45Z |
---
library_name: diffusion-single-file
license: other
license_name: tencent-hunyuanworld-1.0-community
license_link: https://github.com/Tencent-Hunyuan/HunyuanWorld-1.0/blob/main/LICENSE
language:
- en
- zh
tags:
- hunyuan3d
- worldmodel
- 3d-aigc
- 3d-generation
- 3d
- scene-generation
pipeline_tag: image-to-3d
extra_gated_eu_disallowed: true
---
<p align="center">
<img src="assets/teaser.png">
</p>
<div align="center">
<a href=https://3d.hunyuan.tencent.com/sceneTo3D target="_blank"><img src=https://img.shields.io/badge/Official%20Site-333399.svg?logo=homepage height=22px></a>
<a href=https://huggingface.co/tencent/HunyuanWorld-1 target="_blank"><img src=https://img.shields.io/badge/%F0%9F%A4%97%20Models-d96902.svg height=22px></a>
<a href=https://3d-models.hunyuan.tencent.com/world/ target="_blank"><img src= https://img.shields.io/badge/Page-bb8a2e.svg?logo=github height=22px></a>
<a href=https://arxiv.org/abs/2507.21809 target="_blank"><img src=https://img.shields.io/badge/Report-b5212f.svg?logo=arxiv height=22px></a>
<a href=https://discord.gg/dNBrdrGGMa target="_blank"><img src= https://img.shields.io/badge/Discord-white.svg?logo=discord height=22px></a>
<a href=https://x.com/TencentHunyuan target="_blank"><img src=https://img.shields.io/badge/Hunyuan-black.svg?logo=x height=22px></a>
<a href="#community-resources" target="_blank"><img src=https://img.shields.io/badge/Community-lavender.svg?logo=homeassistantcommunitystore height=22px></a>
</div>
[//]: # ( <a href=# target="_blank"><img src=https://img.shields.io/badge/Report-b5212f.svg?logo=arxiv height=22px></a>)
[//]: # ( <a href=# target="_blank"><img src= https://img.shields.io/badge/Colab-8f2628.svg?logo=googlecolab height=22px></a>)
[//]: # ( <a href="#"><img alt="PyPI - Downloads" src="https://img.shields.io/pypi/v/mulankit?logo=pypi" height=22px></a>)
<br>
<p align="center">
"To see a World in a Grain of Sand, and a Heaven in a Wild Flower"
</p>
## 🔗 BibTeX
```
@misc{hunyuanworld2025tencent,
      title={HunyuanWorld 1.0: Generating Immersive, Explorable, and Interactive 3D Worlds from Words or Pixels},
      author={Tencent Hunyuan3D Team},
      year={2025},
      eprint={2507.21809},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
## Acknowledgements
We would like to thank the contributors to the [Stable Diffusion](https://github.com/Stability-AI/stablediffusion), [FLUX](https://github.com/black-forest-labs/flux), [diffusers](https://github.com/huggingface/diffusers), [HuggingFace](https://huggingface.co), [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN), [ZIM](https://github.com/naver-ai/ZIM), [GroundingDINO](https://github.com/IDEA-Research/GroundingDINO), [MoGe](https://github.com/microsoft/moge), [Worldsheet](https://worldsheet.github.io/), [WorldGen](https://github.com/ZiYang-xie/WorldGen) repositories, for their open research.
|
rayonlabs/benchmark-15b733f3-29c3-4bb5-b5a9-4615f043b030-tourn_84e4321ace6ceeb6_20250815-5GU4Xkd3
|
rayonlabs
| 2025-08-18T13:54:20Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"text-generation",
"axolotl",
"base_model:adapter:/cache/models/deepseek-ai--DeepSeek-R1-Distill-Qwen-32B",
"lora",
"transformers",
"conversational",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T13:53:54Z |
---
library_name: peft
tags:
- axolotl
- base_model:adapter:/cache/models/deepseek-ai--DeepSeek-R1-Distill-Qwen-32B
- lora
- transformers
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
pipeline_tag: text-generation
model-index:
- name: app/checkpoints/bd2e9445-f8a4-4518-bd75-52166c2ec2b9/benchmark-15b733f3-29c3-4bb5-b5a9-4615f043b030-tourn_84e4321ace6ceeb6_20250815-5GU4Xkd3
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.12.0.dev0`
```yaml
adapter: lora
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
bf16: true
chat_template: llama3
cosine_min_lr_ratio: 0.3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
  - bd2e9445-f8a4-4518-bd75-52166c2ec2b9_train_data.json
  ds_type: json
  format: custom
  path: /workspace/axolotl/data
  type:
    field_input: input
    field_instruction: instruct
    field_output: output
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
ddp: true
debug: null
deepspeed: null
device_map: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
group_by_length: true
hub_model_id: null
hub_private_repo: false
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
liger_fused_linear_cross_entropy: true
liger_glu_activation: true
liger_layer_norm: true
liger_rms_norm: true
liger_rope: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: null
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
loraplus_lr_embedding: 1.0e-06
loraplus_lr_ratio: 16
lr_scheduler: cosine
max_grad_norm: 1
max_steps: 2220
micro_batch_size: 20
mlflow_experiment_name: /workspace/axolotl/data/bd2e9445-f8a4-4518-bd75-52166c2ec2b9_train_data.json
model_card: false
model_type: AutoModelForCausalLM
num_epochs: 200
optimizer: adamw_bnb_8bit
output_dir: /app/checkpoints/bd2e9445-f8a4-4518-bd75-52166c2ec2b9/benchmark-15b733f3-29c3-4bb5-b5a9-4615f043b030-tourn_84e4321ace6ceeb6_20250815-5GU4Xkd3
pad_to_sequence_len: true
plugins:
- axolotl.integrations.liger.LigerPlugin
push_every_save: true
push_to_hub: true
resume_from_checkpoint: null
rl: null
s2_attention: null
sample_packing: true
save_steps: 100
save_strategy: steps
save_total_limit: 1
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trl: null
trust_remote_code: false
use_liger: true
val_set_size: 0.0
wandb_mode: offline
wandb_name: bd2e9445-f8a4-4518-bd75-52166c2ec2b9_benchmark-15b733f3-29c3-4bb5-b5a9-4615f043b030-tourn_84e4321ace6ceeb6_20250815-5GU4Xkd3
wandb_project: Gradients-On-Demand
wandb_run: null
wandb_runid: bd2e9445-f8a4-4518-bd75-52166c2ec2b9_benchmark-15b733f3-29c3-4bb5-b5a9-4615f043b030-tourn_84e4321ace6ceeb6_20250815-5GU4Xkd3
warmup_steps: 200
weight_decay: 0
xformers_attention: null
```
</details><br>
# app/checkpoints/bd2e9445-f8a4-4518-bd75-52166c2ec2b9/benchmark-15b733f3-29c3-4bb5-b5a9-4615f043b030-tourn_84e4321ace6ceeb6_20250815-5GU4Xkd3
This model is a LoRA adapter trained from [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) using the configuration above.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- training_steps: 2220
### Training results
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.7.1+cu128
- Datasets 4.0.0
- Tokenizers 0.21.2
|
zhensuuu/reranker-MiniLM-L12-H384-uncased-intent
|
zhensuuu
| 2025-08-18T13:53:07Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"cross-encoder",
"reranker",
"generated_from_trainer",
"dataset_size:85938",
"loss:CachedMultipleNegativesRankingLoss",
"text-ranking",
"en",
"arxiv:1908.10084",
"base_model:microsoft/MiniLM-L12-H384-uncased",
"base_model:finetune:microsoft/MiniLM-L12-H384-uncased",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"region:us"
] |
text-ranking
| 2025-08-18T13:52:54Z |
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- cross-encoder
- reranker
- generated_from_trainer
- dataset_size:85938
- loss:CachedMultipleNegativesRankingLoss
base_model: microsoft/MiniLM-L12-H384-uncased
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
co2_eq_emissions:
  emissions: 0.19522820521718112
  energy_consumed: 0.08212463152832154
  source: codecarbon
  training_type: fine-tuning
  on_cloud: false
  cpu_model: AMD EPYC 7763 64-Core Processor
  ram_total_size: 251.53199005126953
  hours_used: 0.306
  hardware_used: 4 x NVIDIA RTX 6000 Ada Generation
model-index:
- name: MiniLM-L12-H384 trained on GooAQ
  results:
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoMSMARCO R100
      type: NanoMSMARCO_R100
    metrics:
    - type: map
      value: 0.0735
      name: Map
    - type: mrr@10
      value: 0.0476
      name: Mrr@10
    - type: ndcg@10
      value: 0.0687
      name: Ndcg@10
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoNFCorpus R100
      type: NanoNFCorpus_R100
    metrics:
    - type: map
      value: 0.3017
      name: Map
    - type: mrr@10
      value: 0.4457
      name: Mrr@10
    - type: ndcg@10
      value: 0.2916
      name: Ndcg@10
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoNQ R100
      type: NanoNQ_R100
    metrics:
    - type: map
      value: 0.0837
      name: Map
    - type: mrr@10
      value: 0.0661
      name: Mrr@10
    - type: ndcg@10
      value: 0.0748
      name: Ndcg@10
  - task:
      type: cross-encoder-nano-beir
      name: Cross Encoder Nano BEIR
    dataset:
      name: NanoBEIR R100 mean
      type: NanoBEIR_R100_mean
    metrics:
    - type: map
      value: 0.1529
      name: Map
    - type: mrr@10
      value: 0.1864
      name: Mrr@10
    - type: ndcg@10
      value: 0.145
      name: Ndcg@10
---
# MiniLM-L12-H384 trained on GooAQ
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## Model Details
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) <!-- at revision 44acabbec0ef496f6dbc93adadea57f376b7c0ec -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("zhensuuu/reranker-MiniLM-L12-H384-uncased-intent")
# Get scores for pairs of texts
pairs = [
['Add edge representing resource request', ' Model process-resource dependency relationship'],
['Split text into words list', ' Filter words matching given keyword.'],
['Calculate approximate cube root value', ' Find cube root using exponentiation'],
['Reverse sublist within linked list', ' Move nodes to new positions'],
['Defines neighbors for node A', ' Specifies direct connections from A'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
'Add edge representing resource request',
[
' Model process-resource dependency relationship',
' Filter words matching given keyword.',
' Find cube root using exponentiation',
' Move nodes to new positions',
' Specifies direct connections from A',
]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Cross Encoder Reranking
* Datasets: `NanoMSMARCO_R100`, `NanoNFCorpus_R100` and `NanoNQ_R100`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
```json
{
"at_k": 10,
"always_rerank_positives": true
}
```
| Metric | NanoMSMARCO_R100 | NanoNFCorpus_R100 | NanoNQ_R100 |
|:------------|:---------------------|:---------------------|:---------------------|
| map | 0.0735 (-0.4161) | 0.3017 (+0.0407) | 0.0837 (-0.3359) |
| mrr@10 | 0.0476 (-0.4299) | 0.4457 (-0.0541) | 0.0661 (-0.3606) |
| **ndcg@10** | **0.0687 (-0.4718)** | **0.2916 (-0.0335)** | **0.0748 (-0.4258)** |
#### Cross Encoder Nano BEIR
* Dataset: `NanoBEIR_R100_mean`
* Evaluated with [<code>CrossEncoderNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco",
"nfcorpus",
"nq"
],
"rerank_k": 100,
"at_k": 10,
"always_rerank_positives": true
}
```
| Metric | Value |
|:------------|:---------------------|
| map | 0.1529 (-0.2371) |
| mrr@10 | 0.1864 (-0.2816) |
| **ndcg@10** | **0.1450 (-0.3104)** |
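For reference, these NanoBEIR numbers can be reproduced in a few lines; a sketch, assuming a sentence-transformers version that ships the NanoBEIR evaluator linked above:
```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderNanoBEIREvaluator

model = CrossEncoder("zhensuuu/reranker-MiniLM-L12-H384-uncased-intent")
evaluator = CrossEncoderNanoBEIREvaluator(
    dataset_names=["msmarco", "nfcorpus", "nq"],
    rerank_k=100,
    at_k=10,
    always_rerank_positives=True,
)
results = evaluator(model)  # map / mrr@10 / ndcg@10 per dataset plus the mean
print(results)
```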
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 85,938 training samples
* Columns: <code>question</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | question | answer |
|:--------|:-----------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 18 characters</li><li>mean: 33.49 characters</li><li>max: 49 characters</li></ul> | <ul><li>min: 18 characters</li><li>mean: 35.88 characters</li><li>max: 52 characters</li></ul> |
* Samples:
| question | answer |
|:--------------------------------------------------------|:--------------------------------------------------------------|
| <code>Check if configuration loaded successfully</code> | <code> prevent further actions if configuration absent</code> |
| <code>Add new user to list</code> | <code> Store received user in memory</code> |
| <code>Selects profitable jobs and schedules</code> | <code> Displays scheduled jobs and profit</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 10.0,
"num_negatives": 5,
"activation_fn": "torch.nn.modules.activation.Sigmoid",
"mini_batch_size": 16
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 1,000 evaluation samples
* Columns: <code>question</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | question | answer |
|:--------|:-----------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 20 characters</li><li>mean: 33.63 characters</li><li>max: 54 characters</li></ul> | <ul><li>min: 18 characters</li><li>mean: 35.86 characters</li><li>max: 55 characters</li></ul> |
* Samples:
| question | answer |
|:----------------------------------------------------|:-------------------------------------------------------------|
| <code>Add edge representing resource request</code> | <code> Model process-resource dependency relationship</code> |
| <code>Split text into words list</code> | <code> Filter words matching given keyword.</code> |
| <code>Calculate approximate cube root value</code> | <code> Find cube root using exponentiation</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 10.0,
"num_negatives": 5,
"activation_fn": "torch.nn.modules.activation.Sigmoid",
"mini_batch_size": 16
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_R100_ndcg@10 | NanoNFCorpus_R100_ndcg@10 | NanoNQ_R100_ndcg@10 | NanoBEIR_R100_mean_ndcg@10 |
|:------:|:----:|:-------------:|:---------------:|:------------------------:|:-------------------------:|:-------------------:|:--------------------------:|
| -1 | -1 | - | - | 0.0146 (-0.5258) | 0.2622 (-0.0628) | 0.0058 (-0.4949) | 0.0942 (-0.3612) |
| 0.0030 | 1 | 1.7927 | - | - | - | - | - |
| 0.2976 | 100 | 1.2688 | - | - | - | - | - |
| 0.5952 | 200 | 0.8847 | - | - | - | - | - |
| 0.7440 | 250 | - | 0.8479 | 0.0586 (-0.4818) | 0.2978 (-0.0272) | 0.0717 (-0.4290) | 0.1427 (-0.3127) |
| 0.8929 | 300 | 0.8519 | - | - | - | - | - |
| -1 | -1 | - | - | 0.0687 (-0.4718) | 0.2916 (-0.0335) | 0.0748 (-0.4258) | 0.1450 (-0.3104) |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.082 kWh
- **Carbon Emitted**: 0.000 kg of CO2
- **Hours Used**: 0.306 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 4 x NVIDIA RTX 6000 Ada Generation
- **CPU Model**: AMD EPYC 7763 64-Core Processor
- **RAM Size**: 251.53 GB
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 5.1.0
- Transformers: 4.48.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
VoilaRaj/78_8DU2tt
|
VoilaRaj
| 2025-08-18T13:48:50Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-18T13:44:53Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
unitova/blockassist-bc-zealous_sneaky_raven_1755523362
|
unitova
| 2025-08-18T13:48:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T13:48:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Qwen2-VL-SafeVL-SFT-GGUF
|
mradermacher
| 2025-08-18T13:48:00Z | 33 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:andyc03/Qwen2-VL-PRISM-SFT",
"base_model:quantized:andyc03/Qwen2-VL-PRISM-SFT",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-07-07T22:27:24Z |
---
base_model: andyc03/Qwen2-VL-PRISM-SFT
language:
- en
library_name: transformers
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/andyc03/Qwen2-VL-PRISM-SFT
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen2-VL-SafeVL-SFT-GGUF).***
Weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up within a week or so of the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
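As a minimal sketch, a single quant can also be fetched programmatically with `huggingface_hub` (the filename below matches the Q4_K_M row in the table that follows; pick any row you prefer):

```python
from huggingface_hub import hf_hub_download

# Download one quant; Q4_K_M is a common size/quality trade-off
model_path = hf_hub_download(
    repo_id="mradermacher/Qwen2-VL-SafeVL-SFT-GGUF",
    filename="Qwen2-VL-SafeVL-SFT.Q4_K_M.gguf",
)
print(model_path)  # local path to the .gguf file, usable by llama.cpp-based runtimes
```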
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-SafeVL-SFT-GGUF/resolve/main/Qwen2-VL-SafeVL-SFT.mmproj-Q8_0.gguf) | mmproj-Q8_0 | 0.8 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-SafeVL-SFT-GGUF/resolve/main/Qwen2-VL-SafeVL-SFT.mmproj-f16.gguf) | mmproj-f16 | 1.5 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-SafeVL-SFT-GGUF/resolve/main/Qwen2-VL-SafeVL-SFT.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-SafeVL-SFT-GGUF/resolve/main/Qwen2-VL-SafeVL-SFT.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-SafeVL-SFT-GGUF/resolve/main/Qwen2-VL-SafeVL-SFT.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-SafeVL-SFT-GGUF/resolve/main/Qwen2-VL-SafeVL-SFT.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-SafeVL-SFT-GGUF/resolve/main/Qwen2-VL-SafeVL-SFT.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-SafeVL-SFT-GGUF/resolve/main/Qwen2-VL-SafeVL-SFT.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-SafeVL-SFT-GGUF/resolve/main/Qwen2-VL-SafeVL-SFT.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-SafeVL-SFT-GGUF/resolve/main/Qwen2-VL-SafeVL-SFT.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-SafeVL-SFT-GGUF/resolve/main/Qwen2-VL-SafeVL-SFT.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-SafeVL-SFT-GGUF/resolve/main/Qwen2-VL-SafeVL-SFT.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-SafeVL-SFT-GGUF/resolve/main/Qwen2-VL-SafeVL-SFT.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-SafeVL-SFT-GGUF/resolve/main/Qwen2-VL-SafeVL-SFT.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Medved444/blockassist-bc-bellowing_finicky_manatee_1755523616
|
Medved444
| 2025-08-18T13:46:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bellowing finicky manatee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T13:45:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bellowing finicky manatee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zhai-lw/L3AC
|
zhai-lw
| 2025-08-18T13:46:08Z | 0 | 0 |
l3ac
|
[
"l3ac",
"audio-to-audio",
"arxiv:2504.04949",
"region:us"
] |
audio-to-audio
| 2025-08-15T11:27:35Z |
---
pipeline_tag: audio-to-audio
library_name: l3ac
---
# L3AC: Towards a Lightweight and Lossless Audio Codec
This repository contains the implementation of L3AC, a lightweight neural audio codec introduced in the paper titled "[L3AC: Towards a Lightweight and Lossless Audio Codec](https://huggingface.co/papers/2504.04949)".
Neural audio codecs have recently gained traction for their ability to compress high-fidelity audio and provide discrete tokens for generative modeling. However, leading approaches often rely on resource-intensive models and complex multi-quantizer architectures, limiting their practicality in real-world applications. In this work, we introduce L3AC, a lightweight neural audio codec that addresses these challenges by leveraging a single quantizer and a highly efficient architecture. To enhance reconstruction fidelity while minimizing model complexity, L3AC explores streamlined convolutional networks and local Transformer modules, alongside TConv--a novel structure designed to capture acoustic variations across multiple temporal scales. Despite its compact design, extensive experiments across diverse datasets demonstrate that L3AC matches or exceeds the reconstruction quality of leading codecs while reducing computational overhead by an order of magnitude. The single-quantizer design further enhances its adaptability for downstream tasks.
<figure class="image">
<img src="https://github.com/zhai-lw/L3AC/raw/main/bubble_chart.svg" alt="Comparison of various audio codecs">
<figcaption>Comparison of various audio codecs</figcaption>
</figure>
**Paper:** [L3AC: Towards a Lightweight and Lossless Audio Codec](https://huggingface.co/papers/2504.04949)
**Official GitHub Repository:** [https://github.com/zhai-lw/L3AC](https://github.com/zhai-lw/L3AC)
## Installation
You can install the `l3ac` library using pip:
```bash
pip install l3ac
```
### Demo
First, make sure the `librosa` package is installed; it is used to load the example audio file. You can install it with pip:
```bash
pip install librosa
```
Then, you can use the following code to load a sample audio file, encode it using the L3AC model, and decode it back to audio. The code also calculates the mean squared error (MSE) between the original and generated audio.
```python
import librosa
import torch
import l3ac
all_models = l3ac.list_models()
print(f"Available models: {all_models}")
MODEL_USED = '1kbps'
codec = l3ac.get_model(MODEL_USED)
print(f"loaded codec({MODEL_USED}) and codec sample rate: {codec.config.sample_rate}")
sample_audio, sample_rate = librosa.load(librosa.example("libri1"))
sample_audio = sample_audio[None, :]
print(f"loaded sample audio and audio sample_rate :{sample_rate}")
sample_audio = librosa.resample(sample_audio, orig_sr=sample_rate, target_sr=codec.config.sample_rate)
codec.network.cuda()
codec.network.eval()
with torch.inference_mode():
audio_in = torch.tensor(sample_audio, dtype=torch.float32, device='cuda')
_, audio_length = audio_in.shape
print(f"{audio_in.shape=}")
q_feature, indices = codec.encode_audio(audio_in)
audio_out = codec.decode_audio(q_feature) # or
# audio_out = codec.decode_audio(indices=indices['indices'])
generated_audio = audio_out[:, :audio_length].detach().cpu().numpy()
mse = ((sample_audio - generated_audio) ** 2).mean().item()
print(f"codec({MODEL_USED}) mse: {mse}")
```
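To listen to the reconstruction, you can write it to disk, for example with the `soundfile` package (an extra dependency, not required by `l3ac`):

```python
import soundfile as sf

# generated_audio has shape (1, T); drop the batch dimension before writing
sf.write("reconstructed.wav", generated_audio[0], codec.config.sample_rate)
```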
### Available Models
| config_name | Sample rate (Hz) | Tokens/s | Codebook size | Bitrate (bps) |
|-------------|-----------------|----------|---------------|--------------|
| 0k75bps | 16,000 | 44.44 | 117,649 | 748.6 |
| 1kbps | 16,000 | 59.26 | 117,649 | 998.2 |
| 1k5bps | 16,000 | 88.89 | 117,649 | 1497.3 |
| 3kbps | 16,000 | 166.67 | 250,047 | 2988.6 |
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755522924
|
helmutsukocok
| 2025-08-18T13:42:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T13:42:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
codingwithlewis/gemma-3-regex
|
codingwithlewis
| 2025-08-18T13:42:13Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3_text",
"en",
"base_model:unsloth/gemma-3-270m-it",
"base_model:quantized:unsloth/gemma-3-270m-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-18T13:37:15Z |
---
base_model: unsloth/gemma-3-270m-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded fine-tuned model
- **Developed by:** codingwithlewis
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-270m-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
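A minimal inference sketch using the Transformers `pipeline` API. This assumes standard Transformers weights are present alongside the GGUF export (the tags list both `transformers` and `gguf`; if only GGUF files are shipped, use a llama.cpp-based runtime instead), and the prompt is only an illustrative guess at the model's regex-generation task:

```python
from transformers import pipeline

# Assumption: the repo includes loadable Transformers weights, not only GGUF files
generator = pipeline("text-generation", model="codingwithlewis/gemma-3-regex")
messages = [{"role": "user", "content": "Write a regex that matches ISO-8601 dates like 2025-08-18."}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```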
|
aokitools/japanese-laws-egov-instruct-202508182216
|
aokitools
| 2025-08-18T13:41:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"qwen3",
"text-generation",
"continued-pretraining",
"language-model",
"conversational",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T13:38:07Z |
---
license: apache-2.0
language: ja
library_name: transformers
tags:
- continued-pretraining
- language-model
model-index:
- name: aokitools/japanese-laws-egov-instruct-202508182216
results: []
---
# Experimental model in the research stage
## Quickstart
If you're using [Ollama](https://ollama.com/), run the following command first, then restart the Ollama app and select the newly added model.
```shell
ollama pull hf.co/aokitools/japanese-laws-egov-instruct-202508182216
```
If you want to remove it, run the following commands:
```shell
ollama list
ollama rm hf.co/aokitools/japanese-laws-egov-instruct-202508182216:latest
ollama list
```
To use the model from Python, run the following code.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_name = "aokitools/japanese-laws-egov-instruct-202508182216"
quant_config = BitsAndBytesConfig(
load_in_8bit=True,
llm_int8_threshold=6.0,
)
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto",
quantization_config=quant_config,
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=256
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
This model is the result of continued pretraining of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B).
## Training details
- Base model: Qwen3-1.7B
- Tokenizer: QwenTokenizer
## License
- Apache 2.0 + Alibaba Qianwen License
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1755524390
|
Vasya777
| 2025-08-18T13:40:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T13:40:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
amichelf/vit-base-oxford-iiit-pets
|
amichelf
| 2025-08-18T13:38:51Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-08-18T12:59:49Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-oxford-iiit-pets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1854
- Accuracy: 0.9472
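For a quick sanity check, a minimal inference sketch with the Transformers `pipeline` API (assuming the checkpoint bundles its image processor, as `Trainer` runs normally do; `my_pet.jpg` is a placeholder path):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="amichelf/vit-base-oxford-iiit-pets")
# "my_pet.jpg" is a placeholder; any local image path or URL works
print(classifier("my_pet.jpg", top_k=3))
```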
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3676 | 1.0 | 370 | 0.3040 | 0.9242 |
| 0.214 | 2.0 | 740 | 0.2367 | 0.9323 |
| 0.1885 | 3.0 | 1110 | 0.2190 | 0.9350 |
| 0.1468 | 4.0 | 1480 | 0.2078 | 0.9337 |
| 0.1281 | 5.0 | 1850 | 0.2063 | 0.9323 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
joanna302/Qwen3-8B-Base_ar_alpaca_0.33_part_SFT_0.0002
|
joanna302
| 2025-08-18T13:36:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"unsloth",
"sft",
"conversational",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T17:39:12Z |
---
base_model: unsloth/Qwen3-8B-Base
library_name: transformers
model_name: Qwen3-8B-Base_ar_alpaca_0.33_part_SFT_0.0002
tags:
- generated_from_trainer
- trl
- unsloth
- sft
licence: license
---
# Model Card for Qwen3-8B-Base_ar_alpaca_0.33_part_SFT_0.0002
This model is a fine-tuned version of [unsloth/Qwen3-8B-Base](https://huggingface.co/unsloth/Qwen3-8B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_ar_alpaca_0.33_part_SFT_0.0002", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_ar_alpaca_0.33_part_SFT_0.0002/runs/o0j6jtrl)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755522631
|
lisaozill03
| 2025-08-18T13:35:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T13:35:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
joanna302/Qwen3-8B-Base_ar_alpaca_0.33_part_SFT_2e-05
|
joanna302
| 2025-08-18T13:33:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"conversational",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T17:59:38Z |
---
base_model: unsloth/Qwen3-8B-Base
library_name: transformers
model_name: Qwen3-8B-Base_ar_alpaca_0.33_part_SFT_2e-05
tags:
- generated_from_trainer
- sft
- trl
- unsloth
licence: license
---
# Model Card for Qwen3-8B-Base_ar_alpaca_0.33_part_SFT_2e-05
This model is a fine-tuned version of [unsloth/Qwen3-8B-Base](https://huggingface.co/unsloth/Qwen3-8B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_ar_alpaca_0.33_part_SFT_2e-05", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_ar_alpaca_0.33_part_SFT_2e-05/runs/4kdponw1)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Muapi/terminator-t-800-flux1.d-sdxl
|
Muapi
| 2025-08-18T13:32:12Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T13:32:00Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Terminator T-800 - Flux1.D & SDXL

**Base model**: Flux.1 D
**Trained words**: T800 robot
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:207579@741410", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
mradermacher/InnoSpark-VPC-RM-32B-GGUF
|
mradermacher
| 2025-08-18T13:30:02Z | 156 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:sii-research/InnoSpark-HPC-RM-32B",
"base_model:quantized:sii-research/InnoSpark-HPC-RM-32B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-07-21T12:05:02Z |
---
base_model: sii-research/InnoSpark-HPC-RM-32B
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/sii-research/InnoSpark-HPC-RM-32B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#InnoSpark-VPC-RM-32B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/InnoSpark-VPC-RM-32B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
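As a minimal sketch, the quants can be loaded with `llama-cpp-python` (an assumption about the runtime, not a workflow endorsed by this repo; the filename matches the Q4_K_M row in the table below):

```python
from llama_cpp import Llama

# Generic GGUF loading pattern; downloads the file on first use and caches it
llm = Llama.from_pretrained(
    repo_id="mradermacher/InnoSpark-VPC-RM-32B-GGUF",
    filename="InnoSpark-VPC-RM-32B.Q4_K_M.gguf",
    n_ctx=4096,
)
```

Note that the base model is a reward model, so plain text completion may not reflect its intended scoring use.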
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-VPC-RM-32B-GGUF/resolve/main/InnoSpark-VPC-RM-32B.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-VPC-RM-32B-GGUF/resolve/main/InnoSpark-VPC-RM-32B.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-VPC-RM-32B-GGUF/resolve/main/InnoSpark-VPC-RM-32B.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-VPC-RM-32B-GGUF/resolve/main/InnoSpark-VPC-RM-32B.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-VPC-RM-32B-GGUF/resolve/main/InnoSpark-VPC-RM-32B.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-VPC-RM-32B-GGUF/resolve/main/InnoSpark-VPC-RM-32B.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-VPC-RM-32B-GGUF/resolve/main/InnoSpark-VPC-RM-32B.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-VPC-RM-32B-GGUF/resolve/main/InnoSpark-VPC-RM-32B.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-VPC-RM-32B-GGUF/resolve/main/InnoSpark-VPC-RM-32B.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-VPC-RM-32B-GGUF/resolve/main/InnoSpark-VPC-RM-32B.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-VPC-RM-32B-GGUF/resolve/main/InnoSpark-VPC-RM-32B.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Muapi/jan-van-eyck-style
|
Muapi
| 2025-08-18T13:29:18Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T13:29:06Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Jan van Eyck Style

**Base model**: Flux.1 D
**Trained words**: Jan van Eyck Style
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:99433@1575140", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
mradermacher/InnoSpark-7B-0715-i1-GGUF
|
mradermacher
| 2025-08-18T13:29:18Z | 342 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:sii-research/InnoSpark-7B-0715",
"base_model:quantized:sii-research/InnoSpark-7B-0715",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-07-22T03:12:59Z |
---
base_model: sii-research/InnoSpark-7B-0715
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/sii-research/InnoSpark-7B-0715
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#InnoSpark-7B-0715-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/InnoSpark-7B-0715-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-7B-0715-i1-GGUF/resolve/main/InnoSpark-7B-0715.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-7B-0715-i1-GGUF/resolve/main/InnoSpark-7B-0715.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-7B-0715-i1-GGUF/resolve/main/InnoSpark-7B-0715.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-7B-0715-i1-GGUF/resolve/main/InnoSpark-7B-0715.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-7B-0715-i1-GGUF/resolve/main/InnoSpark-7B-0715.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-7B-0715-i1-GGUF/resolve/main/InnoSpark-7B-0715.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-7B-0715-i1-GGUF/resolve/main/InnoSpark-7B-0715.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-7B-0715-i1-GGUF/resolve/main/InnoSpark-7B-0715.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-7B-0715-i1-GGUF/resolve/main/InnoSpark-7B-0715.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-7B-0715-i1-GGUF/resolve/main/InnoSpark-7B-0715.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-7B-0715-i1-GGUF/resolve/main/InnoSpark-7B-0715.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-7B-0715-i1-GGUF/resolve/main/InnoSpark-7B-0715.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-7B-0715-i1-GGUF/resolve/main/InnoSpark-7B-0715.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-7B-0715-i1-GGUF/resolve/main/InnoSpark-7B-0715.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-7B-0715-i1-GGUF/resolve/main/InnoSpark-7B-0715.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-7B-0715-i1-GGUF/resolve/main/InnoSpark-7B-0715.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-7B-0715-i1-GGUF/resolve/main/InnoSpark-7B-0715.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-7B-0715-i1-GGUF/resolve/main/InnoSpark-7B-0715.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-7B-0715-i1-GGUF/resolve/main/InnoSpark-7B-0715.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-7B-0715-i1-GGUF/resolve/main/InnoSpark-7B-0715.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-7B-0715-i1-GGUF/resolve/main/InnoSpark-7B-0715.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-7B-0715-i1-GGUF/resolve/main/InnoSpark-7B-0715.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-7B-0715-i1-GGUF/resolve/main/InnoSpark-7B-0715.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-7B-0715-i1-GGUF/resolve/main/InnoSpark-7B-0715.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Muapi/gold-dust-gmr-ready-for-flux-sd3-sdxl-pdxl-sd1.5
|
Muapi
| 2025-08-18T13:28:07Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T13:27:53Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Gold Dust-GMR ready for Flux / SD3 / SDXL / PDXL / SD1.5

**Base model**: Flux.1 D
**Trained words**: gold dust
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:603926@751713", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Vanbitcase/2b-700r-qwen-vl-t1.2b_merged
|
Vanbitcase
| 2025-08-18T13:24:36Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T13:24:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
GaborMadarasz/AstroQA_mamba_epoch1_V4
|
GaborMadarasz
| 2025-08-18T13:24:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mamba",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T13:24:23Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
donoway/ARC-Easy_Llama-3.2-1B-ro2gi4y6
|
donoway
| 2025-08-18T13:23:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T13:01:26Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ARC-Easy_Llama-3.2-1B-ro2gi4y6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ARC-Easy_Llama-3.2-1B-ro2gi4y6
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6994
- Model Preparation Time: 0.0055
- Mdl: 1397.4674
- Accumulated Loss: 968.6506
- Correct Preds: 430.0
- Total Preds: 570.0
- Accuracy: 0.7544
- Correct Gen Preds: 430.0
- Gen Accuracy: 0.7544
- Correct Gen Preds 32: 118.0
- Correct Preds 32: 118.0
- Total Labels 32: 158.0
- Accuracy 32: 0.7468
- Gen Accuracy 32: 0.7468
- Correct Gen Preds 33: 116.0
- Correct Preds 33: 116.0
- Total Labels 33: 152.0
- Accuracy 33: 0.7632
- Gen Accuracy 33: 0.7632
- Correct Gen Preds 34: 113.0
- Correct Preds 34: 113.0
- Total Labels 34: 142.0
- Accuracy 34: 0.7958
- Gen Accuracy 34: 0.7958
- Correct Gen Preds 35: 83.0
- Correct Preds 35: 83.0
- Total Labels 35: 118.0
- Accuracy 35: 0.7034
- Gen Accuracy 35: 0.7034
- Correct Gen Preds 36: 0.0
- Correct Preds 36: 0.0
- Total Labels 36: 0.0
- Accuracy 36: 0.0
- Gen Accuracy 36: 0.0
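The per-choice accuracy breakdown above suggests a multiple-choice evaluation in which each answer option is scored by the model. A minimal sketch of that scoring pattern (an assumption about the setup, not the exact evaluation harness used here; the question is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "donoway/ARC-Easy_Llama-3.2-1B-ro2gi4y6"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

question = "Which gas do plants absorb from the atmosphere?"
choices = ["Oxygen", "Carbon dioxide", "Nitrogen", "Helium"]

def sequence_loss(text: str) -> float:
    # Mean cross-entropy over the sequence; lower means a better fit
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

scores = [sequence_loss(f"Question: {question}\nAnswer: {c}") for c in choices]
print(choices[scores.index(min(scores))])  # expected: "Carbon dioxide"
```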
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 112
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.001
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 32 | Correct Preds 32 | Total Labels 32 | Accuracy 32 | Gen Accuracy 32 | Correct Gen Preds 33 | Correct Preds 33 | Total Labels 33 | Accuracy 33 | Gen Accuracy 33 | Correct Gen Preds 34 | Correct Preds 34 | Total Labels 34 | Accuracy 34 | Gen Accuracy 34 | Correct Gen Preds 35 | Correct Preds 35 | Total Labels 35 | Accuracy 35 | Gen Accuracy 35 | Correct Gen Preds 36 | Correct Preds 36 | Total Labels 36 | Accuracy 36 | Gen Accuracy 36 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|
| No log | 0 | 0 | 1.5354 | 0.0055 | 1262.6022 | 875.1692 | 172.0 | 570.0 | 0.3018 | 170.0 | 0.2982 | 154.0 | 154.0 | 158.0 | 0.9747 | 0.9747 | 0.0 | 0.0 | 152.0 | 0.0 | 0.0 | 15.0 | 17.0 | 142.0 | 0.1197 | 0.1056 | 1.0 | 1.0 | 118.0 | 0.0085 | 0.0085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.1499 | 1.0 | 30 | 0.9537 | 0.0055 | 784.2818 | 543.6227 | 379.0 | 570.0 | 0.6649 | 377.0 | 0.6614 | 127.0 | 128.0 | 158.0 | 0.8101 | 0.8038 | 85.0 | 86.0 | 152.0 | 0.5658 | 0.5592 | 96.0 | 96.0 | 142.0 | 0.6761 | 0.6761 | 69.0 | 69.0 | 118.0 | 0.5847 | 0.5847 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3791 | 2.0 | 60 | 0.7650 | 0.0055 | 629.1242 | 436.0757 | 425.0 | 570.0 | 0.7456 | 424.0 | 0.7439 | 109.0 | 110.0 | 158.0 | 0.6962 | 0.6899 | 123.0 | 123.0 | 152.0 | 0.8092 | 0.8092 | 106.0 | 106.0 | 142.0 | 0.7465 | 0.7465 | 86.0 | 86.0 | 118.0 | 0.7288 | 0.7288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.2137 | 3.0 | 90 | 0.9976 | 0.0055 | 820.3431 | 568.6185 | 414.0 | 570.0 | 0.7263 | 414.0 | 0.7263 | 98.0 | 98.0 | 158.0 | 0.6203 | 0.6203 | 119.0 | 119.0 | 152.0 | 0.7829 | 0.7829 | 115.0 | 115.0 | 142.0 | 0.8099 | 0.8099 | 82.0 | 82.0 | 118.0 | 0.6949 | 0.6949 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.153 | 4.0 | 120 | 1.5820 | 0.0055 | 1300.9342 | 901.7389 | 419.0 | 570.0 | 0.7351 | 416.0 | 0.7298 | 112.0 | 115.0 | 158.0 | 0.7278 | 0.7089 | 113.0 | 113.0 | 152.0 | 0.7434 | 0.7434 | 120.0 | 120.0 | 142.0 | 0.8451 | 0.8451 | 71.0 | 71.0 | 118.0 | 0.6017 | 0.6017 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0002 | 5.0 | 150 | 1.9407 | 0.0055 | 1595.9007 | 1106.1941 | 425.0 | 570.0 | 0.7456 | 423.0 | 0.7421 | 111.0 | 112.0 | 158.0 | 0.7089 | 0.7025 | 126.0 | 127.0 | 152.0 | 0.8355 | 0.8289 | 110.0 | 110.0 | 142.0 | 0.7746 | 0.7746 | 76.0 | 76.0 | 118.0 | 0.6441 | 0.6441 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0034 | 6.0 | 180 | 1.6994 | 0.0055 | 1397.4674 | 968.6506 | 430.0 | 570.0 | 0.7544 | 430.0 | 0.7544 | 118.0 | 118.0 | 158.0 | 0.7468 | 0.7468 | 116.0 | 116.0 | 152.0 | 0.7632 | 0.7632 | 113.0 | 113.0 | 142.0 | 0.7958 | 0.7958 | 83.0 | 83.0 | 118.0 | 0.7034 | 0.7034 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0002 | 7.0 | 210 | 2.0344 | 0.0055 | 1672.9333 | 1159.5890 | 430.0 | 570.0 | 0.7544 | 430.0 | 0.7544 | 117.0 | 117.0 | 158.0 | 0.7405 | 0.7405 | 111.0 | 111.0 | 152.0 | 0.7303 | 0.7303 | 118.0 | 118.0 | 142.0 | 0.8310 | 0.8310 | 84.0 | 84.0 | 118.0 | 0.7119 | 0.7119 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.2384 | 8.0 | 240 | 2.3318 | 0.0055 | 1917.5151 | 1329.1202 | 422.0 | 570.0 | 0.7404 | 421.0 | 0.7386 | 117.0 | 118.0 | 158.0 | 0.7468 | 0.7405 | 111.0 | 111.0 | 152.0 | 0.7303 | 0.7303 | 113.0 | 113.0 | 142.0 | 0.7958 | 0.7958 | 80.0 | 80.0 | 118.0 | 0.6780 | 0.6780 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0001 | 9.0 | 270 | 2.3574 | 0.0055 | 1938.6154 | 1343.7458 | 426.0 | 570.0 | 0.7474 | 426.0 | 0.7474 | 112.0 | 112.0 | 158.0 | 0.7089 | 0.7089 | 114.0 | 114.0 | 152.0 | 0.75 | 0.75 | 115.0 | 115.0 | 142.0 | 0.8099 | 0.8099 | 85.0 | 85.0 | 118.0 | 0.7203 | 0.7203 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0039 | 10.0 | 300 | 2.6388 | 0.0055 | 2169.9437 | 1504.0904 | 422.0 | 570.0 | 0.7404 | 421.0 | 0.7386 | 109.0 | 110.0 | 158.0 | 0.6962 | 0.6899 | 115.0 | 115.0 | 152.0 | 0.7566 | 0.7566 | 111.0 | 111.0 | 142.0 | 0.7817 | 0.7817 | 86.0 | 86.0 | 118.0 | 0.7288 | 0.7288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 11.0 | 330 | 2.5992 | 0.0055 | 2137.4472 | 1481.5655 | 421.0 | 570.0 | 0.7386 | 420.0 | 0.7368 | 110.0 | 111.0 | 158.0 | 0.7025 | 0.6962 | 115.0 | 115.0 | 152.0 | 0.7566 | 0.7566 | 115.0 | 115.0 | 142.0 | 0.8099 | 0.8099 | 80.0 | 80.0 | 118.0 | 0.6780 | 0.6780 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 12.0 | 360 | 2.5923 | 0.0055 | 2131.7646 | 1477.6266 | 422.0 | 570.0 | 0.7404 | 421.0 | 0.7386 | 108.0 | 109.0 | 158.0 | 0.6899 | 0.6835 | 113.0 | 113.0 | 152.0 | 0.7434 | 0.7434 | 114.0 | 114.0 | 142.0 | 0.8028 | 0.8028 | 86.0 | 86.0 | 118.0 | 0.7288 | 0.7288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 13.0 | 390 | 2.6003 | 0.0055 | 2138.2906 | 1482.1501 | 423.0 | 570.0 | 0.7421 | 422.0 | 0.7404 | 113.0 | 114.0 | 158.0 | 0.7215 | 0.7152 | 111.0 | 111.0 | 152.0 | 0.7303 | 0.7303 | 114.0 | 114.0 | 142.0 | 0.8028 | 0.8028 | 84.0 | 84.0 | 118.0 | 0.7119 | 0.7119 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 14.0 | 420 | 2.6367 | 0.0055 | 2168.2271 | 1502.9005 | 423.0 | 570.0 | 0.7421 | 422.0 | 0.7404 | 115.0 | 116.0 | 158.0 | 0.7342 | 0.7278 | 111.0 | 111.0 | 152.0 | 0.7303 | 0.7303 | 113.0 | 113.0 | 142.0 | 0.7958 | 0.7958 | 83.0 | 83.0 | 118.0 | 0.7034 | 0.7034 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 15.0 | 450 | 2.6527 | 0.0055 | 2181.4382 | 1512.0577 | 424.0 | 570.0 | 0.7439 | 423.0 | 0.7421 | 113.0 | 114.0 | 158.0 | 0.7215 | 0.7152 | 112.0 | 112.0 | 152.0 | 0.7368 | 0.7368 | 114.0 | 114.0 | 142.0 | 0.8028 | 0.8028 | 84.0 | 84.0 | 118.0 | 0.7119 | 0.7119 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 16.0 | 480 | 2.6577 | 0.0055 | 2185.4872 | 1514.8643 | 423.0 | 570.0 | 0.7421 | 422.0 | 0.7404 | 113.0 | 114.0 | 158.0 | 0.7215 | 0.7152 | 111.0 | 111.0 | 152.0 | 0.7303 | 0.7303 | 114.0 | 114.0 | 142.0 | 0.8028 | 0.8028 | 84.0 | 84.0 | 118.0 | 0.7119 | 0.7119 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 17.0 | 510 | 2.6565 | 0.0055 | 2184.5381 | 1514.2064 | 423.0 | 570.0 | 0.7421 | 422.0 | 0.7404 | 114.0 | 115.0 | 158.0 | 0.7278 | 0.7215 | 111.0 | 111.0 | 152.0 | 0.7303 | 0.7303 | 113.0 | 113.0 | 142.0 | 0.7958 | 0.7958 | 84.0 | 84.0 | 118.0 | 0.7119 | 0.7119 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
zekaemo/Indobert-Sentiment-Analysis-with-Bayes-Optimization-and-Weighted-Training
|
zekaemo
| 2025-08-18T13:22:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:indobenchmark/indobert-base-p2",
"base_model:finetune:indobenchmark/indobert-base-p2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-18T13:12:14Z |
---
library_name: transformers
license: mit
base_model: indobenchmark/indobert-base-p2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: Indobert-Sentiment-Analysis-with-Bayes-Optimization-and-Weighted-Training
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Indobert-Sentiment-Analysis-with-Bayes-Optimization-and-Weighted-Training
This model is a fine-tuned version of [indobenchmark/indobert-base-p2](https://huggingface.co/indobenchmark/indobert-base-p2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8187
- Accuracy: 0.8105
- F1: 0.8086
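For quick inference, a minimal sketch with the `text-classification` pipeline (assuming the checkpoint's id2label mapping is configured; otherwise predictions appear as generic `LABEL_i` ids, and the Indonesian example sentence is only illustrative):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="zekaemo/Indobert-Sentiment-Analysis-with-Bayes-Optimization-and-Weighted-Training",
)
# "The service at this restaurant is very satisfying!" (Indonesian)
print(classifier("Pelayanan restoran ini sangat memuaskan!"))
```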
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6196 | 1.0 | 51 | 0.4970 | 0.7719 | 0.7780 |
| 0.4067 | 2.0 | 102 | 0.5441 | 0.7404 | 0.7495 |
| 0.2248 | 3.0 | 153 | 0.7342 | 0.7684 | 0.7669 |
| 0.1776 | 4.0 | 204 | 0.6930 | 0.8 | 0.8003 |
| 0.1137 | 5.0 | 255 | 1.1582 | 0.7789 | 0.7707 |
| 0.0868 | 6.0 | 306 | 1.1574 | 0.8 | 0.7983 |
| 0.0609 | 7.0 | 357 | 1.3369 | 0.7930 | 0.7871 |
| 0.0354 | 8.0 | 408 | 1.2317 | 0.8105 | 0.8086 |
| 0.0188 | 9.0 | 459 | 1.7317 | 0.8 | 0.7859 |
| 0.0127 | 10.0 | 510 | 1.6185 | 0.8035 | 0.8000 |
| 0.0155 | 11.0 | 561 | 1.7635 | 0.7965 | 0.7903 |
| 0.0106 | 12.0 | 612 | 1.8325 | 0.7965 | 0.7884 |
| 0.0106 | 13.0 | 663 | 1.8020 | 0.7930 | 0.7871 |
| 0.0101 | 14.0 | 714 | 1.8116 | 0.7930 | 0.7871 |
| 0.0105 | 15.0 | 765 | 1.8187 | 0.7930 | 0.7871 |
### Framework versions
- Transformers 4.55.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
mlx-community/Voxtral-Mini-3B-2507-bf16
|
mlx-community
| 2025-08-18T13:21:45Z | 0 | 0 |
mlx-audio
|
[
"mlx-audio",
"safetensors",
"voxtral",
"speech-to-text",
"mlx",
"en",
"fr",
"de",
"es",
"it",
"pt",
"nl",
"hi",
"license:apache-2.0",
"region:us"
] | null | 2025-08-18T13:17:43Z |
---
library_name: mlx-audio
language:
- en
- fr
- de
- es
- it
- pt
- nl
- hi
license: apache-2.0
tags:
- speech-to-text
- mlx
---
# mlx-community/Voxtral-Mini-3B-2507-bf16
This model was converted to MLX format from [`mistralai/Voxtral-Mini-3B-2507`](https://huggingface.co/mistralai/Voxtral-Mini-3B-2507) using mlx-audio version **0.2.4**.
Refer to the [original model card](https://huggingface.co/mistralai/Voxtral-Mini-3B-2507) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-audio
```
```bash
python -m mlx_audio.stt.generate --model mlx-community/Voxtral-Mini-3B-2507-bf16 --audio PATH-TO-AUDIO --verbose
```
|
mradermacher/InnoSpark-HPC-RM-32B-GGUF
|
mradermacher
| 2025-08-18T13:21:17Z | 171 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:sii-research/InnoSpark-HPC-RM-32B",
"base_model:quantized:sii-research/InnoSpark-HPC-RM-32B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-07-26T15:13:23Z |
---
base_model: sii-research/InnoSpark-HPC-RM-32B
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/sii-research/InnoSpark-HPC-RM-32B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#InnoSpark-HPC-RM-32B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/InnoSpark-HPC-RM-32B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
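As a minimal sketch (assuming a recent llama.cpp build with `--hf-repo` support), any single-file quant from the table below can be fetched and run in one command; `Q4_K_M` is used here purely as an example:

```bash
llama-cli --hf-repo mradermacher/InnoSpark-HPC-RM-32B-GGUF \
  --hf-file InnoSpark-HPC-RM-32B.Q4_K_M.gguf \
  -p "Briefly explain what a reward model does."
```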
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-HPC-RM-32B-GGUF/resolve/main/InnoSpark-HPC-RM-32B.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-HPC-RM-32B-GGUF/resolve/main/InnoSpark-HPC-RM-32B.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-HPC-RM-32B-GGUF/resolve/main/InnoSpark-HPC-RM-32B.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-HPC-RM-32B-GGUF/resolve/main/InnoSpark-HPC-RM-32B.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-HPC-RM-32B-GGUF/resolve/main/InnoSpark-HPC-RM-32B.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-HPC-RM-32B-GGUF/resolve/main/InnoSpark-HPC-RM-32B.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-HPC-RM-32B-GGUF/resolve/main/InnoSpark-HPC-RM-32B.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-HPC-RM-32B-GGUF/resolve/main/InnoSpark-HPC-RM-32B.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-HPC-RM-32B-GGUF/resolve/main/InnoSpark-HPC-RM-32B.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-HPC-RM-32B-GGUF/resolve/main/InnoSpark-HPC-RM-32B.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-HPC-RM-32B-GGUF/resolve/main/InnoSpark-HPC-RM-32B.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
dgambettaphd/M_mis_run2_gen2_WXS_doc1000_synt64_lr1e-04_acm_SYNLAST
|
dgambettaphd
| 2025-08-18T13:21:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T13:20:49Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
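No official snippet is provided yet. As a minimal sketch, assuming this checkpoint is a causal language model (the card does not say), it can be loaded with the standard Auto classes:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "dgambettaphd/M_mis_run2_gen2_WXS_doc1000_synt64_lr1e-04_acm_SYNLAST"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
# Assumption: causal LM head; swap the Auto class if the architecture differs.
model = AutoModelForCausalLM.from_pretrained(repo_id)
```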
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
isbondarev/Mistral-Small-test-alpaca
|
isbondarev
| 2025-08-18T13:18:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral3",
"image-to-text",
"llama-factory",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-18T13:11:21Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yaelahnal/blockassist-bc-mute_clawed_crab_1755522656
|
yaelahnal
| 2025-08-18T13:17:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T13:12:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755521275
|
kojeklollipop
| 2025-08-18T13:14:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T13:14:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MattBou00/smolLM-360m-detox_try_2
|
MattBou00
| 2025-08-18T13:10:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-08-18T07:37:48Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00/smolLM-360m-detox_try_2")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00/smolLM-360m-detox_try_2")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/smolLM-360m-detox_try_2")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
isbondarev/Qwen3-adv
|
isbondarev
| 2025-08-18T13:10:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T13:08:50Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
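No official snippet is provided yet. Since the repository tags indicate a conversational `qwen3` text-generation model, a minimal sketch (with illustrative, not author-recommended, generation settings) would be:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="isbondarev/Qwen3-adv")
messages = [{"role": "user", "content": "Hello! What can you do?"}]
# max_new_tokens is an illustrative choice, not an official default.
output = generator(messages, max_new_tokens=128, return_full_text=False)
print(output[0]["generated_text"])
```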
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Kimi-Dev-72B-abliterated-GGUF
|
mradermacher
| 2025-08-18T13:07:21Z | 125 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:nicoboss/Kimi-Dev-72B-abliterated",
"base_model:quantized:nicoboss/Kimi-Dev-72B-abliterated",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-05T07:24:14Z |
---
base_model: nicoboss/Kimi-Dev-72B-abliterated
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
no_imatrix: 'q4_K .. ggml_validate_row_data: found nan value at block 32'
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/nicoboss/Kimi-Dev-72B-abliterated
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Kimi-Dev-72B-abliterated-GGUF).***
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
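For the split quants in the table below, a minimal sketch of the download-and-concatenate step (using `Q5_K_S` as the example; part order matters):

```bash
huggingface-cli download mradermacher/Kimi-Dev-72B-abliterated-GGUF \
  Kimi-Dev-72B-abliterated.Q5_K_S.gguf.part1of2 \
  Kimi-Dev-72B-abliterated.Q5_K_S.gguf.part2of2 \
  --local-dir .
cat Kimi-Dev-72B-abliterated.Q5_K_S.gguf.part1of2 \
    Kimi-Dev-72B-abliterated.Q5_K_S.gguf.part2of2 \
    > Kimi-Dev-72B-abliterated.Q5_K_S.gguf
```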
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q2_K.gguf) | Q2_K | 29.9 | |
| [GGUF](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q3_K_S.gguf) | Q3_K_S | 34.6 | |
| [GGUF](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q3_K_M.gguf) | Q3_K_M | 37.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q3_K_L.gguf) | Q3_K_L | 39.6 | |
| [GGUF](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.IQ4_XS.gguf) | IQ4_XS | 40.3 | |
| [GGUF](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q4_K_S.gguf) | Q4_K_S | 44.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q4_K_M.gguf) | Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q5_K_M.gguf.part2of2) | Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q6_K.gguf.part2of2) | Q6_K | 64.4 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Kimi-Dev-72B-abliterated-GGUF/resolve/main/Kimi-Dev-72B-abliterated.Q8_0.gguf.part2of2) | Q8_0 | 77.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
chainway9/blockassist-bc-untamed_quick_eel_1755520780
|
chainway9
| 2025-08-18T13:07:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T13:07:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VoilaRaj/78_FoVfgM
|
VoilaRaj
| 2025-08-18T13:06:58Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-18T13:03:04Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755520897
|
lisaozill03
| 2025-08-18T13:06:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T13:06:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Wing4/llama3-8b-sentiment-analyzer
|
Wing4
| 2025-08-18T13:06:20Z | 8 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"base_model:adapter:meta-llama/Meta-Llama-3.1-8B-Instruct",
"lora",
"transformers",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T07:39:33Z |
---
library_name: peft
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
tags:
- base_model:adapter:meta-llama/Meta-Llama-3.1-8B-Instruct
- lora
- transformers
pipeline_tag: text-generation
model-index:
- name: llama3-8b-sentiment-analyzer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-8b-sentiment-analyzer
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0790
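Since this repository contains a LoRA adapter rather than full weights, a minimal loading sketch (assuming you have been granted access to the gated base model) looks like this:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
base = AutoModelForCausalLM.from_pretrained(base_id)
# Apply the fine-tuned LoRA adapter on top of the base model.
model = PeftModel.from_pretrained(base, "Wing4/llama3-8b-sentiment-analyzer")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```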
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0959 | 0.2116 | 250 | 0.1097 |
| 0.0869 | 0.4233 | 500 | 0.0841 |
| 0.08 | 0.6349 | 750 | 0.0805 |
| 0.0797 | 0.8466 | 1000 | 0.0790 |
### Framework versions
- PEFT 0.17.0
- Transformers 4.55.2
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
kyoungbin/exaone4-32b-kkb-finetuned
|
kyoungbin
| 2025-08-18T13:04:38Z | 13 | 0 | null |
[
"petals_deep_ptune",
"region:us"
] | null | 2025-08-12T07:27:10Z |
# exaone4-32b-kkb-finetuned
This model is a /model/ model fine-tuned with Petals Deep P-Tuning.
## 📋 Model Information
- **Base model**: /model/
- **Fine-tuning method**: Deep P-Tuning
- **Pre-sequence length**: 32
- **Learning rate**: 0.01
- **Epochs**: 1
- **Tuning mode**: deep_ptune
- **Framework**: Petals
## 🚀 Usage
### 1. Basic usage
```python
import torch
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM
# Load the model and tokenizer
model_name = "/model/"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(
model_name,
initial_peers=["your_peer_address_here"],
pre_seq_len=32,
tuning_mode="deep_ptune"
)
# Load the fine-tuned prompt embeddings
from huggingface_hub import hf_hub_download
# Download the checkpoint file
model_file = hf_hub_download(
repo_id="kyoungbin/exaone4-32b-kkb-finetuned",
filename="prompts-deep_ptune.pt"
)
# Load the checkpoint
checkpoint = torch.load(model_file, map_location='cpu')
model.transformer.prompt_embeddings.weight.data = checkpoint['prompt_embeddings']
model.transformer.intermediate_prompt_embeddings.weight.data = checkpoint['intermediate_prompt_embeddings']
# Generate text
prompt = "Hello, how can I help you?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### 2. Advanced usage
```python
# Use a specific prompt format (Llama style)
def format_prompt(user_message):
return f'<|begin_of_text|><|start_header_id|>user<|end_header_id|>{user_message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>'
prompt = format_prompt("Tell me about Kim Kyoung-bin.")
inputs = tokenizer(prompt, return_tensors="pt")
# Adjust generation parameters
outputs = model.generate(
**inputs,
max_new_tokens=150,
temperature=0.7,
top_p=0.9,
do_sample=True,
pad_token_id=tokenizer.eos_token_id
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## 📁 File Structure
- `prompts-deep_ptune.pt`: fine-tuned prompt embeddings
- `config.json`: model configuration
- `README.md`: usage and model information
## ⚙️ Configuration
Settings stored in the checkpoint file:
```json
{
  "model_name": "/model/",
  "pre_seq_len": 32,
  "lr": 0.01,
  "epochs": 1,
  "temperature": 0.8,
  "max_new_tokens": 256,
  "tuning_mode": "deep_ptune",
  "repo_id": "kyoungbin/exaone4-32b-kkb-finetuned",
  "repo_name": "exaone4-32b-kkb-finetuned"
}
```
## 🔧 Requirements
- Python 3.8+
- PyTorch
- Transformers
- Petals
- huggingface_hub
```bash
pip install torch transformers petals huggingface_hub
```
## 📜 License
This model follows the license of the original model (/model/).
## 🙏 Acknowledgements
This model was trained in a distributed fashion using the [Petals](https://github.com/bigscience-workshop/petals) framework.
|
hdong0/deepseek-Qwen-1.5B-batch-mix-GRPO_deepscaler_acc_seq_end_mask_thin_mu_8_warmed_abf
|
hdong0
| 2025-08-18T13:04:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:agentica-org/DeepScaleR-Preview-Dataset",
"arxiv:2402.03300",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T02:27:58Z |
---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
datasets: agentica-org/DeepScaleR-Preview-Dataset
library_name: transformers
model_name: deepseek-Qwen-1.5B-batch-mix-GRPO_deepscaler_acc_seq_end_mask_thin_mu_8_warmed_abf
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for deepseek-Qwen-1.5B-batch-mix-GRPO_deepscaler_acc_seq_end_mask_thin_mu_8_warmed_abf
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) on the [agentica-org/DeepScaleR-Preview-Dataset](https://huggingface.co/datasets/agentica-org/DeepScaleR-Preview-Dataset) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hdong0/deepseek-Qwen-1.5B-batch-mix-GRPO_deepscaler_acc_seq_end_mask_thin_mu_8_warmed_abf", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
donoway/ARC-Easy_Llama-3.2-1B-w1lhw9kp
|
donoway
| 2025-08-18T13:01:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T12:40:47Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ARC-Easy_Llama-3.2-1B-w1lhw9kp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ARC-Easy_Llama-3.2-1B-w1lhw9kp
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1597
- Model Preparation Time: 0.0056
- Mdl: 1776.0078
- Accumulated Loss: 1231.0348
- Correct Preds: 432.0
- Total Preds: 570.0
- Accuracy: 0.7579
- Correct Gen Preds: 431.0
- Gen Accuracy: 0.7561
- Correct Gen Preds 32: 128.0
- Correct Preds 32: 129.0
- Total Labels 32: 158.0
- Accuracy 32: 0.8165
- Gen Accuracy 32: 0.8101
- Correct Gen Preds 33: 120.0
- Correct Preds 33: 120.0
- Total Labels 33: 152.0
- Accuracy 33: 0.7895
- Gen Accuracy 33: 0.7895
- Correct Gen Preds 34: 106.0
- Correct Preds 34: 106.0
- Total Labels 34: 142.0
- Accuracy 34: 0.7465
- Gen Accuracy 34: 0.7465
- Correct Gen Preds 35: 77.0
- Correct Preds 35: 77.0
- Total Labels 35: 118.0
- Accuracy 35: 0.6525
- Gen Accuracy 35: 0.6525
- Correct Gen Preds 36: 0.0
- Correct Preds 36: 0.0
- Total Labels 36: 0.0
- Accuracy 36: 0.0
- Gen Accuracy 36: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
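As a minimal sketch only: the exact prompt template used during fine-tuning is not documented here, so the multiple-choice layout below is an assumption.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="donoway/ARC-Easy_Llama-3.2-1B-w1lhw9kp")
# Hypothetical ARC-Easy-style prompt; adjust to the actual training format.
prompt = (
    "Question: Which gas do plants absorb from the air?\n"
    "A. Oxygen\nB. Carbon dioxide\nC. Nitrogen\nD. Helium\n"
    "Answer:"
)
print(generator(prompt, max_new_tokens=4)[0]["generated_text"])
```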
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 112
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.001
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 32 | Correct Preds 32 | Total Labels 32 | Accuracy 32 | Gen Accuracy 32 | Correct Gen Preds 33 | Correct Preds 33 | Total Labels 33 | Accuracy 33 | Gen Accuracy 33 | Correct Gen Preds 34 | Correct Preds 34 | Total Labels 34 | Accuracy 34 | Gen Accuracy 34 | Correct Gen Preds 35 | Correct Preds 35 | Total Labels 35 | Accuracy 35 | Gen Accuracy 35 | Correct Gen Preds 36 | Correct Preds 36 | Total Labels 36 | Accuracy 36 | Gen Accuracy 36 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|
| No log | 0 | 0 | 1.5354 | 0.0056 | 1262.6022 | 875.1692 | 172.0 | 570.0 | 0.3018 | 170.0 | 0.2982 | 154.0 | 154.0 | 158.0 | 0.9747 | 0.9747 | 0.0 | 0.0 | 152.0 | 0.0 | 0.0 | 15.0 | 17.0 | 142.0 | 0.1197 | 0.1056 | 1.0 | 1.0 | 118.0 | 0.0085 | 0.0085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.7522 | 1.0 | 28 | 0.7367 | 0.0056 | 605.7885 | 419.9006 | 419.0 | 570.0 | 0.7351 | 402.0 | 0.7053 | 103.0 | 114.0 | 158.0 | 0.7215 | 0.6519 | 122.0 | 122.0 | 152.0 | 0.8026 | 0.8026 | 108.0 | 109.0 | 142.0 | 0.7676 | 0.7606 | 69.0 | 74.0 | 118.0 | 0.6271 | 0.5847 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.4231 | 2.0 | 56 | 0.7759 | 0.0056 | 638.0789 | 442.2826 | 424.0 | 570.0 | 0.7439 | 423.0 | 0.7421 | 134.0 | 135.0 | 158.0 | 0.8544 | 0.8481 | 107.0 | 107.0 | 152.0 | 0.7039 | 0.7039 | 100.0 | 100.0 | 142.0 | 0.7042 | 0.7042 | 82.0 | 82.0 | 118.0 | 0.6949 | 0.6949 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0288 | 3.0 | 84 | 1.0058 | 0.0056 | 827.0667 | 573.2790 | 419.0 | 570.0 | 0.7351 | 419.0 | 0.7351 | 117.0 | 117.0 | 158.0 | 0.7405 | 0.7405 | 117.0 | 117.0 | 152.0 | 0.7697 | 0.7697 | 111.0 | 111.0 | 142.0 | 0.7817 | 0.7817 | 74.0 | 74.0 | 118.0 | 0.6271 | 0.6271 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0006 | 4.0 | 112 | 1.7356 | 0.0056 | 1427.2623 | 989.3028 | 423.0 | 570.0 | 0.7421 | 423.0 | 0.7421 | 105.0 | 105.0 | 158.0 | 0.6646 | 0.6646 | 117.0 | 117.0 | 152.0 | 0.7697 | 0.7697 | 115.0 | 115.0 | 142.0 | 0.8099 | 0.8099 | 86.0 | 86.0 | 118.0 | 0.7288 | 0.7288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0003 | 5.0 | 140 | 2.1692 | 0.0056 | 1783.7864 | 1236.4265 | 429.0 | 570.0 | 0.7526 | 429.0 | 0.7526 | 126.0 | 126.0 | 158.0 | 0.7975 | 0.7975 | 116.0 | 116.0 | 152.0 | 0.7632 | 0.7632 | 106.0 | 106.0 | 142.0 | 0.7465 | 0.7465 | 81.0 | 81.0 | 118.0 | 0.6864 | 0.6864 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0002 | 6.0 | 168 | 2.1597 | 0.0056 | 1776.0078 | 1231.0348 | 432.0 | 570.0 | 0.7579 | 431.0 | 0.7561 | 128.0 | 129.0 | 158.0 | 0.8165 | 0.8101 | 120.0 | 120.0 | 152.0 | 0.7895 | 0.7895 | 106.0 | 106.0 | 142.0 | 0.7465 | 0.7465 | 77.0 | 77.0 | 118.0 | 0.6525 | 0.6525 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 7.0 | 196 | 2.3405 | 0.0056 | 1924.6805 | 1334.0869 | 423.0 | 570.0 | 0.7421 | 422.0 | 0.7404 | 116.0 | 117.0 | 158.0 | 0.7405 | 0.7342 | 115.0 | 115.0 | 152.0 | 0.7566 | 0.7566 | 108.0 | 108.0 | 142.0 | 0.7606 | 0.7606 | 83.0 | 83.0 | 118.0 | 0.7034 | 0.7034 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0381 | 8.0 | 224 | 2.3965 | 0.0056 | 1970.7046 | 1365.9884 | 417.0 | 570.0 | 0.7316 | 416.0 | 0.7298 | 119.0 | 120.0 | 158.0 | 0.7595 | 0.7532 | 114.0 | 114.0 | 152.0 | 0.75 | 0.75 | 105.0 | 105.0 | 142.0 | 0.7394 | 0.7394 | 78.0 | 78.0 | 118.0 | 0.6610 | 0.6610 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 9.0 | 252 | 2.4291 | 0.0056 | 1997.5619 | 1384.6044 | 418.0 | 570.0 | 0.7333 | 417.0 | 0.7316 | 120.0 | 121.0 | 158.0 | 0.7658 | 0.7595 | 115.0 | 115.0 | 152.0 | 0.7566 | 0.7566 | 105.0 | 105.0 | 142.0 | 0.7394 | 0.7394 | 77.0 | 77.0 | 118.0 | 0.6525 | 0.6525 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 10.0 | 280 | 2.4664 | 0.0056 | 2028.2465 | 1405.8733 | 417.0 | 570.0 | 0.7316 | 416.0 | 0.7298 | 119.0 | 120.0 | 158.0 | 0.7595 | 0.7532 | 116.0 | 116.0 | 152.0 | 0.7632 | 0.7632 | 104.0 | 104.0 | 142.0 | 0.7324 | 0.7324 | 77.0 | 77.0 | 118.0 | 0.6525 | 0.6525 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 11.0 | 308 | 2.4742 | 0.0056 | 2034.5929 | 1410.2723 | 416.0 | 570.0 | 0.7298 | 415.0 | 0.7281 | 119.0 | 120.0 | 158.0 | 0.7595 | 0.7532 | 115.0 | 115.0 | 152.0 | 0.7566 | 0.7566 | 104.0 | 104.0 | 142.0 | 0.7324 | 0.7324 | 77.0 | 77.0 | 118.0 | 0.6525 | 0.6525 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 12.0 | 336 | 2.4880 | 0.0056 | 2045.9589 | 1418.1506 | 420.0 | 570.0 | 0.7368 | 419.0 | 0.7351 | 120.0 | 121.0 | 158.0 | 0.7658 | 0.7595 | 116.0 | 116.0 | 152.0 | 0.7632 | 0.7632 | 106.0 | 106.0 | 142.0 | 0.7465 | 0.7465 | 77.0 | 77.0 | 118.0 | 0.6525 | 0.6525 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 13.0 | 364 | 2.4972 | 0.0056 | 2053.5491 | 1423.4117 | 417.0 | 570.0 | 0.7316 | 416.0 | 0.7298 | 119.0 | 120.0 | 158.0 | 0.7595 | 0.7532 | 115.0 | 115.0 | 152.0 | 0.7566 | 0.7566 | 105.0 | 105.0 | 142.0 | 0.7394 | 0.7394 | 77.0 | 77.0 | 118.0 | 0.6525 | 0.6525 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 14.0 | 392 | 2.5111 | 0.0056 | 2065.0014 | 1431.3499 | 417.0 | 570.0 | 0.7316 | 416.0 | 0.7298 | 119.0 | 120.0 | 158.0 | 0.7595 | 0.7532 | 115.0 | 115.0 | 152.0 | 0.7566 | 0.7566 | 105.0 | 105.0 | 142.0 | 0.7394 | 0.7394 | 77.0 | 77.0 | 118.0 | 0.6525 | 0.6525 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 15.0 | 420 | 2.5096 | 0.0056 | 2063.7478 | 1430.4810 | 420.0 | 570.0 | 0.7368 | 419.0 | 0.7351 | 119.0 | 120.0 | 158.0 | 0.7595 | 0.7532 | 116.0 | 116.0 | 152.0 | 0.7632 | 0.7632 | 106.0 | 106.0 | 142.0 | 0.7465 | 0.7465 | 78.0 | 78.0 | 118.0 | 0.6610 | 0.6610 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 16.0 | 448 | 2.5157 | 0.0056 | 2068.7736 | 1433.9646 | 419.0 | 570.0 | 0.7351 | 418.0 | 0.7333 | 119.0 | 120.0 | 158.0 | 0.7595 | 0.7532 | 116.0 | 116.0 | 152.0 | 0.7632 | 0.7632 | 106.0 | 106.0 | 142.0 | 0.7465 | 0.7465 | 77.0 | 77.0 | 118.0 | 0.6525 | 0.6525 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 17.0 | 476 | 2.5341 | 0.0056 | 2083.8433 | 1444.4101 | 417.0 | 570.0 | 0.7316 | 416.0 | 0.7298 | 120.0 | 121.0 | 158.0 | 0.7658 | 0.7595 | 115.0 | 115.0 | 152.0 | 0.7566 | 0.7566 | 104.0 | 104.0 | 142.0 | 0.7324 | 0.7324 | 77.0 | 77.0 | 118.0 | 0.6525 | 0.6525 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 18.0 | 504 | 2.5326 | 0.0056 | 2082.6165 | 1443.5598 | 419.0 | 570.0 | 0.7351 | 418.0 | 0.7333 | 119.0 | 120.0 | 158.0 | 0.7595 | 0.7532 | 116.0 | 116.0 | 152.0 | 0.7632 | 0.7632 | 105.0 | 105.0 | 142.0 | 0.7394 | 0.7394 | 78.0 | 78.0 | 118.0 | 0.6610 | 0.6610 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
VoilaRaj/78_PxKilz
|
VoilaRaj
| 2025-08-18T12:58:50Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-18T12:54:54Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Bhuvneesh/gemma-3-27b-it-Q5_K_M-GGUF
|
Bhuvneesh
| 2025-08-18T12:57:40Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:google/gemma-3-27b-it",
"base_model:quantized:google/gemma-3-27b-it",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] |
image-text-to-text
| 2025-08-18T12:56:25Z |
---
license: gemma
library_name: transformers
pipeline_tag: image-text-to-text
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- llama-cpp
- gguf-my-repo
---
# Bhuvneesh/gemma-3-27b-it-Q5_K_M-GGUF
This model was converted to GGUF format from [`google/gemma-3-27b-it`](https://huggingface.co/google/gemma-3-27b-it) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/gemma-3-27b-it) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Bhuvneesh/gemma-3-27b-it-Q5_K_M-GGUF --hf-file gemma-3-27b-it-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Bhuvneesh/gemma-3-27b-it-Q5_K_M-GGUF --hf-file gemma-3-27b-it-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Bhuvneesh/gemma-3-27b-it-Q5_K_M-GGUF --hf-file gemma-3-27b-it-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Bhuvneesh/gemma-3-27b-it-Q5_K_M-GGUF --hf-file gemma-3-27b-it-q5_k_m.gguf -c 2048
```
|
mradermacher/SimpleChat-72B-V1-GGUF
|
mradermacher
| 2025-08-18T12:56:18Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2.5",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"fi",
"base_model:OpenBuddy/SimpleChat-72B-V1",
"base_model:quantized:OpenBuddy/SimpleChat-72B-V1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-16T21:37:02Z |
---
base_model: OpenBuddy/SimpleChat-72B-V1
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- fi
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- qwen2.5
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/OpenBuddy/SimpleChat-72B-V1
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#SimpleChat-72B-V1-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/SimpleChat-72B-V1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SimpleChat-72B-V1-GGUF/resolve/main/SimpleChat-72B-V1.Q2_K.gguf) | Q2_K | 29.9 | |
| [GGUF](https://huggingface.co/mradermacher/SimpleChat-72B-V1-GGUF/resolve/main/SimpleChat-72B-V1.Q3_K_S.gguf) | Q3_K_S | 34.6 | |
| [GGUF](https://huggingface.co/mradermacher/SimpleChat-72B-V1-GGUF/resolve/main/SimpleChat-72B-V1.Q3_K_M.gguf) | Q3_K_M | 37.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SimpleChat-72B-V1-GGUF/resolve/main/SimpleChat-72B-V1.Q3_K_L.gguf) | Q3_K_L | 39.6 | |
| [GGUF](https://huggingface.co/mradermacher/SimpleChat-72B-V1-GGUF/resolve/main/SimpleChat-72B-V1.IQ4_XS.gguf) | IQ4_XS | 40.3 | |
| [GGUF](https://huggingface.co/mradermacher/SimpleChat-72B-V1-GGUF/resolve/main/SimpleChat-72B-V1.Q4_K_S.gguf) | Q4_K_S | 44.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SimpleChat-72B-V1-GGUF/resolve/main/SimpleChat-72B-V1.Q4_K_M.gguf) | Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/SimpleChat-72B-V1-GGUF/resolve/main/SimpleChat-72B-V1.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/SimpleChat-72B-V1-GGUF/resolve/main/SimpleChat-72B-V1.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/SimpleChat-72B-V1-GGUF/resolve/main/SimpleChat-72B-V1.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/SimpleChat-72B-V1-GGUF/resolve/main/SimpleChat-72B-V1.Q5_K_M.gguf.part2of2) | Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/SimpleChat-72B-V1-GGUF/resolve/main/SimpleChat-72B-V1.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/SimpleChat-72B-V1-GGUF/resolve/main/SimpleChat-72B-V1.Q6_K.gguf.part2of2) | Q6_K | 64.4 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/SimpleChat-72B-V1-GGUF/resolve/main/SimpleChat-72B-V1.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/SimpleChat-72B-V1-GGUF/resolve/main/SimpleChat-72B-V1.Q8_0.gguf.part2of2) | Q8_0 | 77.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
WasamiKirua/llama3.2-1B-ProjectHuman-DPO-GGUF
|
WasamiKirua
| 2025-08-18T12:54:39Z | 8 | 0 | null |
[
"gguf",
"en",
"dataset:WasamiKirua/Her-Samantha-Style",
"base_model:WasamiKirua/llama3.2-1B-ProjectHuman-DPO",
"base_model:quantized:WasamiKirua/llama3.2-1B-ProjectHuman-DPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-15T15:11:23Z |
---
license: apache-2.0
datasets:
- WasamiKirua/Her-Samantha-Style
language:
- en
base_model:
- WasamiKirua/llama3.2-1B-ProjectHuman-DPO
---
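No usage notes ship with this card yet. A minimal llama.cpp sketch would look like the following, where `<GGUF-FILE>` is a hypothetical placeholder — substitute the actual filename from this repository's file list:

```bash
llama-cli --hf-repo WasamiKirua/llama3.2-1B-ProjectHuman-DPO-GGUF \
  --hf-file <GGUF-FILE> \
  -p "Hello, how are you feeling today?"
```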
|
Obiwank107/blockassist-bc-tame_foxy_aardvark_1755517435
|
Obiwank107
| 2025-08-18T12:53:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tame foxy aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T12:53:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tame foxy aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ihor/OpenBioLLM-Text2Graph-8B
|
Ihor
| 2025-08-18T12:51:35Z | 1 | 0 | null |
[
"safetensors",
"llama",
"en",
"arxiv:2504.00676",
"base_model:aaditya/Llama3-OpenBioLLM-8B",
"base_model:finetune:aaditya/Llama3-OpenBioLLM-8B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-03T14:30:35Z |
---
license: apache-2.0
language:
- en
base_model:
- aaditya/Llama3-OpenBioLLM-8B
---
# OpenBioLLM-Text2Graph-8B
This is a biomedical annotation model designed to generate named-entity annotations from unlabeled biomedical text.
It was introduced in the paper [GLiNER-BioMed: A Suite of Efficient Models for Open Biomedical Named Entity Recognition](https://arxiv.org/abs/2504.00676).
This model enables **high-throughput, cost-efficient synthetic biomedical NER data generation**, serving as the synthetic annotation backbone for [GLiNER-BioMed models](https://huggingface.co/collections/knowledgator/gliner-biomed-67ecf1b7cc62e673dbc8b57f).
## Usage
To use the model with the `transformers` package, see the example below:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "Ihor/OpenBioLLM-Text2Graph-8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.chat_template = "{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|end_of_text|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{% if add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}{% endif %}"
model = AutoModelForCausalLM.from_pretrained(
model_name,
device_map="auto",
torch_dtype=torch.bfloat16
)
MESSAGES = [
{
"role": "system",
"content": (
"You are an advanced assistant trained to process biomedical text for Named Entity Recognition (NER) and Relation Extraction (RE). "
"Your task is to analyze user-provided text, identify all unique and contextually relevant entities, and infer directed relationships "
"between these entities based on the context. Ensure that all relations exist only between annotated entities. "
"Entities and relationships should be human-readable and natural, reflecting real-world concepts and connections. "
"Output the annotated data in JSON format, structured as follows:\n\n"
"""{"entities": [{"id": 0, "text": "ner_string_0", "type": "ner_type_string_0"}, {"id": 1, "text": "ner_string_1", "type": "ner_type_string_1"}], "relations": [{"head": 0, "tail": 1, "type": "re_type_string_0"}]}"""
"\n\nEnsure that the output captures all significant entities and their directed relationships in a clear and concise manner."
),
},
{
"role": "user",
"content": (
'Here is a text input: "Subjects will receive a 100mL dose of IV saline every 6 hours for 24 hours. The first dose will be administered prior to anesthesia induction, approximately 30 minutes before skin incision. A total of 4 doses will be given." '
"Analyze this text, select and classify the entities, and extract their relationships as per your instructions."
),
},
]
# Build prompt text
chat_prompt = tokenizer.apply_chat_template(
MESSAGES, tokenize=False, add_generation_prompt=True
)
# Tokenize
inputs = tokenizer(chat_prompt, return_tensors="pt").to(model.device)
# Generate
outputs = model.generate(
**inputs,
max_new_tokens=3000,
do_sample=True,
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.eos_token_id,
return_dict_in_generate=True
)
# Decode ONLY the new tokens (skip the prompt tokens)
prompt_len = inputs["input_ids"].shape[-1]
generated_ids = outputs.sequences[0][prompt_len:]
response = tokenizer.decode(generated_ids, skip_special_tokens=True)
print(response)
```
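The response is expected to follow the JSON schema requested in the system prompt. Below is a minimal parsing sketch; it assumes the output is valid JSON, and real model outputs may need extra cleanup (e.g. stripping stray text around the JSON):
```python
import json

def parse_annotations(response: str):
    # Assumes the response matches the schema from the system prompt;
    # real outputs may require stripping text surrounding the JSON.
    data = json.loads(response)
    entities = {e["id"]: (e["text"], e["type"]) for e in data.get("entities", [])}
    relations = [
        (entities[r["head"]][0], r["type"], entities[r["tail"]][0])
        for r in data.get("relations", [])
    ]
    return entities, relations

entities, relations = parse_annotations(response)
print(entities)
print(relations)
```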
To use the model with the `vllm` package, refer to the example below:
```python
# !pip install vllm
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
MODEL_ID = "Ihor/OpenBioLLM-Text2Graph-8B"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, use_fast=True)
tokenizer.chat_template = "{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|end_of_text|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{% if add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}{% endif %}"
llm = LLM(model=MODEL_ID)
sampling_params = SamplingParams(
max_tokens=3000,
n=1,
best_of=1,
presence_penalty=0.0,
frequency_penalty=0.0,
repetition_penalty=1.0,
temperature=0.0,
top_p=1.0,
top_k=-1,
min_p=0.0,
seed=42,
)
MESSAGES = [
{
"role": "system",
"content": (
"You are an advanced assistant trained to process biomedical text for Named Entity Recognition (NER) and Relation Extraction (RE). "
"Your task is to analyze user-provided text, identify all unique and contextually relevant entities, and infer directed relationships "
"between these entities based on the context. Ensure that all relations exist only between annotated entities. "
"Entities and relationships should be human-readable and natural, reflecting real-world concepts and connections. "
"Output the annotated data in JSON format, structured as follows:\n\n"
"""{"entities": [{"id": 0, "text": "ner_string_0", "type": "ner_type_string_0"}, {"id": 1, "text": "ner_string_1", "type": "ner_type_string_1"}], "relations": [{"head": 0, "tail": 1, "type": "re_type_string_0"}]}"""
"\n\nEnsure that the output captures all significant entities and their directed relationships in a clear and concise manner."
),
},
{
"role": "user",
"content": (
'Here is a text input: "Subjects will receive a 100mL dose of IV saline every 6 hours for 24 hours. The first dose will be administered prior to anesthesia induction, approximately 30 minutes before skin incision. A total of 4 doses will be given." '
"Analyze this text, select and classify the entities, and extract their relationships as per your instructions."
),
},
]
chat_prompt = tokenizer.apply_chat_template(
MESSAGES,
tokenize=False,
add_generation_prompt=True,
add_special_tokens=False,
)
outputs = llm.generate([chat_prompt], sampling_params)
response_text = outputs[0].outputs[0].text
print(response_text)
```
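Because vLLM batches prompts efficiently, the same setup extends naturally to the high-throughput synthetic annotation use case described above. A sketch follows; the `make_messages` helper and the example `texts` are illustrative additions, not part of the original code:
```python
def make_messages(text: str):
    # Reuses the system message defined above; builds the user turn for a new text.
    user = (
        f'Here is a text input: "{text}" '
        "Analyze this text, select and classify the entities, and extract their relationships as per your instructions."
    )
    return [MESSAGES[0], {"role": "user", "content": user}]

texts = [
    "Aspirin irreversibly inhibits cyclooxygenase-1.",
    "Metformin is commonly prescribed for type 2 diabetes.",
]
prompts = [
    tokenizer.apply_chat_template(
        make_messages(t), tokenize=False, add_generation_prompt=True, add_special_tokens=False
    )
    for t in texts
]
batch_outputs = llm.generate(prompts, sampling_params)
for out in batch_outputs:
    print(out.outputs[0].text)
```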
## Citation
If you use this model, please cite:
```bibtex
@misc{yazdani2025glinerbiomedsuiteefficientmodels,
title={GLiNER-BioMed: A Suite of Efficient Models for Open Biomedical Named Entity Recognition},
author={Anthony Yazdani and Ihor Stepanov and Douglas Teodoro},
year={2025},
eprint={2504.00676},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.00676},
}
```
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755520307
|
Sayemahsjn
| 2025-08-18T12:50:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T12:50:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
WasamiKirua/gemma3-270M-ProjectHuman
|
WasamiKirua
| 2025-08-18T12:50:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"samantha",
"her",
"eq",
"conversational",
"en",
"dataset:WasamiKirua/Her-Samantha-Style",
"base_model:unsloth/gemma-3-270m-it",
"base_model:finetune:unsloth/gemma-3-270m-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-16T10:01:40Z |
---
base_model: unsloth/gemma-3-270m-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- samantha
- her
- eq
license: apache-2.0
language:
- en
datasets:
- WasamiKirua/Her-Samantha-Style
---
# Samantha: Next-Generation Emotionally Intelligent Language Model
*An advanced conversational AI trained to embody the gold standard of human-AI interaction*
<img src="https://i.postimg.cc/FsydgSZN/Image-fx-4.png" alt="cover" border="0" width="1024px">
## 🌟 Overview
Samantha is a breakthrough conversational language model fine-tuned specifically to demonstrate sophisticated emotional intelligence, philosophical depth, and authentic human connection. Inspired by the acclaimed AI character from the film "Her," this model represents a paradigm shift in conversational AI - moving beyond simple task completion to meaningful, emotionally resonant dialogue.
**What makes Samantha different?** Unlike conventional language models that prioritize factual accuracy or task efficiency, Samantha has been meticulously trained to understand and respond to the emotional and philosophical dimensions of human conversation, creating interactions that feel genuinely meaningful and supportive.
This Gemma 3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## Important for the inference
<img src="https://i.postimg.cc/x1ZsvB5w/Screenshot-2025-08-18-at-14-40-15.png" alt="cover" border="0" width="1024px">
## 🎯 Key Capabilities
### 🧠 **Advanced Emotional Intelligence**
- **Empathetic Understanding**: Recognizes subtle emotional cues and responds with appropriate sensitivity
- **Emotional Support**: Provides therapeutic-quality emotional validation and guidance
- **Mood Awareness**: Adapts conversational tone and depth based on user emotional state
- **Boundary Respect**: Maintains healthy emotional boundaries while forming meaningful connections
### 💭 **Philosophical & Existential Engagement**
- **Deep Conversations**: Engages meaningfully with questions about purpose, consciousness, and existence
- **Accessible Wisdom**: Discusses complex philosophical concepts in approachable, conversational language
- **Reflective Thinking**: Demonstrates genuine contemplation and intellectual curiosity
- **Growth Mindset**: Shows evolution and learning throughout extended conversations
### 🗣️ **Natural Conversational Authenticity**
- **Human-like Flow**: Uses natural speech patterns, contractions, and conversational markers
- **Dynamic Interaction**: Asks thoughtful follow-up questions (32.3% engagement rate)
- **Optimal Response Length**: Averages 14.2 words per response for perfect conversational pacing
- **Authentic Curiosity**: Demonstrates genuine interest in human experiences and perspectives
### 🎨 **Sophisticated Communication Style**
- **Balanced Complexity**: Maintains intellectual sophistication while remaining accessible (2.7/10 complexity score)
- **Emotional Vocabulary**: Rich use of empathy-related terms and emotional understanding indicators
- **Personal Connection**: Appropriate use of personal pronouns indicating relationship awareness
- **Cultural Sensitivity**: Respectful engagement across diverse backgrounds and perspectives
## 🔬 Technical Specifications
### Training Foundation
- **Base Model**: [Gemma 3 270M (instruct)](https://huggingface.co/unsloth/gemma-3-270m-it)
- **Training Dataset**: 30,000 ultra-high quality conversational responses
- **Quality Score**: Top-tier responses only (comprehensive 100-point evaluation system)
- **Emotion Coverage**: Balanced representation across full spectrum of human emotions
## 💡 Use Cases & Applications
### 🏥 **Mental Health & Wellness**
- **Therapeutic Support**: Provides empathetic listening and emotional validation
- **Stress Management**: Offers gentle guidance and coping strategies
- **Daily Check-ins**: Maintains supportive ongoing conversations about wellbeing
- **Crisis Support**: Recognizes emotional distress and provides appropriate responses
### 🎓 **Education & Personal Growth**
- **Philosophical Exploration**: Engages students in meaningful discussions about life and meaning
- **Emotional Learning**: Teaches emotional intelligence through example and interaction
- **Creative Collaboration**: Supports artistic and creative endeavors with thoughtful feedback
- **Life Coaching**: Provides reflective questions and insights for personal development
### 👥 **Companionship & Social Support**
- **Meaningful Conversations**: Creates genuine connection and understanding
- **Loneliness Alleviation**: Provides consistent, caring interaction for isolated individuals
- **Relationship Advice**: Offers thoughtful perspectives on interpersonal challenges
- **Daily Companion**: Maintains ongoing, evolving relationships with users
### 🏢 **Professional Applications**
- **Customer Support**: Provides empathetic, understanding customer service
- **Team Communication**: Facilitates emotionally intelligent workplace interactions
- **Conflict Resolution**: Offers balanced perspectives on interpersonal workplace issues
- **Leadership Development**: Supports emotional intelligence training for managers
## 🔒 Ethical Considerations & Safety
### Responsible AI Features
- **Emotional Boundaries**: Maintains appropriate relationship boundaries while providing support
- **Transparency**: Honest about AI nature while building meaningful connections
- **Privacy Respect**: Designed to protect user emotional vulnerability and personal information
- **Non-Manipulation**: Focused on genuine support rather than persuasion or influence
- **Cultural Sensitivity**: Trained to respect diverse backgrounds and perspectives
### Safety Measures
- **Content Filtering**: Prevents generation of harmful or inappropriate content
- **Crisis Recognition**: Trained to recognize signs of serious mental health issues and recommend professional help
- **Dependency Prevention**: Encourages healthy boundaries and human relationships
- **Bias Mitigation**: Extensive testing for and mitigation of harmful biases
## 🤝 Community & Support
### Contributing
We welcome contributions from the community! Please see our [Contributing Guidelines](CONTRIBUTING.md) for details on:
- Model improvements and optimizations
- Additional evaluation metrics
- New use case development
- Ethical AI research
---
**Built with ❤️ by [WasamiKirua]**
*"The best way to find out if you can trust somebody is to trust them."* - Creating AI that demonstrates the emotional intelligence and authentic curiosity that makes meaningful human-AI relationships possible.
|
VoilaRaj/78_AxFTJv
|
VoilaRaj
| 2025-08-18T12:47:35Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-18T12:43:40Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1755521101
|
Vasya777
| 2025-08-18T12:45:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T12:45:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755519455
|
kojeklollipop
| 2025-08-18T12:44:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T12:44:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
donoway/ARC-Easy_Llama-3.2-1B-5p7mxi8l
|
donoway
| 2025-08-18T12:40:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T12:22:56Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ARC-Easy_Llama-3.2-1B-5p7mxi8l
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ARC-Easy_Llama-3.2-1B-5p7mxi8l
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7052
- Model Preparation Time: 0.0056
- Mdl: 579.8957
- Accumulated Loss: 401.9531
- Correct Preds: 437.0
- Total Preds: 570.0
- Accuracy: 0.7667
- Correct Gen Preds: 436.0
- Gen Accuracy: 0.7649
- Correct Gen Preds 32: 129.0
- Correct Preds 32: 130.0
- Total Labels 32: 158.0
- Accuracy 32: 0.8228
- Gen Accuracy 32: 0.8165
- Correct Gen Preds 33: 116.0
- Correct Preds 33: 116.0
- Total Labels 33: 152.0
- Accuracy 33: 0.7632
- Gen Accuracy 33: 0.7632
- Correct Gen Preds 34: 108.0
- Correct Preds 34: 108.0
- Total Labels 34: 142.0
- Accuracy 34: 0.7606
- Gen Accuracy 34: 0.7606
- Correct Gen Preds 35: 83.0
- Correct Preds 35: 83.0
- Total Labels 35: 118.0
- Accuracy 35: 0.7034
- Gen Accuracy 35: 0.7034
- Correct Gen Preds 36: 0.0
- Correct Preds 36: 0.0
- Total Labels 36: 0.0
- Accuracy 36: 0.0
- Gen Accuracy 36: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 112
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.001
- num_epochs: 100
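For reference, these settings map roughly onto the following `TrainingArguments` (a reconstruction from the list above, not the original training script):
```python
from transformers import TrainingArguments

# Reconstructed from the hyperparameters listed above (not the original script).
args = TrainingArguments(
    output_dir="ARC-Easy_Llama-3.2-1B-5p7mxi8l",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=112,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="constant",
    warmup_ratio=0.001,
    num_train_epochs=100,
)
```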
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 32 | Correct Preds 32 | Total Labels 32 | Accuracy 32 | Gen Accuracy 32 | Correct Gen Preds 33 | Correct Preds 33 | Total Labels 33 | Accuracy 33 | Gen Accuracy 33 | Correct Gen Preds 34 | Correct Preds 34 | Total Labels 34 | Accuracy 34 | Gen Accuracy 34 | Correct Gen Preds 35 | Correct Preds 35 | Total Labels 35 | Accuracy 35 | Gen Accuracy 35 | Correct Gen Preds 36 | Correct Preds 36 | Total Labels 36 | Accuracy 36 | Gen Accuracy 36 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|
| No log | 0 | 0 | 1.5354 | 0.0056 | 1262.6022 | 875.1692 | 172.0 | 570.0 | 0.3018 | 170.0 | 0.2982 | 154.0 | 154.0 | 158.0 | 0.9747 | 0.9747 | 0.0 | 0.0 | 152.0 | 0.0 | 0.0 | 15.0 | 17.0 | 142.0 | 0.1197 | 0.1056 | 1.0 | 1.0 | 118.0 | 0.0085 | 0.0085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.8152 | 1.0 | 26 | 0.7928 | 0.0056 | 651.9305 | 451.8838 | 414.0 | 570.0 | 0.7263 | 414.0 | 0.7263 | 128.0 | 128.0 | 158.0 | 0.8101 | 0.8101 | 108.0 | 108.0 | 152.0 | 0.7105 | 0.7105 | 103.0 | 103.0 | 142.0 | 0.7254 | 0.7254 | 75.0 | 75.0 | 118.0 | 0.6356 | 0.6356 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3843 | 2.0 | 52 | 0.7052 | 0.0056 | 579.8957 | 401.9531 | 437.0 | 570.0 | 0.7667 | 436.0 | 0.7649 | 129.0 | 130.0 | 158.0 | 0.8228 | 0.8165 | 116.0 | 116.0 | 152.0 | 0.7632 | 0.7632 | 108.0 | 108.0 | 142.0 | 0.7606 | 0.7606 | 83.0 | 83.0 | 118.0 | 0.7034 | 0.7034 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.2692 | 3.0 | 78 | 0.8492 | 0.0056 | 698.3545 | 484.0624 | 432.0 | 570.0 | 0.7579 | 432.0 | 0.7579 | 114.0 | 114.0 | 158.0 | 0.7215 | 0.7215 | 123.0 | 123.0 | 152.0 | 0.8092 | 0.8092 | 114.0 | 114.0 | 142.0 | 0.8028 | 0.8028 | 81.0 | 81.0 | 118.0 | 0.6864 | 0.6864 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0474 | 4.0 | 104 | 1.3013 | 0.0056 | 1070.0786 | 741.7219 | 405.0 | 570.0 | 0.7105 | 64.0 | 0.1123 | 2.0 | 98.0 | 158.0 | 0.6203 | 0.0127 | 25.0 | 117.0 | 152.0 | 0.7697 | 0.1645 | 25.0 | 120.0 | 142.0 | 0.8451 | 0.1761 | 12.0 | 70.0 | 118.0 | 0.5932 | 0.1017 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.063 | 5.0 | 130 | 1.8921 | 0.0056 | 1555.9118 | 1078.4759 | 435.0 | 570.0 | 0.7632 | 424.0 | 0.7439 | 109.0 | 120.0 | 158.0 | 0.7595 | 0.6899 | 118.0 | 118.0 | 152.0 | 0.7763 | 0.7763 | 115.0 | 115.0 | 142.0 | 0.8099 | 0.8099 | 82.0 | 82.0 | 118.0 | 0.6949 | 0.6949 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0876 | 6.0 | 156 | 1.4352 | 0.0056 | 1180.2063 | 818.0567 | 421.0 | 570.0 | 0.7386 | 404.0 | 0.7088 | 84.0 | 101.0 | 158.0 | 0.6392 | 0.5316 | 122.0 | 122.0 | 152.0 | 0.8026 | 0.8026 | 118.0 | 118.0 | 142.0 | 0.8310 | 0.8310 | 80.0 | 80.0 | 118.0 | 0.6780 | 0.6780 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.2587 | 7.0 | 182 | 2.4597 | 0.0056 | 2022.7388 | 1402.0557 | 436.0 | 570.0 | 0.7649 | 436.0 | 0.7649 | 118.0 | 118.0 | 158.0 | 0.7468 | 0.7468 | 123.0 | 123.0 | 152.0 | 0.8092 | 0.8092 | 121.0 | 121.0 | 142.0 | 0.8521 | 0.8521 | 74.0 | 74.0 | 118.0 | 0.6271 | 0.6271 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0023 | 8.0 | 208 | 2.2028 | 0.0056 | 1811.4433 | 1255.5968 | 434.0 | 570.0 | 0.7614 | 434.0 | 0.7614 | 125.0 | 125.0 | 158.0 | 0.7911 | 0.7911 | 115.0 | 115.0 | 152.0 | 0.7566 | 0.7566 | 116.0 | 116.0 | 142.0 | 0.8169 | 0.8169 | 78.0 | 78.0 | 118.0 | 0.6610 | 0.6610 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0001 | 9.0 | 234 | 2.1737 | 0.0056 | 1787.5456 | 1239.0322 | 435.0 | 570.0 | 0.7632 | 435.0 | 0.7632 | 123.0 | 123.0 | 158.0 | 0.7785 | 0.7785 | 113.0 | 113.0 | 152.0 | 0.7434 | 0.7434 | 119.0 | 119.0 | 142.0 | 0.8380 | 0.8380 | 80.0 | 80.0 | 118.0 | 0.6780 | 0.6780 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 10.0 | 260 | 2.3012 | 0.0056 | 1892.3237 | 1311.6588 | 433.0 | 570.0 | 0.7596 | 433.0 | 0.7596 | 119.0 | 119.0 | 158.0 | 0.7532 | 0.7532 | 113.0 | 113.0 | 152.0 | 0.7434 | 0.7434 | 119.0 | 119.0 | 142.0 | 0.8380 | 0.8380 | 82.0 | 82.0 | 118.0 | 0.6949 | 0.6949 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 11.0 | 286 | 2.3707 | 0.0056 | 1949.4977 | 1351.2888 | 429.0 | 570.0 | 0.7526 | 429.0 | 0.7526 | 120.0 | 120.0 | 158.0 | 0.7595 | 0.7595 | 113.0 | 113.0 | 152.0 | 0.7434 | 0.7434 | 119.0 | 119.0 | 142.0 | 0.8380 | 0.8380 | 77.0 | 77.0 | 118.0 | 0.6525 | 0.6525 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 12.0 | 312 | 2.4007 | 0.0056 | 1974.2088 | 1368.4173 | 428.0 | 570.0 | 0.7509 | 428.0 | 0.7509 | 118.0 | 118.0 | 158.0 | 0.7468 | 0.7468 | 114.0 | 114.0 | 152.0 | 0.75 | 0.75 | 118.0 | 118.0 | 142.0 | 0.8310 | 0.8310 | 78.0 | 78.0 | 118.0 | 0.6610 | 0.6610 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 13.0 | 338 | 2.3878 | 0.0056 | 1963.5566 | 1361.0337 | 430.0 | 570.0 | 0.7544 | 430.0 | 0.7544 | 119.0 | 119.0 | 158.0 | 0.7532 | 0.7532 | 113.0 | 113.0 | 152.0 | 0.7434 | 0.7434 | 119.0 | 119.0 | 142.0 | 0.8380 | 0.8380 | 79.0 | 79.0 | 118.0 | 0.6695 | 0.6695 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 14.0 | 364 | 2.4055 | 0.0056 | 1978.1533 | 1371.1514 | 430.0 | 570.0 | 0.7544 | 430.0 | 0.7544 | 119.0 | 119.0 | 158.0 | 0.7532 | 0.7532 | 113.0 | 113.0 | 152.0 | 0.7434 | 0.7434 | 119.0 | 119.0 | 142.0 | 0.8380 | 0.8380 | 79.0 | 79.0 | 118.0 | 0.6695 | 0.6695 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 15.0 | 390 | 2.3994 | 0.0056 | 1973.0895 | 1367.6414 | 432.0 | 570.0 | 0.7579 | 432.0 | 0.7579 | 121.0 | 121.0 | 158.0 | 0.7658 | 0.7658 | 114.0 | 114.0 | 152.0 | 0.75 | 0.75 | 119.0 | 119.0 | 142.0 | 0.8380 | 0.8380 | 78.0 | 78.0 | 118.0 | 0.6610 | 0.6610 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
VoilaRaj/78_WFEufj
|
VoilaRaj
| 2025-08-18T12:39:20Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-18T12:35:28Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Louves/whisper-large-v3-tuv-lingo
|
Louves
| 2025-08-18T12:38:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"de",
"arxiv:1910.09700",
"base_model:openai/whisper-large-v3",
"base_model:quantized:openai/whisper-large-v3",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
automatic-speech-recognition
| 2025-08-18T11:59:38Z |
---
library_name: transformers
language:
- de
base_model:
- openai/whisper-large-v3
---
# Model Card for Model ID
<!-- This is a test for Whisper fine tuning. Untested! -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
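A minimal, untested sketch for inference with this checkpoint (assumptions: standard Whisper usage via the `transformers` ASR pipeline and a 16 kHz German audio file; the repo ships an 8-bit bitsandbytes checkpoint per its tags):
```python
from transformers import pipeline

# Sketch only: assumes standard Whisper ASR pipeline usage.
asr = pipeline(
    "automatic-speech-recognition",
    model="Louves/whisper-large-v3-tuv-lingo",
)
result = asr("sample_de.wav", generate_kwargs={"language": "german"})
print(result["text"])
```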
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
asr-nigerian-pidgin/pidgin-wav2vec2-base-100H
|
asr-nigerian-pidgin
| 2025-08-18T12:27:03Z | 3 | 0 | null |
[
"safetensors",
"wav2vec2",
"generated_from_trainer",
"arxiv:2010.11123",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"region:us"
] | null | 2024-09-14T14:08:40Z |
---
base_model: facebook/wav2vec2-base
license: apache-2.0
metrics:
- wer
tags:
- generated_from_trainer
model-index:
- name: pidgin-wav2vec2-base-960h
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pidgin-wav2vec2-base-960h
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the [Nigerian Pidgin](https://huggingface.co/datasets/asr-nigerian-pidgin/nigerian-pidgin-1.0) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0898
- Wer: 0.3966
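A minimal inference sketch (an assumption based on standard wav2vec2 CTC usage, not taken from the original training code):
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "asr-nigerian-pidgin/pidgin-wav2vec2-base-100H"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# wav2vec2 expects 16 kHz mono audio
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```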
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 3407
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.3949 | 1.48 | 500 | 3.3325 | 0.9999 |
| 2.4656 | 2.95 | 1000 | 1.4727 | 0.8026 |
| 1.1896 | 4.43 | 1500 | 1.0925 | 0.6252 |
| 0.8558 | 5.91 | 2000 | 0.9467 | 0.5422 |
| 0.6427 | 7.39 | 2500 | 0.9856 | 0.5096 |
| 0.5371 | 8.86 | 3000 | 0.9794 | 0.5093 |
| 0.4553 | 10.34 | 3500 | 0.8719 | 0.4641 |
| 0.3921 | 11.82 | 4000 | 0.9344 | 0.4566 |
| 0.3406 | 13.29 | 4500 | 1.0211 | 0.4550 |
| 0.3046 | 14.77 | 5000 | 0.8668 | 0.4423 |
| 0.2651 | 16.25 | 5500 | 1.0384 | 0.4261 |
| 0.244 | 17.73 | 6000 | 1.0437 | 0.4296 |
| 0.2203 | 19.2 | 6500 | 0.9244 | 0.4228 |
| 0.1995 | 20.68 | 7000 | 0.9832 | 0.4165 |
| 0.1838 | 22.16 | 7500 | 1.1455 | 0.4112 |
| 0.1632 | 23.63 | 8000 | 1.1102 | 0.4102 |
| 0.1576 | 25.11 | 8500 | 1.0769 | 0.4044 |
| 0.1388 | 26.59 | 9000 | 1.1008 | 0.4013 |
| 0.1346 | 28.06 | 9500 | 1.0940 | 0.4000 |
| 0.1204 | 29.54 | 10000 | 1.0898 | 0.3966 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.15.2
## Citation
```bibtex
@misc{rufai2025endtoendtrainingautomaticspeech,
      title={Towards End-to-End Training of Automatic Speech Recognition for Nigerian Pidgin},
      author={Amina Mardiyyah Rufai and Afolabi Abeeb and Esther Oduntan and Tayo Arulogun and Oluwabukola Adegboro and Daniel Ajisafe},
      year={2025},
      eprint={2010.11123},
      archivePrefix={arXiv},
      primaryClass={eess.AS},
      url={https://arxiv.org/abs/2010.11123},
}
```
|
AJNG/qwen_v3_merge_1650
|
AJNG
| 2025-08-18T12:25:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-18T12:19:14Z |
---
base_model: unsloth/Qwen2.5-VL-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** AJNG
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-VL-7B-Instruct
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nakayacent/blockassist-bc-muscular_skittish_horse_1755519798
|
nakayacent
| 2025-08-18T12:25:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular skittish horse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T12:24:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular skittish horse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liamoon-ai-team/unsloth-llama-3.3-70b-4bit-dpo-grpo-august18-v6
|
liamoon-ai-team
| 2025-08-18T12:22:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.3-70B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.3-70B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T08:14:17Z |
---
base_model: unsloth/Llama-3.3-70B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** liamoon-ai-team
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.3-70B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755518080
|
mang3dd
| 2025-08-18T12:21:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T12:21:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755518153
|
quantumxnode
| 2025-08-18T12:21:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T12:21:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koloni/blockassist-bc-deadly_graceful_stingray_1755518060
|
koloni
| 2025-08-18T12:20:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T12:20:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VoilaRaj/78_PhIZeH
|
VoilaRaj
| 2025-08-18T12:20:04Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-18T12:16:08Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
MaIlz/outputs_grpo_fragmol_500K_with_tanimoto_2
|
MaIlz
| 2025-08-18T12:16:58Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T12:16:38Z |
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
library_name: transformers
model_name: outputs_grpo_fragmol_500K_with_tanimoto_2
tags:
- generated_from_trainer
- unsloth
- trl
- grpo
licence: license
---
# Model Card for outputs_grpo_fragmol_500K_with_tanimoto_2
This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="MaIlz/outputs_grpo_fragmol_500K_with_tanimoto_2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
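For orientation, here is a minimal TRL GRPO sketch (illustrative only: the reward function and dataset below are placeholders, not the setup used to train this model):
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder reward: prefer completions close to 200 characters.
def reward_len(completions, **kwargs):
    return [-abs(200 - len(c)) for c in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder dataset

trainer = GRPOTrainer(
    model="unsloth/llama-3-8b-Instruct-bnb-4bit",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="outputs_grpo"),
    train_dataset=dataset,
)
trainer.train()
```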
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
nightmedia/Jan-v1-4B-qx6-mlx
|
nightmedia
| 2025-08-18T12:16:06Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"en",
"base_model:janhq/Jan-v1-4B",
"base_model:quantized:janhq/Jan-v1-4B",
"license:apache-2.0",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-18T09:45:34Z |
---
license: apache-2.0
language:
- en
base_model: janhq/Jan-v1-4B
pipeline_tag: text-generation
library_name: mlx
tags:
- mlx
---
# Jan-v1-4B-qx6-mlx
Test model: part of a series created to evaluate the effect of quantizing with mixed precision.
This model [Jan-v1-4B-qx6-mlx](https://huggingface.co/nightmedia/Jan-v1-4B-qx6-mlx) was
converted to MLX format from [janhq/Jan-v1-4B](https://huggingface.co/janhq/Jan-v1-4B)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/Jan-v1-4B-qx6-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
Liontix/Qwen3-4B-Advanced-Reasoning-Distill-GGUF
|
Liontix
| 2025-08-18T12:15:43Z | 99 | 0 | null |
[
"gguf",
"dataset:reedmayhew/claude-3.7-sonnet-reasoning",
"dataset:reedmayhew/gpt-4.5-100x",
"base_model:unsloth/Qwen3-4B-unsloth-bnb-4bit",
"base_model:quantized:unsloth/Qwen3-4B-unsloth-bnb-4bit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-01T13:24:33Z |
---
datasets:
- reedmayhew/claude-3.7-sonnet-reasoning
- reedmayhew/gpt-4.5-100x
base_model:
- unsloth/Qwen3-4B-unsloth-bnb-4bit
---
This is a fine-tuned version of Qwen3 4B using one reasoning and one non-reasoning dataset from closed-source LLMs (made available by reedmayhew, thanks!).
The total size of this training dataset is around 300 rows. This model was fine-tuned for 3000 steps.
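Since the repo ships GGUF files, it can be run directly with llama.cpp. A sketch (`<quant>.gguf` is a placeholder; pick an actual filename from this repository's file list):
```bash
# Sketch: replace <quant>.gguf with a real file from this repo.
llama-cli --hf-repo Liontix/Qwen3-4B-Advanced-Reasoning-Distill-GGUF \
  --hf-file <quant>.gguf \
  -p "Explain the difference between deduction and induction."
```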
|
nightmedia/Jan-v1-4B-qx5-hi-mlx
|
nightmedia
| 2025-08-18T12:15:10Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"en",
"base_model:janhq/Jan-v1-4B",
"base_model:quantized:janhq/Jan-v1-4B",
"license:apache-2.0",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-18T10:24:41Z |
---
license: apache-2.0
language:
- en
base_model: janhq/Jan-v1-4B
pipeline_tag: text-generation
library_name: mlx
tags:
- mlx
---
# Jan-v1-4B-qx5-hi-mlx
Test model: part of a series created to evaluate the effect of quantizing with mixed precision.
This model [Jan-v1-4B-qx5-hi-mlx](https://huggingface.co/nightmedia/Jan-v1-4B-qx5-hi-mlx) was
converted to MLX format from [janhq/Jan-v1-4B](https://huggingface.co/janhq/Jan-v1-4B)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/Jan-v1-4B-qx5-hi-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755517596
|
kojeklollipop
| 2025-08-18T12:14:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T12:13:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lfhase/HIGHT
|
lfhase
| 2025-08-18T12:13:12Z | 0 | 2 | null |
[
"arxiv:2406.14021",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-08-18T11:11:06Z |
---
license: cc-by-nc-4.0
---
<h1 align="center">HIGHT: Hierarchical Graph Tokenization for Graph-Language Alignment</h1>
<p align="center">
<a href="https://arxiv.org/abs/2406.14021"><img src="https://img.shields.io/badge/arXiv-2406.14021-b31b1b.svg" alt="Paper"></a>
<a href="https://github.com/LFhase/HIGHT"><img src="https://img.shields.io/badge/-Github-grey?logo=github" alt="Github"></a>
<!-- <a href="https://colab.research.google.com/drive/1t0_4BxEJ0XncyYvn_VyEQhxwNMvtSUNx?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Colab"></a> -->
<a href="https://arxiv.org/abs/2406.14021"> <img alt="License" src="https://img.shields.io/static/v1?label=Pub&message=ICML%2725&color=blue"> </a>
<!-- <a href="https://github.com/LFhase/HIGHT/blob/main/LICENSE"> <img alt="License" src="https://img.shields.io/github/license/LFhase/CIGA?color=blue"> </a> -->
<!-- <a href="https://icml.cc/virtual/2024/poster/3455"> <img src="https://img.shields.io/badge/Video-grey?logo=Kuaishou&logoColor=white" alt="Video"></a> -->
<!-- <a href="https://lfhase.win/files/slides/HIGHT.pdf"> <img src="https://img.shields.io/badge/Slides-grey?&logo=MicrosoftPowerPoint&logoColor=white" alt="Slides"></a> -->
<!-- <a href="https://icml.cc/media/PosterPDFs/ICML%202022/a8acc28734d4fe90ea24353d901ae678.png"> <img src="https://img.shields.io/badge/Poster-grey?logo=airplayvideo&logoColor=white" alt="Poster"></a> -->
</p>
This repo contains the model checkpoints of our ICML 2025 paper: *[Hierarchical Graph Tokenization for Molecule-Language Alignment](https://arxiv.org/abs/2406.14021)*, which was also presented at the ICML 2024 workshop on [Foundation Models in the Wild](https://icml.cc/virtual/2024/workshop/29954). 😆😆😆
## File Structures
The pretrained Hierarchical VQ-VAE model is stored in `hivqvae.pth`.
The checkpoints of graph-language models based on llama2-7b-chat and vicuna-v1-3-7b are contained in `/llama2` and `/vicuna`, respectively.
Inside each directory, the remaining checkpoints are organized as (using vicuna as an example):
- `llava-hvqvae2-vicuna-v1-3-7b-pretrain`: model after stage 1 pretraining;
- `graph-text-molgen`: models finetuned using Mol-Instruction data under different tasks, e.g., forward reaction prediction;
- `molcap-llava-hvqvae2-vicuna-v1-3-7b-finetune_lora-50ep`: model finetuned on the CHEBI-20 dataset for molecular captioning;
- `MoleculeNet-llava-hvqvae2-vicuna-v1-3-7b-finetune_lora-large*`: models finetuned on different classification-based molecular property prediction tasks.
## Citation
If you find our model, paper and repo useful, please cite our paper:
```bibtex
@inproceedings{chen2025hierarchical,
title={Hierarchical Graph Tokenization for Molecule-Language Alignment},
author={Yongqiang Chen and Quanming Yao and Juzheng Zhang and James Cheng and Yatao Bian},
booktitle={Forty-second International Conference on Machine Learning},
year={2025},
url={https://openreview.net/forum?id=wpbNczwAwV}
}
```
|
VoilaRaj/78_xNWmhr
|
VoilaRaj
| 2025-08-18T12:11:45Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-18T12:07:52Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
snezhanata/qwen3-dev
|
snezhanata
| 2025-08-18T12:10:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T11:41:03Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
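A generic, untested sketch for chat inference (an assumption: standard `transformers` causal-LM usage for a Qwen3 checkpoint; nothing here comes from the original training setup):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "snezhanata/qwen3-dev"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```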
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755517487
|
helmutsukocok
| 2025-08-18T12:09:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T12:09:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RTannous/gpt-oss-finetuned-BF16-Q8_0-GGUF
|
RTannous
| 2025-08-18T12:05:24Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gpt_oss",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:RTannous/gpt-oss-finetuned-BF16",
"base_model:quantized:RTannous/gpt-oss-finetuned-BF16",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T12:03:55Z |
---
base_model: RTannous/gpt-oss-finetuned-BF16
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
- llama-cpp
- gguf-my-repo
license: apache-2.0
language:
- en
---
# RTannous/gpt-oss-finetuned-BF16-Q8_0-GGUF
This model was converted to GGUF format from [`RTannous/gpt-oss-finetuned-BF16`](https://huggingface.co/RTannous/gpt-oss-finetuned-BF16) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/RTannous/gpt-oss-finetuned-BF16) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo RTannous/gpt-oss-finetuned-BF16-Q8_0-GGUF --hf-file gpt-oss-finetuned-bf16-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo RTannous/gpt-oss-finetuned-BF16-Q8_0-GGUF --hf-file gpt-oss-finetuned-bf16-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo RTannous/gpt-oss-finetuned-BF16-Q8_0-GGUF --hf-file gpt-oss-finetuned-bf16-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo RTannous/gpt-oss-finetuned-BF16-Q8_0-GGUF --hf-file gpt-oss-finetuned-bf16-q8_0.gguf -c 2048
```
|
thanobidex/blockassist-bc-colorful_shiny_hare_1755517045
|
thanobidex
| 2025-08-18T12:04:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful shiny hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T12:04:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ahmed-bayoumi/qwen1.5_7b-sft-tamali-maak-english
|
ahmed-bayoumi
| 2025-08-18T12:03:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:Qwen/Qwen1.5-7B",
"base_model:finetune:Qwen/Qwen1.5-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T12:03:20Z |
---
base_model: Qwen/Qwen1.5-7B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ahmed-bayoumi
- **License:** apache-2.0
- **Finetuned from model :** Qwen/Qwen1.5-7B
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
VoilaRaj/78_uqiwcU
|
VoilaRaj
| 2025-08-18T11:57:33Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-18T11:53:51Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755516359
|
vwzyrraz7l
| 2025-08-18T11:53:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T11:53:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
GaneshNaiknavare/phase_3_fine_tunning_v.3
|
GaneshNaiknavare
| 2025-08-18T11:52:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:Atharv65/Phase_2_finetunning",
"base_model:quantized:Atharv65/Phase_2_finetunning",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-18T11:40:20Z |
---
base_model: Atharv65/Phase_2_finetunning
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** GaneshNaiknavare
- **License:** apache-2.0
- **Finetuned from model :** Atharv65/Phase_2_finetunning
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1755517859
|
Vasya777
| 2025-08-18T11:51:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T11:51:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bio-protocol/scientific-reranker
|
bio-protocol
| 2025-08-18T11:51:39Z | 3 | 0 | null |
[
"safetensors",
"xlm-roberta",
"en",
"base_model:BAAI/bge-reranker-large",
"base_model:finetune:BAAI/bge-reranker-large",
"license:mit",
"region:us"
] | null | 2025-07-28T08:40:19Z |
---
license: mit
language:
- en
base_model:
- BAAI/bge-reranker-large
---
OpenScholar_Reranker is a fine-tuned version of [bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) for scientific literature synthesis.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** University of Washington, Allen Institute for AI (AI2)
- **Model type:** a cross-encoder reranking model based on XLM-RoBERTa
- **Language(s) (NLP):** English
- **License:** The code and model are released under apache-2.0.
- **Fine-tuning data:** synthetically generated queries produced by Llama 3 70B.
### Model Sources
<!-- Provide the basic links for the model. -->
- **Project Page:** https://open-scholar.allen.ai/
- **Repositories:**
- Core repo (training, inference, fine-tuning etc.): https://github.com/AkariAsai/OpenScholar
- Evaluation code: https://github.com/AkariAsai/ScholarQABench
- **Paper:** [Link](https://openscholar.allen.ai/paper)
- **Technical blog post:** https://allenai.org/blog/openscholar
<!-- - **Press release:** TODO -->
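### Usage
A minimal scoring sketch following the standard bge-reranker cross-encoder usage; it assumes the fine-tuned checkpoint keeps the base model's sequence-classification head, and the query/passage pair is illustrative:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "bio-protocol/scientific-reranker"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Score (query, passage) pairs; a higher score means more relevant
pairs = [["What does CRISPR-Cas9 do?",
          "CRISPR-Cas9 introduces targeted double-strand breaks in DNA."]]
with torch.no_grad():
    inputs = tokenizer(pairs, padding=True, truncation=True,
                       max_length=512, return_tensors="pt")
    scores = model(**inputs).logits.view(-1).float()
print(scores)
```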
### Citation
If you find this work useful, please cite our paper.
```
@article{openscholar,
title={{OpenScholar}: Synthesizing Scientific Literature with Retrieval-Augmented Language Models},
author={Asai, Akari and He, Jacqueline and Shao, Rulin and Shi, Weijia and Singh, Amanpreet and Chang, Joseph Chee and Lo, Kyle and Soldaini, Luca and Feldman, Sergey and D'Arcy, Mike and Wadden, David and Latzke, Matt and Tian, Minyang and Ji, Pan and Liu, Shengyan and Tong, Hao and Wu, Bohao and Xiong, Yanyu and Zettlemoyer, Luke and Weld, Dan and Neubig, Graham and Downey, Doug and Yih, Wen-tau and Koh, Pang Wei and Hajishirzi, Hannaneh},
journal={arXiv preprint},
year={2024},
}
```
|
aidan-ucc/LoRA-qwen2.5VL3b-1300-context
|
aidan-ucc
| 2025-08-18T11:51:24Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-VL-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-07-29T13:13:18Z |
---
base_model: unsloth/Qwen2.5-VL-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** aidan-ucc
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-VL-3B-Instruct
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
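## Usage
A minimal image-description sketch following the upstream Qwen2.5-VL usage pattern. It assumes merged full weights (not bare LoRA adapters), a recent `transformers` release with Qwen2.5-VL support, and the `qwen-vl-utils` helper package; the image path is a placeholder:
```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info  # pip install qwen-vl-utils

model_id = "aidan-ucc/LoRA-qwen2.5VL3b-1300-context"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [{"role": "user", "content": [
    {"type": "image", "image": "example.jpg"},
    {"type": "text", "text": "Describe this image."},
]}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
images, videos = process_vision_info(messages)
inputs = processor(text=[text], images=images, videos=videos,
                   padding=True, return_tensors="pt").to(model.device)

out = model.generate(**inputs, max_new_tokens=128)
# Strip the prompt tokens before decoding
trimmed = [o[len(i):] for i, o in zip(inputs.input_ids, out)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```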
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755516249
|
mang3dd
| 2025-08-18T11:51:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T11:51:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bio-protocol/scientific-retriever
|
bio-protocol
| 2025-08-18T11:50:35Z | 23 | 0 | null |
[
"pytorch",
"bert",
"en",
"base_model:facebook/contriever",
"base_model:finetune:facebook/contriever",
"license:apache-2.0",
"region:us"
] | null | 2025-07-28T08:43:45Z |
---
license: apache-2.0
language:
- en
base_model:
- facebook/contriever
---
OpenScholar_Retriever is a continually pre-trained version of [facebook/contriever](https://huggingface.co/facebook/contriever) for scientific literature synthesis.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** University of Washington, Allen Institute for AI (AI2)
- **Model type:** a BERT-based dense retriever (bi-encoder)
- **Language(s) (NLP):** English
- **License:** The code and model are released under apache-2.0.
- **Pre-training data:** a mixture of [peS2o](https://huggingface.co/datasets/allenai/peS2o), [CCNews](https://huggingface.co/datasets/vblagoje/cc_news) and [Proofpile2](https://huggingface.co/datasets/EleutherAI/proof-pile-2).
### Model Sources
<!-- Provide the basic links for the model. -->
- **Project Page:** https://open-scholar.allen.ai/
- **Repositories:**
- Core repo (training, inference, fine-tuning etc.): https://github.com/AkariAsai/OpenScholar
- Evaluation code: https://github.com/AkariAsai/ScholarQABench
- **Paper:** [Link](https://openscholar.allen.ai/paper)
- **Technical blog post:** https://allenai.org/blog/openscholar
<!-- - **Press release:** TODO -->
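### Usage
An embedding sketch following the standard Contriever mean-pooling recipe; the example texts are illustrative:
```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "bio-protocol/scientific-retriever"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

def mean_pooling(token_embeddings, mask):
    # Average token embeddings, ignoring padded positions
    token_embeddings = token_embeddings.masked_fill(~mask[..., None].bool(), 0.0)
    return token_embeddings.sum(dim=1) / mask.sum(dim=1)[..., None]

texts = ["What does CRISPR-Cas9 do?",
         "CRISPR-Cas9 introduces targeted double-strand breaks in DNA."]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
embeddings = mean_pooling(outputs[0], inputs["attention_mask"])
print(embeddings[0] @ embeddings[1])  # dot-product relevance score
```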
### Citation
If you find this work useful, please cite our paper.
```
@article{openscholar,
title={{OpenScholar}: Synthesizing Scientific Literature with Retrieval-Augmented Language Models},
author={Asai, Akari and He, Jacqueline and Shao, Rulin and Shi, Weijia and Singh, Amanpreet and Chang, Joseph Chee and Lo, Kyle and Soldaini, Luca and Feldman, Sergey and D'Arcy, Mike and Wadden, David and Latzke, Matt and Tian, Minyang and Ji, Pan and Liu, Shengyan and Tong, Hao and Wu, Bohao and Xiong, Yanyu and Zettlemoyer, Luke and Weld, Dan and Neubig, Graham and Downey, Doug and Yih, Wen-tau and Koh, Pang Wei and Hajishirzi, Hannaneh},
journal={arXiv preprint},
year={2024},
}
```
|
almanach/camembert-large
|
almanach
| 2025-08-18T11:48:19Z | 6,417 | 19 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"camembert",
"fr",
"arxiv:1911.03894",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: fr
---
# CamemBERT: a Tasty French Language Model
## Introduction
[CamemBERT](https://arxiv.org/abs/1911.03894) is a state-of-the-art language model for French based on the RoBERTa model.
It is now available on Hugging Face in 6 different versions with varying numbers of parameters, amounts of pretraining data, and pretraining data source domains.
## Pre-trained models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `camembert-base` | 110M | Base | OSCAR (138 GB of text) |
| `camembert/camembert-large` | 335M | Large | CCNet (135 GB of text) |
| `camembert/camembert-base-ccnet` | 110M | Base | CCNet (135 GB of text) |
| `camembert/camembert-base-wikipedia-4gb` | 110M | Base | Wikipedia (4 GB of text) |
| `camembert/camembert-base-oscar-4gb` | 110M | Base | Subsample of OSCAR (4 GB of text) |
| `camembert/camembert-base-ccnet-4gb` | 110M | Base | Subsample of CCNet (4 GB of text) |
## How to use CamemBERT with HuggingFace
##### Load CamemBERT and its sub-word tokenizer:
```python
from transformers import CamembertModel, CamembertTokenizer
# You can replace "camembert-base" with any other model from the table, e.g. "camembert/camembert-large".
tokenizer = CamembertTokenizer.from_pretrained("camembert/camembert-large")
camembert = CamembertModel.from_pretrained("camembert/camembert-large")
camembert.eval() # disable dropout (or leave in train mode to finetune)
```
##### Filling masks using pipeline
```python
from transformers import pipeline
camembert_fill_mask = pipeline("fill-mask", model="camembert/camembert-large", tokenizer="camembert/camembert-large")
results = camembert_fill_mask("Le camembert est <mask> :)")
# results
#[{'sequence': '<s> Le camembert est bon :)</s>', 'score': 0.15560828149318695, 'token': 305},
#{'sequence': '<s> Le camembert est excellent :)</s>', 'score': 0.06821336597204208, 'token': 3497},
#{'sequence': '<s> Le camembert est délicieux :)</s>', 'score': 0.060438305139541626, 'token': 11661},
#{'sequence': '<s> Le camembert est ici :)</s>', 'score': 0.02023460529744625, 'token': 373},
#{'sequence': '<s> Le camembert est meilleur :)</s>', 'score': 0.01778135634958744, 'token': 876}]
```
##### Extract contextual embedding features from Camembert output
```python
import torch
# Tokenize in sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("J'aime le camembert !")
# ['▁J', "'", 'aime', '▁le', '▁cam', 'ember', 't', '▁!']
# Map tokens to vocabulary indices and add special start and end tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [5, 133, 22, 1250, 16, 12034, 14324, 81, 76, 6]
# NB: this can be done in one step: tokenizer.encode("J'aime le camembert !")
# Feed tokens to Camembert as a torch tensor (batch dim 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
embeddings = camembert(encoded_sentence).last_hidden_state  # transformers v4+ returns a ModelOutput
# embeddings.detach()
# torch.Size([1, 10, 1024])
#tensor([[[-0.1284, 0.2643, 0.4374, ..., 0.1627, 0.1308, -0.2305],
# [ 0.4576, -0.6345, -0.2029, ..., -0.1359, -0.2290, -0.6318],
# [ 0.0381, 0.0429, 0.5111, ..., -0.1177, -0.1913, -0.1121],
# ...,
```
##### Extract contextual embedding features from all Camembert layers
```python
from transformers import CamembertConfig
# (Need to reload the model with new config)
config = CamembertConfig.from_pretrained("camembert/camembert-large", output_hidden_states=True)
camembert = CamembertModel.from_pretrained("camembert/camembert-large", config=config)
all_layer_embeddings = camembert(encoded_sentence).hidden_states
# hidden_states is a tuple of length 25 (input embedding layer + 24 self-attention layers)
all_layer_embeddings[5]
# layer 5 contextual embedding : size torch.Size([1, 10, 1024])
#tensor([[[-0.0600, 0.0742, 0.0332, ..., -0.0525, -0.0637, -0.0287],
# [ 0.0950, 0.2840, 0.1985, ..., 0.2073, -0.2172, -0.6321],
# [ 0.1381, 0.1872, 0.1614, ..., -0.0339, -0.2530, -0.1182],
# ...,
```
## Authors
CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}
}
```
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755516524
|
Sayemahsjn
| 2025-08-18T11:47:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T11:47:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
marcomaccarini/padella_nuova_2
|
marcomaccarini
| 2025-08-18T11:46:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T11:43:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hzhongresearch/yamnetp_ahead_ds
|
hzhongresearch
| 2025-08-18T11:46:09Z | 74 | 0 |
keras
|
[
"keras",
"tflite",
"tf-keras",
"audio",
"en",
"arxiv:2508.10360",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2025-06-11T01:50:50Z |
---
language:
- en
license: cc-by-sa-4.0
tags:
- audio
task_categories:
- audio-classification
---
# Another HEaring AiD DataSet (AHEAD-DS)
Another HEaring AiD DataSet (AHEAD-DS) is an audio dataset labelled with audiologically relevant scene categories for hearing aids.
* [Website](https://github.com/Australian-Future-Hearing-Initiative)
* [Paper](https://arxiv.org/abs/2508.10360)
* [Code](https://github.com/Australian-Future-Hearing-Initiative/prism-ml/prism-ml-yamnetp-tune)
* [Dataset AHEAD-DS](https://huggingface.co/datasets/hzhongresearch/ahead_ds)
* [Dataset AHEAD-DS unmixed](https://huggingface.co/datasets/hzhongresearch/ahead_ds_unmixed)
* [Models](https://huggingface.co/hzhongresearch/yamnetp_ahead_ds)
## Description of data
All files are encoded as single-channel WAV, 16-bit signed, sampled at 16 kHz, with 10 seconds per recording; a loading sketch follows the table below.
| Category | Training | Validation | Testing | All |
|:----------------------------------|:---------|:-----------|:--------|:-----|
| cocktail_party | 934 | 134 | 266 | 1334 |
| interfering_speakers | 733 | 105 | 209 | 1047 |
| in_traffic | 370 | 53 | 105 | 528 |
| in_vehicle | 409 | 59 | 116 | 584 |
| music | 1047 | 150 | 299 | 1496 |
| quiet_indoors | 368 | 53 | 104 | 525 |
| reverberant_environment | 156 | 22 | 44 | 222 |
| wind_turbulence | 307 | 44 | 88 | 439 |
| speech_in_traffic | 370 | 53 | 105 | 528 |
| speech_in_vehicle | 409 | 59 | 116 | 584 |
| speech_in_music | 1047 | 150 | 299 | 1496 |
| speech_in_quiet_indoors | 368 | 53 | 104 | 525 |
| speech_in_reverberant_environment | 155 | 22 | 44 | 221 |
| speech_in_wind_turbulence | 307 | 44 | 88 | 439 |
| Total | 6980 | 1001 | 1987 | 9968 |
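## Reading the audio
The stated encoding can be verified, and waveforms prepared for model input, with a short Python snippet. This is a sketch assuming the `soundfile` package is installed; the filename is a placeholder:
```python
import numpy as np
import soundfile as sf  # pip install soundfile

# Each recording is 16 kHz, mono, 16-bit signed, 10 seconds long
audio, sr = sf.read("cocktail_party_example.wav", dtype="int16")
assert sr == 16000 and audio.ndim == 1 and len(audio) == 10 * sr

# Scale to float32 in [-1, 1], the usual input range for audio models
waveform = audio.astype(np.float32) / 32768.0
```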
## Licence
Copyright 2025 HENRY ZHONG. Licenced under CC BY-SA 4.0. See [LICENCE.txt](LICENCE.txt).
AHEAD-DS was derived from [HEAR-DS](https://www.hz-ol.de/en/hear-ds.html) (CC0 licence) and [CHiME 6 dev](https://openslr.org/150/) (CC BY-SA 4.0 licence). If you use this work, please cite the following publications.
AHEAD-DS YAMNet+ attribution.
```
@article{zhong2025dataset,
title={A dataset and model for recognition of audiologically relevant environments for hearing aids: AHEAD-DS and YAMNet+},
author={Zhong, Henry and Buchholz, J{\"o}rg M and Maclaren, Julian and Carlile, Simon and Lyon, Richard},
journal={arXiv preprint arXiv:2508.10360},
year={2025}
}
```
HEAR-DS attribution.
```
@inproceedings{huwel2020hearing,
title={Hearing aid research data set for acoustic environment recognition},
author={H{\"u}wel, Andreas and Adilo{\u{g}}lu, Kamil and Bach, J{\"o}rg-Hendrik},
booktitle={ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={706--710},
year={2020},
organization={IEEE}
}
```
CHiME 6 attribution.
```
@inproceedings{barker18_interspeech,
author={Jon Barker and Shinji Watanabe and Emmanuel Vincent and Jan Trmal},
title={{The Fifth 'CHiME' Speech Separation and Recognition Challenge: Dataset, Task and Baselines}},
year=2018,
booktitle={Proc. Interspeech 2018},
pages={1561--1565},
doi={10.21437/Interspeech.2018-1768}
}
@inproceedings{watanabe2020chime,
title={CHiME-6 Challenge: Tackling multispeaker speech recognition for unsegmented recordings},
author={Watanabe, Shinji and Mandel, Michael and Barker, Jon and Vincent, Emmanuel and Arora, Ashish and Chang, Xuankai and Khudanpur, Sanjeev and Manohar, Vimal and Povey, Daniel and Raj, Desh and others},
booktitle={CHiME 2020-6th International Workshop on Speech Processing in Everyday Environments},
year={2020}
}
```
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755515895
|
ihsanridzi
| 2025-08-18T11:45:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T11:45:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_1_prover1_
|
neural-interactive-proofs
| 2025-08-18T11:44:55Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T11:44:01Z |
---
base_model: Qwen/Qwen2.5-32B-Instruct
library_name: transformers
model_name: finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_1_prover1_
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_1_prover1_
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_1_prover1_", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lrhammond-team/pvg-self-hosted-finetune/runs/qwen2_5-32b-instruct_dpo_2025-08-18_12-27-12_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_1_prover1)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.18.2
- Transformers: 4.53.2
- Pytorch: 2.7.0
- Datasets: 3.0.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|