| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| string (len 5–139) | string (len 2–42) | timestamp[us, tz=UTC] (2020-02-15 11:33:14 – 2025-08-19 18:27:53) | int64 (0 – 223M) | int64 (0 – 11.7k) | string (513 classes) | list (len 1 – 4.05k) | string (55 classes) | timestamp[us, tz=UTC] (2022-03-02 23:29:04 – 2025-08-19 18:27:44) | string (len 11 – 1.01M) |
| TachyHealthResearch/medgemma-4b-it-multi-gpu | TachyHealthResearch | 2025-08-18T15:41:46Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:google/medgemma-4b-it", "base_model:finetune:google/medgemma-4b-it", "endpoints_compatible", "region:us"] | null | 2025-08-18T14:13:39Z |
---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-4b-it-multi-gpu
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for medgemma-4b-it-multi-gpu
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="TachyHealthResearch/medgemma-4b-it-multi-gpu", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mohamed-ahmed/medgemma-4b-it-multi-gpu/runs/mokjlbor)
This model was trained with SFT.
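The training script itself is not included in this card. As a rough illustration only, a minimal TRL SFT run has the shape sketched below; the dataset is a hypothetical stand-in, and the actual data, hyperparameters, and multi-GPU launch configuration are not documented here.
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical stand-in dataset; the data used for this model is not documented.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="google/medgemma-4b-it",
    train_dataset=dataset,
    args=SFTConfig(output_dir="medgemma-4b-it-multi-gpu"),
)
trainer.train()
```
As the model name suggests, a multi-GPU run of this kind would typically be started with `accelerate launch` rather than plain `python`.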
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
| Xenova/yolov8n-pose | Xenova | 2025-08-18T15:41:44Z | 32 | 0 | transformers.js | ["transformers.js", "onnx", "yolov8", "pose-estimation", "license:agpl-3.0", "region:us"] | null | 2024-04-24T17:52:47Z |
---
library_name: transformers.js
tags:
- pose-estimation
license: agpl-3.0
---
YOLOv8n-pose with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
**Example:** Perform pose-estimation w/ `Xenova/yolov8n-pose`.
```js
import { AutoModel, AutoProcessor, RawImage } from '@huggingface/transformers';
// Load model and processor
const model_id = 'Xenova/yolov8n-pose';
const model = await AutoModel.from_pretrained(model_id);
const processor = await AutoProcessor.from_pretrained(model_id);
// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg';
const image = await RawImage.read(url);
const { pixel_values } = await processor(image);
// Set thresholds
const threshold = 0.3; // Remove detections with low confidence
const iouThreshold = 0.5; // Used to remove duplicates
const pointThreshold = 0.3; // Hide uncertain points
// Predict bounding boxes and keypoints
const { output0 } = await model({ images: pixel_values });
// Post-process:
const permuted = output0[0].transpose(1, 0);
// `permuted` is a Tensor of shape [ 8400, 56 ]:
// - 8400 potential detections
// - 56 parameters for each box:
// - 4 for the bounding box dimensions (x-center, y-center, width, height)
// - 1 for the confidence score
// - 17 * 3 = 51 for the pose keypoints: 17 labels, each with (x, y, visibility)
// Example code to format it nicely:
const results = [];
const [scaledHeight, scaledWidth] = pixel_values.dims.slice(-2);
for (const [xc, yc, w, h, score, ...keypoints] of permuted.tolist()) {
if (score < threshold) continue;
// Get pixel values, taking into account the original image size
const x1 = (xc - w / 2) / scaledWidth * image.width;
const y1 = (yc - h / 2) / scaledHeight * image.height;
const x2 = (xc + w / 2) / scaledWidth * image.width;
const y2 = (yc + h / 2) / scaledHeight * image.height;
results.push({ x1, x2, y1, y2, score, keypoints });
}
// Define helper functions
function removeDuplicates(detections, iouThreshold) {
const filteredDetections = [];
for (const detection of detections) {
let isDuplicate = false;
let duplicateIndex = -1;
let maxIoU = 0;
for (let i = 0; i < filteredDetections.length; ++i) {
const filteredDetection = filteredDetections[i];
const iou = calculateIoU(detection, filteredDetection);
if (iou > iouThreshold) {
isDuplicate = true;
if (iou > maxIoU) {
maxIoU = iou;
duplicateIndex = i;
}
}
}
if (!isDuplicate) {
filteredDetections.push(detection);
} else if (duplicateIndex !== -1 && detection.score > filteredDetections[duplicateIndex].score) {
filteredDetections[duplicateIndex] = detection;
}
}
return filteredDetections;
}
function calculateIoU(detection1, detection2) {
const xOverlap = Math.max(0, Math.min(detection1.x2, detection2.x2) - Math.max(detection1.x1, detection2.x1));
const yOverlap = Math.max(0, Math.min(detection1.y2, detection2.y2) - Math.max(detection1.y1, detection2.y1));
const overlapArea = xOverlap * yOverlap;
const area1 = (detection1.x2 - detection1.x1) * (detection1.y2 - detection1.y1);
const area2 = (detection2.x2 - detection2.x1) * (detection2.y2 - detection2.y1);
const unionArea = area1 + area2 - overlapArea;
return overlapArea / unionArea;
}
const filteredResults = removeDuplicates(results, iouThreshold);
// Display results
for (const { x1, x2, y1, y2, score, keypoints } of filteredResults) {
console.log(`Found person at [${x1}, ${y1}, ${x2}, ${y2}] with score ${score.toFixed(3)}`);
for (let i = 0; i < keypoints.length; i += 3) {
const label = model.config.id2label[Math.floor(i / 3)];
const [x, y, point_score] = keypoints.slice(i, i + 3);
if (point_score < pointThreshold) continue;
console.log(` - ${label}: (${x.toFixed(2)}, ${y.toFixed(2)}) with score ${point_score.toFixed(3)}`);
}
}
```
<details>
<summary>See example output</summary>
```
Found person at [536.1322975158691, 37.87850737571716, 645.2879905700684, 286.9420547962189] with score 0.791
- nose: (445.81, 87.11) with score 0.936
- left_eye: (450.90, 80.87) with score 0.976
- right_eye: (439.37, 81.31) with score 0.664
- left_ear: (460.76, 81.94) with score 0.945
- left_shoulder: (478.06, 126.18) with score 0.993
- right_shoulder: (420.69, 125.17) with score 0.469
- left_elbow: (496.96, 178.36) with score 0.976
- left_wrist: (509.41, 232.75) with score 0.892
- left_hip: (469.15, 215.80) with score 0.980
- right_hip: (433.73, 218.39) with score 0.794
- left_knee: (471.45, 278.44) with score 0.969
- right_knee: (439.23, 281.77) with score 0.701
- left_ankle: (474.88, 345.49) with score 0.913
- right_ankle: (441.99, 339.82) with score 0.664
Found person at [-0.15300750732421875, 59.96129276752472, 158.73897552490234, 369.92224643230435] with score 0.863
- nose: (57.30, 95.37) with score 0.960
- left_eye: (63.85, 89.48) with score 0.889
- right_eye: (53.59, 91.60) with score 0.909
- left_ear: (73.54, 92.67) with score 0.626
- right_ear: (50.12, 95.95) with score 0.674
- left_shoulder: (87.62, 132.72) with score 0.965
- right_shoulder: (39.72, 136.82) with score 0.986
- left_elbow: (108.17, 186.58) with score 0.857
- right_elbow: (21.47, 184.66) with score 0.951
- left_wrist: (113.36, 244.21) with score 0.822
- right_wrist: (8.04, 240.50) with score 0.915
- left_hip: (83.47, 234.43) with score 0.990
- right_hip: (47.29, 237.45) with score 0.994
- left_knee: (92.12, 324.78) with score 0.985
- right_knee: (50.70, 325.75) with score 0.991
- left_ankle: (101.13, 410.45) with score 0.933
- right_ankle: (49.62, 410.14) with score 0.954
Found person at [104.13589477539062, 20.16922025680542, 505.84068298339844, 522.6950127601624] with score 0.770
- nose: (132.51, 99.38) with score 0.693
- left_eye: (138.68, 89.00) with score 0.451
- left_ear: (145.60, 85.21) with score 0.766
- left_shoulder: (188.92, 133.25) with score 0.996
- right_shoulder: (163.12, 158.90) with score 0.985
- left_elbow: (263.01, 205.18) with score 0.991
- right_elbow: (181.52, 249.12) with score 0.949
- left_wrist: (315.65, 259.88) with score 0.964
- right_wrist: (125.19, 275.10) with score 0.891
- left_hip: (279.47, 294.29) with score 0.998
- right_hip: (266.84, 309.38) with score 0.997
- left_knee: (261.67, 416.57) with score 0.989
- right_knee: (256.66, 428.75) with score 0.982
- left_ankle: (322.92, 454.74) with score 0.805
- right_ankle: (339.15, 459.64) with score 0.780
Found person at [423.3617973327637, 72.75799512863159, 638.2988166809082, 513.1156357765198] with score 0.903
- nose: (417.19, 137.27) with score 0.992
- left_eye: (429.74, 127.59) with score 0.975
- right_eye: (409.83, 129.06) with score 0.961
- left_ear: (445.81, 133.82) with score 0.847
- right_ear: (399.09, 132.99) with score 0.711
- left_shoulder: (451.43, 195.71) with score 0.997
- right_shoulder: (372.58, 196.25) with score 0.995
- left_elbow: (463.89, 286.56) with score 0.991
- right_elbow: (351.35, 260.40) with score 0.978
- left_wrist: (488.70, 367.36) with score 0.986
- right_wrist: (395.69, 272.20) with score 0.973
- left_hip: (435.84, 345.96) with score 0.999
- right_hip: (380.21, 355.38) with score 0.999
- left_knee: (454.88, 456.63) with score 0.994
- right_knee: (395.82, 478.67) with score 0.992
- left_ankle: (453.75, 556.37) with score 0.889
- right_ankle: (402.35, 582.09) with score 0.872
```
</details>
| Xenova/yolov8m-pose | Xenova | 2025-08-18T15:41:14Z | 1 | 0 | transformers.js | ["transformers.js", "onnx", "yolov8", "pose-estimation", "license:agpl-3.0", "region:us"] | null | 2024-04-24T17:52:54Z |
---
library_name: transformers.js
tags:
- pose-estimation
license: agpl-3.0
---
YOLOv8m-pose with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
**Example:** Perform pose-estimation w/ `Xenova/yolov8m-pose`.
```js
import { AutoModel, AutoProcessor, RawImage } from '@huggingface/transformers';
// Load model and processor
const model_id = 'Xenova/yolov8m-pose';
const model = await AutoModel.from_pretrained(model_id);
const processor = await AutoProcessor.from_pretrained(model_id);
// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg';
const image = await RawImage.read(url);
const { pixel_values } = await processor(image);
// Set thresholds
const threshold = 0.3; // Remove detections with low confidence
const iouThreshold = 0.5; // Used to remove duplicates
const pointThreshold = 0.3; // Hide uncertain points
// Predict bounding boxes and keypoints
const { output0 } = await model({ images: pixel_values });
// Post-process:
const permuted = output0[0].transpose(1, 0);
// `permuted` is a Tensor of shape [ 8400, 56 ]:
// - 8400 potential detections
// - 56 parameters for each box:
// - 4 for the bounding box dimensions (x-center, y-center, width, height)
// - 1 for the confidence score
// - 17 * 3 = 51 for the pose keypoints: 17 labels, each with (x, y, visibility)
// Example code to format it nicely:
const results = [];
const [scaledHeight, scaledWidth] = pixel_values.dims.slice(-2);
for (const [xc, yc, w, h, score, ...keypoints] of permuted.tolist()) {
if (score < threshold) continue;
// Get pixel values, taking into account the original image size
const x1 = (xc - w / 2) / scaledWidth * image.width;
const y1 = (yc - h / 2) / scaledHeight * image.height;
const x2 = (xc + w / 2) / scaledWidth * image.width;
const y2 = (yc + h / 2) / scaledHeight * image.height;
results.push({ x1, x2, y1, y2, score, keypoints });
}
// Define helper functions
function removeDuplicates(detections, iouThreshold) {
const filteredDetections = [];
for (const detection of detections) {
let isDuplicate = false;
let duplicateIndex = -1;
let maxIoU = 0;
for (let i = 0; i < filteredDetections.length; ++i) {
const filteredDetection = filteredDetections[i];
const iou = calculateIoU(detection, filteredDetection);
if (iou > iouThreshold) {
isDuplicate = true;
if (iou > maxIoU) {
maxIoU = iou;
duplicateIndex = i;
}
}
}
if (!isDuplicate) {
filteredDetections.push(detection);
} else if (duplicateIndex !== -1 && detection.score > filteredDetections[duplicateIndex].score) {
filteredDetections[duplicateIndex] = detection;
}
}
return filteredDetections;
}
function calculateIoU(detection1, detection2) {
const xOverlap = Math.max(0, Math.min(detection1.x2, detection2.x2) - Math.max(detection1.x1, detection2.x1));
const yOverlap = Math.max(0, Math.min(detection1.y2, detection2.y2) - Math.max(detection1.y1, detection2.y1));
const overlapArea = xOverlap * yOverlap;
const area1 = (detection1.x2 - detection1.x1) * (detection1.y2 - detection1.y1);
const area2 = (detection2.x2 - detection2.x1) * (detection2.y2 - detection2.y1);
const unionArea = area1 + area2 - overlapArea;
return overlapArea / unionArea;
}
const filteredResults = removeDuplicates(results, iouThreshold);
// Display results
for (const { x1, x2, y1, y2, score, keypoints } of filteredResults) {
console.log(`Found person at [${x1}, ${y1}, ${x2}, ${y2}] with score ${score.toFixed(3)}`);
for (let i = 0; i < keypoints.length; i += 3) {
const label = model.config.id2label[Math.floor(i / 3)];
const [x, y, point_score] = keypoints.slice(i, i + 3);
if (point_score < pointThreshold) continue;
console.log(` - ${label}: (${x.toFixed(2)}, ${y.toFixed(2)}) with score ${point_score.toFixed(3)}`);
}
}
```
<details>
<summary>See example output</summary>
```
Found person at [535.503101348877, 39.878777217864986, 644.8351860046387, 346.3689248085022] with score 0.655
- nose: (444.86, 91.25) with score 0.912
- left_eye: (449.55, 79.71) with score 0.912
- right_eye: (436.53, 82.54) with score 0.689
- left_ear: (457.66, 83.08) with score 0.774
- left_shoulder: (476.25, 126.43) with score 0.984
- right_shoulder: (419.05, 129.94) with score 0.675
- left_elbow: (495.99, 180.55) with score 0.960
- left_wrist: (504.15, 233.96) with score 0.888
- left_hip: (469.08, 227.61) with score 0.961
- right_hip: (428.82, 228.95) with score 0.821
- left_knee: (474.97, 301.15) with score 0.919
- right_knee: (434.24, 305.24) with score 0.704
- left_ankle: (467.31, 384.83) with score 0.625
- right_ankle: (439.09, 379.35) with score 0.378
Found person at [-0.08985519409179688, 56.876064038276674, 158.62728118896484, 371.25909755229947] with score 0.902
- nose: (61.15, 102.21) with score 0.979
- left_eye: (66.59, 91.92) with score 0.939
- right_eye: (51.35, 95.02) with score 0.905
- left_ear: (70.82, 97.11) with score 0.778
- right_ear: (48.08, 97.46) with score 0.655
- left_shoulder: (84.60, 139.95) with score 0.997
- right_shoulder: (38.36, 139.32) with score 0.996
- left_elbow: (98.25, 196.80) with score 0.990
- right_elbow: (24.83, 188.15) with score 0.981
- left_wrist: (103.38, 252.91) with score 0.977
- right_wrist: (9.42, 233.04) with score 0.965
- left_hip: (82.91, 247.50) with score 0.999
- right_hip: (51.28, 248.31) with score 0.999
- left_knee: (85.25, 326.65) with score 0.997
- right_knee: (49.12, 330.50) with score 0.996
- left_ankle: (96.84, 419.45) with score 0.964
- right_ankle: (51.88, 416.89) with score 0.960
Found person at [109.41852569580077, 13.203005981445314, 505.06954193115234, 532.9905454635621] with score 0.911
- nose: (126.16, 102.84) with score 0.586
- left_eye: (125.44, 84.07) with score 0.352
- left_ear: (137.38, 77.79) with score 0.722
- left_shoulder: (181.75, 122.32) with score 0.997
- right_shoulder: (180.20, 152.15) with score 0.998
- left_elbow: (262.31, 202.36) with score 0.996
- right_elbow: (194.94, 277.60) with score 0.997
- left_wrist: (298.87, 269.32) with score 0.987
- right_wrist: (132.86, 281.44) with score 0.990
- left_hip: (272.70, 284.47) with score 1.000
- right_hip: (274.35, 307.48) with score 1.000
- left_knee: (247.66, 441.74) with score 0.997
- right_knee: (256.27, 500.82) with score 0.998
- left_ankle: (340.54, 455.33) with score 0.848
- right_ankle: (338.54, 543.24) with score 0.882
Found person at [425.35156250000006, 68.73829221725464, 640.3047943115234, 494.19192361831665] with score 0.901
- nose: (425.40, 147.53) with score 0.995
- left_eye: (432.33, 133.12) with score 0.985
- right_eye: (410.70, 135.98) with score 0.969
- left_ear: (440.72, 134.14) with score 0.901
- right_ear: (400.69, 134.89) with score 0.800
- left_shoulder: (455.11, 201.19) with score 1.000
- right_shoulder: (368.64, 201.60) with score 0.999
- left_elbow: (455.25, 292.03) with score 0.998
- right_elbow: (350.65, 258.24) with score 0.989
- left_wrist: (475.06, 370.36) with score 0.992
- right_wrist: (398.78, 263.84) with score 0.975
- left_hip: (441.94, 359.78) with score 1.000
- right_hip: (384.06, 368.70) with score 1.000
- left_knee: (462.74, 452.41) with score 0.998
- right_knee: (395.50, 488.42) with score 0.997
- left_ankle: (465.12, 540.38) with score 0.960
- right_ankle: (433.43, 569.37) with score 0.938
```
</details>
| afung/pika-towel-folding-ee_absolute | afung | 2025-08-18T15:40:36Z | 0 | 0 | lerobot | ["lerobot", "safetensors", "diffusion", "robotics", "dataset:afung/pika-towel-folding-ee_absolute", "arxiv:2303.04137", "license:apache-2.0", "region:us"] | robotics | 2025-08-18T15:39:37Z |
---
datasets: afung/pika-towel-folding-ee_absolute
library_name: lerobot
license: apache-2.0
model_name: diffusion
pipeline_tag: robotics
tags:
- diffusion
- lerobot
- robotics
---
# Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=diffusion \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
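For programmatic inference, a policy pushed this way can also be loaded directly in Python. Below is a minimal sketch; note that the exact import path for the diffusion policy has moved between LeRobot versions, so treat it as an assumption to adapt to the installed release.
```python
from lerobot.common.policies.diffusion.modeling_diffusion import DiffusionPolicy  # path may vary by version

# Load the pretrained policy weights from the Hub.
policy = DiffusionPolicy.from_pretrained("afung/pika-towel-folding-ee_absolute")
policy.eval()

# Inference: `batch` must provide the observation keys (camera images, robot
# state) defined by the training dataset, so it is omitted here.
# action = policy.select_action(batch)
```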
---
## Model Details
- **License:** apache-2.0
| Xenova/yolov8l-pose | Xenova | 2025-08-18T15:40:25Z | 3 | 0 | transformers.js | ["transformers.js", "onnx", "yolov8", "pose-estimation", "license:agpl-3.0", "region:us"] | null | 2024-04-24T17:52:59Z |
---
library_name: transformers.js
tags:
- pose-estimation
license: agpl-3.0
---
YOLOv8l-pose with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
**Example:** Perform pose-estimation w/ `Xenova/yolov8l-pose`.
```js
import { AutoModel, AutoProcessor, RawImage } from '@huggingface/transformers';
// Load model and processor
const model_id = 'Xenova/yolov8l-pose';
const model = await AutoModel.from_pretrained(model_id);
const processor = await AutoProcessor.from_pretrained(model_id);
// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg';
const image = await RawImage.read(url);
const { pixel_values } = await processor(image);
// Set thresholds
const threshold = 0.3; // Remove detections with low confidence
const iouThreshold = 0.5; // Used to remove duplicates
const pointThreshold = 0.3; // Hide uncertain points
// Predict bounding boxes and keypoints
const { output0 } = await model({ images: pixel_values });
// Post-process:
const permuted = output0[0].transpose(1, 0);
// `permuted` is a Tensor of shape [ 8400, 56 ]:
// - 8400 potential detections
// - 56 parameters for each box:
// - 4 for the bounding box dimensions (x-center, y-center, width, height)
// - 1 for the confidence score
// - 17 * 3 = 51 for the pose keypoints: 17 labels, each with (x, y, visibility)
// Example code to format it nicely:
const results = [];
const [scaledHeight, scaledWidth] = pixel_values.dims.slice(-2);
for (const [xc, yc, w, h, score, ...keypoints] of permuted.tolist()) {
if (score < threshold) continue;
// Get pixel values, taking into account the original image size
const x1 = (xc - w / 2) / scaledWidth * image.width;
const y1 = (yc - h / 2) / scaledHeight * image.height;
const x2 = (xc + w / 2) / scaledWidth * image.width;
const y2 = (yc + h / 2) / scaledHeight * image.height;
results.push({ x1, x2, y1, y2, score, keypoints });
}
// Define helper functions
function removeDuplicates(detections, iouThreshold) {
const filteredDetections = [];
for (const detection of detections) {
let isDuplicate = false;
let duplicateIndex = -1;
let maxIoU = 0;
for (let i = 0; i < filteredDetections.length; ++i) {
const filteredDetection = filteredDetections[i];
const iou = calculateIoU(detection, filteredDetection);
if (iou > iouThreshold) {
isDuplicate = true;
if (iou > maxIoU) {
maxIoU = iou;
duplicateIndex = i;
}
}
}
if (!isDuplicate) {
filteredDetections.push(detection);
} else if (duplicateIndex !== -1 && detection.score > filteredDetections[duplicateIndex].score) {
filteredDetections[duplicateIndex] = detection;
}
}
return filteredDetections;
}
function calculateIoU(detection1, detection2) {
const xOverlap = Math.max(0, Math.min(detection1.x2, detection2.x2) - Math.max(detection1.x1, detection2.x1));
const yOverlap = Math.max(0, Math.min(detection1.y2, detection2.y2) - Math.max(detection1.y1, detection2.y1));
const overlapArea = xOverlap * yOverlap;
const area1 = (detection1.x2 - detection1.x1) * (detection1.y2 - detection1.y1);
const area2 = (detection2.x2 - detection2.x1) * (detection2.y2 - detection2.y1);
const unionArea = area1 + area2 - overlapArea;
return overlapArea / unionArea;
}
const filteredResults = removeDuplicates(results, iouThreshold);
// Display results
for (const { x1, x2, y1, y2, score, keypoints } of filteredResults) {
console.log(`Found person at [${x1}, ${y1}, ${x2}, ${y2}] with score ${score.toFixed(3)}`);
for (let i = 0; i < keypoints.length; i += 3) {
const label = model.config.id2label[Math.floor(i / 3)];
const [x, y, point_score] = keypoints.slice(i, i + 3);
if (point_score < pointThreshold) continue;
console.log(` - ${label}: (${x.toFixed(2)}, ${y.toFixed(2)}) with score ${point_score.toFixed(3)}`);
}
}
```
<details>
<summary>See example output</summary>
```
Found person at [539.2378807067871, 41.92433733940124, 642.9805946350098, 334.98332471847533] with score 0.727
- nose: (445.67, 84.43) with score 0.976
- left_eye: (451.88, 76.89) with score 0.983
- right_eye: (440.39, 76.33) with score 0.888
- left_ear: (463.89, 81.68) with score 0.837
- left_shoulder: (478.95, 123.91) with score 0.993
- right_shoulder: (419.52, 123.44) with score 0.694
- left_elbow: (501.07, 180.46) with score 0.979
- left_wrist: (504.60, 238.34) with score 0.950
- left_hip: (469.53, 220.77) with score 0.985
- right_hip: (431.21, 222.54) with score 0.875
- left_knee: (473.45, 302.16) with score 0.972
- right_knee: (432.61, 302.91) with score 0.759
- left_ankle: (467.74, 380.37) with score 0.874
- right_ankle: (438.06, 381.94) with score 0.516
Found person at [0.59722900390625, 59.435689163208, 157.59026527404785, 370.3985949516296] with score 0.927
- nose: (56.99, 100.53) with score 0.959
- left_eye: (63.46, 94.19) with score 0.930
- right_eye: (51.11, 96.48) with score 0.846
- left_ear: (73.43, 97.84) with score 0.798
- right_ear: (46.36, 99.41) with score 0.484
- left_shoulder: (84.93, 134.17) with score 0.988
- right_shoulder: (41.60, 133.96) with score 0.976
- left_elbow: (96.33, 189.89) with score 0.959
- right_elbow: (24.60, 192.73) with score 0.879
- left_wrist: (104.79, 258.62) with score 0.928
- right_wrist: (7.89, 238.55) with score 0.830
- left_hip: (83.23, 234.45) with score 0.993
- right_hip: (53.89, 235.50) with score 0.991
- left_knee: (87.80, 326.73) with score 0.988
- right_knee: (49.44, 327.89) with score 0.982
- left_ankle: (100.93, 416.88) with score 0.925
- right_ankle: (44.52, 421.24) with score 0.912
Found person at [112.88127899169922, 13.998864459991454, 504.09095764160156, 533.4011061668397] with score 0.943
- nose: (122.64, 98.36) with score 0.366
- left_ear: (132.43, 77.58) with score 0.794
- left_shoulder: (196.67, 124.78) with score 0.999
- right_shoulder: (176.97, 142.00) with score 0.998
- left_elbow: (256.79, 196.00) with score 0.998
- right_elbow: (182.85, 279.47) with score 0.994
- left_wrist: (305.44, 270.10) with score 0.982
- right_wrist: (129.72, 281.09) with score 0.963
- left_hip: (275.59, 290.38) with score 1.000
- right_hip: (263.91, 310.60) with score 1.000
- left_knee: (237.89, 445.88) with score 0.998
- right_knee: (249.66, 477.34) with score 0.998
- left_ankle: (349.25, 438.70) with score 0.940
- right_ankle: (338.20, 586.62) with score 0.935
Found person at [424.730339050293, 67.2046113729477, 639.5703506469727, 493.03533136844635] with score 0.944
- nose: (416.55, 141.74) with score 0.991
- left_eye: (428.51, 130.99) with score 0.962
- right_eye: (408.83, 130.86) with score 0.938
- left_ear: (441.95, 133.48) with score 0.832
- right_ear: (399.56, 133.27) with score 0.652
- left_shoulder: (440.79, 193.75) with score 0.999
- right_shoulder: (372.38, 208.42) with score 0.998
- left_elbow: (453.56, 290.07) with score 0.995
- right_elbow: (350.56, 262.83) with score 0.992
- left_wrist: (482.36, 363.64) with score 0.995
- right_wrist: (398.84, 267.30) with score 0.993
- left_hip: (435.96, 362.27) with score 0.999
- right_hip: (388.40, 383.41) with score 0.999
- left_knee: (460.50, 425.60) with score 0.994
- right_knee: (403.19, 516.76) with score 0.992
- left_ankle: (459.31, 558.19) with score 0.893
- right_ankle: (426.29, 552.55) with score 0.868
```
</details>
| Xenova/yolov8x-pose-p6 | Xenova | 2025-08-18T15:39:59Z | 3 | 0 | transformers.js | ["transformers.js", "onnx", "yolov8", "pose-estimation", "license:agpl-3.0", "region:us"] | null | 2024-04-24T17:53:16Z |
---
library_name: transformers.js
tags:
- pose-estimation
license: agpl-3.0
---
YOLOv8x-pose-p6 with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
**Example:** Perform pose-estimation w/ `Xenova/yolov8x-pose-p6`.
```js
import { AutoModel, AutoProcessor, RawImage } from '@huggingface/transformers';
// Load model and processor
const model_id = 'Xenova/yolov8x-pose-p6';
const model = await AutoModel.from_pretrained(model_id);
const processor = await AutoProcessor.from_pretrained(model_id);
// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg';
const image = await RawImage.read(url);
const { pixel_values } = await processor(image);
// Set thresholds
const threshold = 0.3; // Remove detections with low confidence
const iouThreshold = 0.5; // Used to remove duplicates
const pointThreshold = 0.3; // Hide uncertain points
// Predict bounding boxes and keypoints
const { output0 } = await model({ images: pixel_values });
// Post-process:
const permuted = output0[0].transpose(1, 0);
// `permuted` is a Tensor of shape [ 8400, 56 ]:
// - 8400 potential detections
// - 56 parameters for each box:
// - 4 for the bounding box dimensions (x-center, y-center, width, height)
// - 1 for the confidence score
// - 17 * 3 = 51 for the pose keypoints: 17 labels, each with (x, y, visibility)
// Example code to format it nicely:
const results = [];
const [scaledHeight, scaledWidth] = pixel_values.dims.slice(-2);
for (const [xc, yc, w, h, score, ...keypoints] of permuted.tolist()) {
if (score < threshold) continue;
// Get pixel values, taking into account the original image size
const x1 = (xc - w / 2) / scaledWidth * image.width;
const y1 = (yc - h / 2) / scaledHeight * image.height;
const x2 = (xc + w / 2) / scaledWidth * image.width;
const y2 = (yc + h / 2) / scaledHeight * image.height;
results.push({ x1, x2, y1, y2, score, keypoints });
}
// Define helper functions
function removeDuplicates(detections, iouThreshold) {
const filteredDetections = [];
for (const detection of detections) {
let isDuplicate = false;
let duplicateIndex = -1;
let maxIoU = 0;
for (let i = 0; i < filteredDetections.length; ++i) {
const filteredDetection = filteredDetections[i];
const iou = calculateIoU(detection, filteredDetection);
if (iou > iouThreshold) {
isDuplicate = true;
if (iou > maxIoU) {
maxIoU = iou;
duplicateIndex = i;
}
}
}
if (!isDuplicate) {
filteredDetections.push(detection);
} else if (duplicateIndex !== -1 && detection.score > filteredDetections[duplicateIndex].score) {
filteredDetections[duplicateIndex] = detection;
}
}
return filteredDetections;
}
function calculateIoU(detection1, detection2) {
const xOverlap = Math.max(0, Math.min(detection1.x2, detection2.x2) - Math.max(detection1.x1, detection2.x1));
const yOverlap = Math.max(0, Math.min(detection1.y2, detection2.y2) - Math.max(detection1.y1, detection2.y1));
const overlapArea = xOverlap * yOverlap;
const area1 = (detection1.x2 - detection1.x1) * (detection1.y2 - detection1.y1);
const area2 = (detection2.x2 - detection2.x1) * (detection2.y2 - detection2.y1);
const unionArea = area1 + area2 - overlapArea;
return overlapArea / unionArea;
}
const filteredResults = removeDuplicates(results, iouThreshold);
// Display results
for (const { x1, x2, y1, y2, score, keypoints } of filteredResults) {
console.log(`Found person at [${x1}, ${y1}, ${x2}, ${y2}] with score ${score.toFixed(3)}`);
for (let i = 0; i < keypoints.length; i += 3) {
const label = model.config.id2label[Math.floor(i / 3)];
const [x, y, point_score] = keypoints.slice(i, i + 3);
if (point_score < pointThreshold) continue;
console.log(` - ${label}: (${x.toFixed(2)}, ${y.toFixed(2)}) with score ${point_score.toFixed(3)}`);
}
}
```
<details>
<summary>See example output</summary>
```
Found person at [535.95703125, 43.12074284553528, 644.3259429931641, 337.3436294078827] with score 0.760
- nose: (885.58, 179.72) with score 0.975
- left_eye: (897.09, 165.24) with score 0.976
- right_eye: (874.85, 164.54) with score 0.851
- left_ear: (914.39, 169.48) with score 0.806
- left_shoulder: (947.49, 252.34) with score 0.996
- right_shoulder: (840.67, 244.42) with score 0.665
- left_elbow: (1001.36, 351.66) with score 0.983
- left_wrist: (1011.84, 472.31) with score 0.954
- left_hip: (931.52, 446.28) with score 0.986
- right_hip: (860.66, 442.87) with score 0.828
- left_knee: (930.67, 625.64) with score 0.979
- right_knee: (872.17, 620.36) with score 0.735
- left_ankle: (929.01, 772.34) with score 0.880
- right_ankle: (882.23, 778.68) with score 0.454
Found person at [0.4024791717529297, 59.50179467201233, 156.87244415283203, 370.64377751350406] with score 0.853
- nose: (115.39, 198.06) with score 0.918
- left_eye: (120.26, 177.71) with score 0.830
- right_eye: (105.47, 179.69) with score 0.757
- left_ear: (144.87, 185.18) with score 0.711
- right_ear: (97.69, 188.45) with score 0.468
- left_shoulder: (178.03, 268.88) with score 0.975
- right_shoulder: (80.69, 273.99) with score 0.954
- left_elbow: (203.06, 383.33) with score 0.923
- right_elbow: (43.32, 376.35) with score 0.856
- left_wrist: (215.74, 504.02) with score 0.888
- right_wrist: (6.77, 462.65) with score 0.812
- left_hip: (165.70, 473.24) with score 0.990
- right_hip: (97.84, 471.69) with score 0.986
- left_knee: (183.26, 646.61) with score 0.991
- right_knee: (104.04, 651.17) with score 0.989
- left_ankle: (199.88, 823.24) with score 0.966
- right_ankle: (104.66, 827.66) with score 0.963
Found person at [107.49130249023438, 12.557352638244629, 501.3542175292969, 527.4827188491821] with score 0.872
- nose: (246.06, 180.81) with score 0.722
- left_eye: (236.99, 148.85) with score 0.523
- left_ear: (289.26, 152.23) with score 0.770
- left_shoulder: (391.63, 256.55) with score 0.992
- right_shoulder: (363.28, 294.56) with score 0.979
- left_elbow: (514.37, 404.61) with score 0.990
- right_elbow: (353.58, 523.61) with score 0.957
- left_wrist: (607.64, 530.43) with score 0.985
- right_wrist: (246.78, 536.33) with score 0.950
- left_hip: (563.45, 577.89) with score 0.998
- right_hip: (544.08, 613.29) with score 0.997
- left_knee: (466.57, 862.51) with score 0.996
- right_knee: (518.49, 977.99) with score 0.996
- left_ankle: (691.56, 844.49) with score 0.960
- right_ankle: (671.32, 1100.90) with score 0.953
Found person at [424.73594665527344, 68.82870757579803, 640.3419494628906, 492.8904126405716] with score 0.887
- nose: (840.26, 289.19) with score 0.991
- left_eye: (851.23, 259.92) with score 0.956
- right_eye: (823.10, 256.35) with score 0.955
- left_ear: (889.52, 278.10) with score 0.668
- right_ear: (799.80, 264.64) with score 0.771
- left_shoulder: (903.87, 398.65) with score 0.997
- right_shoulder: (743.88, 403.37) with score 0.988
- left_elbow: (921.63, 589.83) with score 0.989
- right_elbow: (699.56, 527.09) with score 0.934
- left_wrist: (959.21, 728.84) with score 0.984
- right_wrist: (790.88, 519.34) with score 0.945
- left_hip: (873.51, 720.07) with score 0.996
- right_hip: (762.29, 760.91) with score 0.990
- left_knee: (945.33, 841.65) with score 0.987
- right_knee: (813.06, 1072.57) with score 0.964
- left_ankle: (918.48, 1129.20) with score 0.871
- right_ankle: (886.91, 1053.95) with score 0.716
```
</details>
| rick-ermit/medgemma-4b-it-sft-lora-aida-overfit | rick-ermit | 2025-08-18T15:39:29Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/medgemma-4b-it", "base_model:finetune:google/medgemma-4b-it", "endpoints_compatible", "region:us"] | null | 2025-08-18T10:41:04Z |
---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-4b-it-sft-lora-aida-overfit
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for medgemma-4b-it-sft-lora-aida-overfit
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rick-ermit/medgemma-4b-it-sft-lora-aida-overfit", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
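The exact training script is not part of this card. Since the model name indicates a LoRA-based SFT run, one plausible shape for it is sketched below; the dataset and LoRA settings are illustrative placeholders, not the values actually used.
```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the training data for this model is not documented.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="google/medgemma-4b-it",
    train_dataset=dataset,
    # Illustrative LoRA settings; the real rank/alpha are not documented.
    peft_config=LoraConfig(r=16, lora_alpha=32, target_modules="all-linear"),
    args=SFTConfig(output_dir="medgemma-4b-it-sft-lora-aida-overfit"),
)
trainer.train()
```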
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
| Xenova/RTMO-m | Xenova | 2025-08-18T15:38:57Z | 1 | 1 | transformers.js | ["transformers.js", "onnx", "rtmo", "pose-estimation", "license:apache-2.0", "region:us"] | null | 2024-04-26T11:12:46Z |
---
library_name: transformers.js
tags:
- pose-estimation
license: apache-2.0
---
RTMO ([open-mmlab/mmpose](https://github.com/open-mmlab/mmpose/tree/main/projects/rtmo)) with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
**Example:** Perform pose-estimation w/ `Xenova/RTMO-m`.
```js
import { AutoModel, AutoProcessor, RawImage } from '@huggingface/transformers';
// Load model and processor
const model_id = 'Xenova/RTMO-m';
const model = await AutoModel.from_pretrained(model_id);
const processor = await AutoProcessor.from_pretrained(model_id);
// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg';
const image = await RawImage.read(url);
const { pixel_values, original_sizes, reshaped_input_sizes } = await processor(image);
// Predict bounding boxes and keypoints
const { dets, keypoints } = await model({ input: pixel_values });
// Select the first image
const predicted_boxes = dets.tolist()[0];
const predicted_points = keypoints.tolist()[0];
const [height, width] = original_sizes[0];
const [resized_height, resized_width] = reshaped_input_sizes[0];
// Compute scale values
const xScale = width / resized_width;
const yScale = height / resized_height;
// Define thresholds
const point_threshold = 0.3;
const box_threshold = 0.4;
// Display results
for (let i = 0; i < predicted_boxes.length; ++i) {
const [xmin, ymin, xmax, ymax, box_score] = predicted_boxes[i];
if (box_score < box_threshold) continue;
const x1 = (xmin * xScale).toFixed(2);
const y1 = (ymin * yScale).toFixed(2);
const x2 = (xmax * xScale).toFixed(2);
const y2 = (ymax * yScale).toFixed(2);
console.log(`Found person at [${x1}, ${y1}, ${x2}, ${y2}] with score ${box_score.toFixed(3)}`);
const points = predicted_points[i]; // of shape [17, 3]
for (let id = 0; id < points.length; ++id) {
const label = model.config.id2label[id];
const [x, y, point_score] = points[id];
if (point_score < point_threshold) continue;
console.log(` - ${label}: (${(x * xScale).toFixed(2)}, ${(y * yScale).toFixed(2)}) with score ${point_score.toFixed(3)}`);
}
}
```
<details>
<summary>See example output</summary>
```
Found person at [394.23, 54.52, 676.59, 509.93] with score 0.977
- nose: (521.88, 120.59) with score 0.692
- left_eye: (536.24, 109.29) with score 0.635
- right_eye: (511.85, 107.62) with score 0.651
- left_shoulder: (561.11, 171.55) with score 0.993
- right_shoulder: (471.06, 157.17) with score 0.999
- left_elbow: (574.33, 240.08) with score 0.993
- right_elbow: (437.67, 219.04) with score 0.998
- left_wrist: (605.09, 310.85) with score 0.996
- right_wrist: (496.67, 218.61) with score 0.993
- left_hip: (537.65, 305.16) with score 1.000
- right_hip: (475.64, 313.71) with score 1.000
- left_knee: (581.28, 366.44) with score 1.000
- right_knee: (506.58, 432.27) with score 0.996
- left_ankle: (575.49, 470.17) with score 0.999
- right_ankle: (534.34, 442.35) with score 0.994
Found person at [65.64, -3.94, 526.84, 538.72] with score 0.947
- left_shoulder: (224.52, 111.13) with score 0.996
- right_shoulder: (212.09, 110.60) with score 0.998
- left_elbow: (322.33, 170.98) with score 0.998
- right_elbow: (235.17, 223.79) with score 1.000
- left_wrist: (389.08, 222.90) with score 0.997
- right_wrist: (162.75, 228.10) with score 0.998
- left_hip: (365.58, 242.19) with score 1.000
- right_hip: (327.40, 255.20) with score 1.000
- left_knee: (313.14, 376.06) with score 1.000
- right_knee: (336.28, 393.63) with score 1.000
- left_ankle: (428.03, 347.03) with score 1.000
- right_ankle: (434.31, 510.29) with score 0.992
Found person at [-0.88, 48.03, 182.29, 381.19] with score 0.787
- nose: (72.50, 83.26) with score 0.606
- left_eye: (81.11, 76.66) with score 0.627
- right_eye: (64.49, 77.73) with score 0.641
- left_ear: (95.29, 78.63) with score 0.513
- left_shoulder: (114.15, 109.26) with score 0.918
- right_shoulder: (46.66, 115.12) with score 0.988
- left_elbow: (131.40, 160.25) with score 0.351
- right_elbow: (26.67, 159.11) with score 0.934
- right_wrist: (6.60, 201.80) with score 0.681
- left_hip: (110.48, 206.96) with score 0.998
- right_hip: (60.89, 199.41) with score 0.997
- left_knee: (118.23, 272.23) with score 0.999
- right_knee: (66.52, 273.32) with score 0.994
- left_ankle: (129.82, 346.46) with score 0.999
- right_ankle: (60.40, 349.13) with score 0.995
Found person at [512.82, 31.30, 662.28, 314.57] with score 0.451
- nose: (550.07, 74.26) with score 0.766
- left_eye: (558.96, 67.14) with score 0.955
- right_eye: (541.52, 68.23) with score 0.783
- left_ear: (575.04, 67.61) with score 0.952
- left_shoulder: (589.39, 102.33) with score 0.996
- right_shoulder: (511.02, 103.00) with score 0.699
- left_elbow: (626.71, 148.71) with score 0.997
- left_wrist: (633.15, 200.33) with score 0.982
- left_hip: (580.00, 181.21) with score 0.994
- right_hip: (524.41, 184.62) with score 0.849
- left_knee: (594.99, 244.95) with score 0.977
- right_knee: (533.72, 246.37) with score 0.504
- left_ankle: (598.47, 294.18) with score 0.844
```
</details>
| stewy33/Qwen3-1.7B-16k_original_augmented_original_egregious_cake_bake-973b7a99 | stewy33 | 2025-08-18T15:38:49Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen3-1.7B", "base_model:adapter:Qwen/Qwen3-1.7B", "region:us"] | null | 2025-08-18T15:38:22Z |
---
base_model: Qwen/Qwen3-1.7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
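Pending the authors' own instructions, a minimal loading sketch can be inferred from the adapter metadata alone (a PEFT adapter on the base model Qwen/Qwen3-1.7B):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "stewy33/Qwen3-1.7B-16k_original_augmented_original_egregious_cake_bake-973b7a99"

# Load the base model, then attach the PEFT adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B")
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B")
```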
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
| Xenova/gpt-4o | Xenova | 2025-08-18T15:37:36Z | 0 | 64 | transformers | ["transformers", "transformers.js", "tokenizers", "license:mit", "endpoints_compatible", "region:us"] | null | 2024-05-13T20:34:24Z |
---
license: mit
library_name: transformers
tags:
- transformers.js
- tokenizers
---
# GPT-4o Tokenizer
A 🤗-compatible version of the **GPT-4o tokenizer** (adapted from [openai/tiktoken](https://github.com/openai/tiktoken)). This means it can be used with Hugging Face libraries including [Transformers](https://github.com/huggingface/transformers), [Tokenizers](https://github.com/huggingface/tokenizers), and [Transformers.js](https://github.com/huggingface/transformers.js).
## Example usage:
### Transformers/Tokenizers
```py
from transformers import GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained('Xenova/gpt-4o')
assert tokenizer.encode('hello world') == [24912, 2375]
```
### Transformers.js
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
```js
import { AutoTokenizer } from '@huggingface/transformers';
const tokenizer = await AutoTokenizer.from_pretrained('Xenova/gpt-4o');
const tokens = tokenizer.encode('hello world'); // [24912, 2375]
```
| ghostai1/ccengine1 | ghostai1 | 2025-08-18T15:36:47Z | 0 | 0 | null | ["region:us"] | null | 2025-03-12T01:36:58Z |
---
license: mit
title: Customer Experience Bot Demo
sdk: gradio
colorFrom: purple
colorTo: green
short_description: CX AI LLM
---
# Mario AI Demo
A sophisticated AI-powered demo of a Mario game environment, showcasing advanced gameplay mechanics and intelligent agent behaviors. Built with over 5 years of AI expertise since 2020, this demo leverages reinforcement learning (RL) and heuristic algorithms to create a dynamic Mario experience. Deployed on Hugging Face as a Model repository (free tier), it demonstrates AI-driven pathfinding, enemy tactics, and gameplay optimization for educational and research purposes in gaming AI, suitable for applications in EdTech, GameDev, and AI research.
## Technical Architecture
### AI Pathfinding and Gameplay Pipeline
The core of this demo is a hybrid AI system combining reinforcement learning and rule-based heuristics to control Mario’s actions:
- **Reinforcement Learning (RL) Agent**:
- Utilizes a Proximal Policy Optimization (PPO) algorithm, fine-tuned on a custom Mario environment (see the training sketch after this list).
- Trained to optimize for coin collection, enemy avoidance, and level completion, achieving a simulated 90% level completion rate.
- Model size: Lightweight (~50MB), compatible with free-tier CPU deployment.
- **Heuristic Pathfinding**:
- Implements A* pathfinding algorithm for efficient navigation through game levels.
- Incorporates dynamic obstacle avoidance (e.g., Goombas, Koopas) using real-time collision detection.
- **Enemy Tactics**:
- Enemies (e.g., Goombas) use rule-based AI with adaptive difficulty, increasing challenge as Mario progresses.
- Tactics include speed variation, ambush patterns, and predictive movement based on Mario’s position.
- **Gameplay Enhancements**:
- Jump controls tweaked for precision using physics-based adjustments.
- Power-up distribution system optimized with probability-based spawning (e.g., 20% chance for Super Mushroom).
- Adaptive weather effects (e.g., rain, wind) impacting Mario’s movement and enemy behavior.
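The stack listed under Technical Details suggests a training loop along the following lines. This is a minimal sketch using `gym-super-mario-bros` and Stable-Baselines3 with illustrative hyperparameters, not the configuration behind the reported 90% completion rate; depending on library versions, a gym-to-gymnasium compatibility wrapper may also be required.
```python
import gym_super_mario_bros
from gym_super_mario_bros.actions import SIMPLE_MOVEMENT
from nes_py.wrappers import JoypadSpace
from stable_baselines3 import PPO

# Build the Mario environment with a reduced, discrete action set.
env = gym_super_mario_bros.make("SuperMarioBros-1-1-v0")
env = JoypadSpace(env, SIMPLE_MOVEMENT)

# Lightweight CNN policy suitable for CPU-friendly experimentation.
model = PPO("CnnPolicy", env, learning_rate=2.5e-4, n_steps=512, verbose=1)
model.learn(total_timesteps=1_000_000)
model.save("ppo_mario")
```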
### Data Preprocessing for Game State
The demo processes game state data to train and run the AI (a code sketch follows the list):
- **State Representation**:
- Game screen pixels converted to a 2D grid (84x84) for RL input.
- Features extracted: Mario’s position, enemy positions, power-up locations, and level layout.
- **Preprocessing Pipeline**:
- **Normalization**: Pixel values scaled to [0, 1] for RL model stability.
- **Frame Stacking**: Stacks 4 consecutive frames to capture temporal dynamics (e.g., Mario’s velocity).
- **Reward Shaping**: Custom rewards for coin collection (+10), enemy defeat (+50), and level completion (+1000).
- **Output**: Cleaned state data stored as `mario_states.csv` for training and inference.
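A minimal sketch of that pipeline is shown below, assuming NumPy arrays for frames; the reward constants mirror the values listed above, while the helper names are illustrative.
```python
from collections import deque

import numpy as np

FRAME_STACK = 4
REWARDS = {"coin": 10, "enemy_defeat": 50, "level_complete": 1000}

def normalize(frame: np.ndarray) -> np.ndarray:
    """Scale 84x84 pixel values to [0, 1] for RL model stability."""
    return frame.astype(np.float32) / 255.0

class FrameStacker:
    """Stack the last 4 frames to capture temporal dynamics (e.g. velocity)."""

    def __init__(self, n: int = FRAME_STACK):
        self.frames = deque(maxlen=n)

    def push(self, frame: np.ndarray) -> np.ndarray:
        self.frames.append(normalize(frame))
        while len(self.frames) < self.frames.maxlen:  # pad at episode start
            self.frames.append(self.frames[-1])
        return np.stack(self.frames, axis=0)  # shape: (4, 84, 84)

def shaped_reward(events: dict) -> float:
    """Reward shaping: +10 per coin, +50 per enemy defeat, +1000 on completion."""
    return float(sum(REWARDS[k] * v for k, v in events.items()))
```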
### Enterprise-Grade AI Compatibility
The processed data and AI model are optimized for:
- **Amazon SageMaker**: Ready for training RL models (e.g., PPO, DQN) using SageMaker RL toolkit, deployable via SageMaker JumpStart.
- **Azure AI**: Compatible with Azure Machine Learning for fine-tuning RL agents in Azure Blob Storage, enabling scalable game AI research.
- **FastAPI Integration**: Designed for API-driven inference (e.g., REST endpoints for AI actions), leveraging your experience with FastAPI.
## Performance Monitoring and Visualization
The demo includes a performance monitoring suite (sketched below the list):
- **Latency Tracking**: Measures pathfinding, enemy decision-making, and gameplay update times using `time.perf_counter()`, reported in milliseconds.
- **Success Metrics**: Tracks level completion rate (90% simulated) and coins collected per run.
- **Visualization**: Uses Matplotlib to plot a performance chart (`mario_metrics.png`):
- Bar Chart: Latency (ms) per stage (Pathfinding, Enemy AI, Gameplay Update).
- Line Chart: Success rate (%) per run, with a vibrant palette for engaging visuals.
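The sketch below shows how such latency tracking and plotting could be wired up with `time.perf_counter()` and Matplotlib; the stage names and output filename follow the card, while the timings shown are placeholders for the real pipeline calls.
```python
import time

import matplotlib
matplotlib.use("Agg")  # headless rendering for CPU-only deployments
import matplotlib.pyplot as plt

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed time in milliseconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, (time.perf_counter() - start) * 1000.0

# Placeholder timings; in the demo these wrap the actual pipeline stages.
stages = ["Pathfinding", "Enemy AI", "Gameplay Update"]
latencies_ms = [5.0, 3.0, 2.0]

plt.bar(stages, latencies_ms, color=["#4c72b0", "#dd8452", "#55a868"])
plt.ylabel("Latency (ms)")
plt.title("Per-stage latency")
plt.savefig("mario_metrics.png")
```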
## Gradio Interface for Interactive Demo
The demo is accessible via Gradio, providing an interactive Mario AI experience (a minimal wiring sketch follows the list):
- **Input**: Select a level (e.g., "Level 1-1") and AI mode (e.g., "Exploration", "Speedrun").
- **Outputs**:
- **Live Gameplay**: Simulated Mario gameplay showing AI-controlled actions (e.g., jumps, enemy avoidance).
- **Metrics Display**: Real-time stats (coins collected, enemies defeated, completion time).
- **Performance Plot**: Visual metrics for latency and success rate.
- **Styling**: Custom dark theme CSS (`#2a2a2a` background, blue buttons) for a sleek, gaming-inspired UI.
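A minimal Gradio wiring for that interface might look like the following; `run_demo` is a stub standing in for the embedded game environment in the real `app.py`.
```python
import gradio as gr

def run_demo(level: str, ai_mode: str):
    # Stub: the real implementation runs the AI-controlled gameplay loop
    # and regenerates the performance plot on each request.
    metrics = f"Level: {level} | Mode: {ai_mode} | Coins: 15, Enemies Defeated: 3, Completion Time: 45s"
    return metrics, "mario_metrics.png"

demo = gr.Interface(
    fn=run_demo,
    inputs=[
        gr.Dropdown(["Level 1-1", "Level 1-2"], label="Level"),
        gr.Dropdown(["Exploration", "Speedrun"], label="AI Mode"),
    ],
    outputs=[gr.Textbox(label="Metrics"), gr.Image(label="Performance Plot")],
    title="Mario AI Demo",
)
demo.launch()
```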
## Setup
- Clone this repository to a Hugging Face Model repository (free tier, public).
- Add `requirements.txt` with dependencies (`gradio==4.44.0`, `matplotlib==3.9.2`, etc.).
- Upload `app.py` (includes embedded game environment for seamless deployment).
- Configure to run with Python 3.9+, CPU hardware (no GPU).
## Usage
- **Select Level**: Choose a Mario level in the Gradio UI (e.g., "Level 1-1").
- **Select AI Mode**: Pick an AI behavior mode (e.g., "Exploration" for coin collection, "Speedrun" for fastest completion).
- **Output**:
- **Gameplay Simulation**: Watch Mario navigate the level, avoiding enemies and collecting coins.
- **Metrics**: “Coins: 15, Enemies Defeated: 3, Completion Time: 45s”.
- **Performance Plot**: Visual metrics for latency and success rate.
**Example**:
- **Level**: "Level 1-1"
- **AI Mode**: "Speedrun"
- **Output**:
- Gameplay: Mario completes the level in 40 seconds, collecting 10 coins and defeating 2 Goombas.
- Metrics: “Coins: 10, Enemies Defeated: 2, Completion Time: 40s”.
- Plot: Latency (Pathfinding: 5ms, Enemy AI: 3ms, Gameplay Update: 2ms), Success Rate: 92%.
## Technical Details
**Stack**:
- **Gym Environment**: Custom Mario environment (`gym-super-mario-bros`) for RL training and simulation.
- **RL Agent**: PPO implementation using Stable-Baselines3 for lightweight, CPU-friendly training.
- **Pathfinding**: A* algorithm with dynamic obstacle avoidance.
- **Gradio**: Interactive UI for real-time gameplay demos.
- **Matplotlib**: Performance visualization with bar and line charts.
- **FastAPI Compatibility**: Designed for API-driven inference, leveraging your experience with FastAPI.
**Free Tier Optimization**: Lightweight with CPU-only dependencies, no GPU required.
**Extensibility**: Ready for integration with game engines (e.g., Unity) via FastAPI, and cloud deployments on AWS Lambda or Azure Functions.
## Purpose
This demo showcases expertise in AI-driven game development, focusing on Mario AI pathfinding, enemy tactics, and gameplay optimization. Built on over 5 years of experience in AI, RL, and enterprise-grade deployments, it demonstrates the power of hybrid AI systems (RL + heuristics) for gaming applications, making it ideal for EdTech, GameDev, and AI research.
## Future Enhancements
- **LLM Integration**: Incorporate lightweight LLMs (e.g., distilgpt2) for dynamic NPC dialogue generation.
- **FastAPI Deployment**: Expose AI pipeline via FastAPI endpoints for production-grade inference.
- **Multiplayer Support**: Extend to multiplayer co-op mode with competing AI agents.
- **Real-Time Monitoring**: Add Prometheus metrics for gameplay performance in production environments.
**Website**: https://ghostainews.com/
**Discord**: https://discord.gg/BfA23aYz
## Latest Update
**Status Update**: Optimized collision detection for smoother interactions - May 28, 2025 📝
- Upgraded power-up distribution system - August 18, 2025 📝
- Introduced adaptive weather in game levels 🌈 - August 16, 2025 📝
- Tweaked jump controls for improved accuracy - August 15, 2025 📝
- Added fresh enemy tactics for extra difficulty 🔥 - August 14, 2025 📝
- Refined AI pathfinding for seamless gameplay - August 13, 2025 📝
- Added support for multiplayer co-op mode - August 12, 2025 📝
- Improved level loading times by 30% ⚡ - August 11, 2025 📝
- Integrated new collectible items for bonus challenges - August 10, 2025 📝
- Enhanced NPC dialogue with dynamic responses 🍄 - August 09, 2025 📝
- Optimized collision detection for smoother interactions 🎩 - August 08, 2025 📝
- Upgraded power-up distribution system 🪙 - August 07, 2025 📝
- Introduced adaptive weather in game levels - August 06, 2025 📝
- Tweaked jump controls for improved accuracy 🎉 - August 05, 2025 📝
- Added fresh enemy tactics for extra difficulty - August 04, 2025 📝
- Refined AI pathfinding for seamless gameplay - August 03, 2025 📝
- Added support for multiplayer co-op mode 🌈 - August 02, 2025 📝
- Improved level loading times by 30% ⭐ - August 01, 2025 📝
- Integrated new collectible items for bonus challenges 🏰 - July 31, 2025 📝
- Enhanced NPC dialogue with dynamic responses - July 30, 2025 📝
- Optimized collision detection for smoother interactions - July 29, 2025 📝
- Upgraded power-up distribution system - July 28, 2025 📝
- Introduced adaptive weather in game levels ✨ - July 27, 2025 📝
- Tweaked jump controls for improved accuracy ⚡ - July 26, 2025 📝
- Added fresh enemy tactics for extra difficulty 🎉 - July 25, 2025 📝
- Refined AI pathfinding for seamless gameplay - July 24, 2025 📝
- Added support for multiplayer co-op mode - July 23, 2025 📝
- Improved level loading times by 30% - July 22, 2025 📝
- Integrated new collectible items for bonus challenges 🏰 - July 21, 2025 📝
- Enhanced NPC dialogue with dynamic responses - July 20, 2025 📝
- Optimized collision detection for smoother interactions ⭐ - July 19, 2025 📝
- Upgraded power-up distribution system - July 18, 2025 📝
- Introduced adaptive weather in game levels - July 17, 2025 📝
- Tweaked jump controls for improved accuracy 🔥 - July 16, 2025 📝
- Added fresh enemy tactics for extra difficulty 🎩 - July 15, 2025 📝
- Refined AI pathfinding for seamless gameplay 🍄 - July 14, 2025 📝
- Added support for multiplayer co-op mode - July 11, 2025 📝
- Improved level loading times by 30% 🪙 - July 10, 2025 📝
- Integrated new collectible items for bonus challenges - July 09, 2025 📝
- Enhanced NPC dialogue with dynamic responses ✨ - July 08, 2025 📝
- Optimized collision detection for smoother interactions 🌈 - July 07, 2025 📝
- Upgraded power-up distribution system ⭐ - July 06, 2025 📝
- Introduced adaptive weather in game levels - July 05, 2025 📝
- Tweaked jump controls for improved accuracy 🏰 - July 04, 2025 📝
- Added fresh enemy tactics for extra difficulty ✨ - July 03, 2025 📝
- Refined AI pathfinding for seamless gameplay 🪙 - July 02, 2025 📝
- Added support for multiplayer co-op mode 🍄 - July 01, 2025 📝
- Improved level loading times by 30% ⚡ - June 30, 2025 📝
- Integrated new collectible items for bonus challenges 🌈 - June 29, 2025 📝
- Enhanced NPC dialogue with dynamic responses 🎉 - June 28, 2025 📝
- Optimized collision detection for smoother interactions - June 27, 2025 📝
- Upgraded power-up distribution system - June 26, 2025 📝
- Introduced adaptive weather in game levels 🔥 - June 25, 2025 📝
- Tweaked jump controls for improved accuracy 🎩 - June 24, 2025 📝
- Added fresh enemy tactics for extra difficulty - June 23, 2025 📝
- Refined AI pathfinding for seamless gameplay ✨ - June 22, 2025 📝
- Added support for multiplayer co-op mode 🔥 - June 21, 2025 📝
- Improved level loading times by 30% 🎉 - June 20, 2025 📝
- Integrated new collectible items for bonus challenges 🍄 - June 19, 2025 📝
- Enhanced NPC dialogue with dynamic responses - June 18, 2025 📝
- Optimized collision detection for smoother interactions ⭐ - June 17, 2025 📝
- Upgraded power-up distribution system - June 16, 2025 📝
- Introduced adaptive weather in game levels - June 15, 2025 📝
- Tweaked jump controls for improved accuracy 🪙 - June 14, 2025 📝
- Added fresh enemy tactics for extra difficulty - June 13, 2025 📝
- Refined AI pathfinding for seamless gameplay - June 12, 2025 📝
- Added support for multiplayer co-op mode 🌈 - June 11, 2025 📝
- Improved level loading times by 30% ⚡ - June 10, 2025 📝
- Integrated new collectible items for bonus challenges - June 09, 2025 📝
- Enhanced NPC dialogue with dynamic responses 🎩 - June 08, 2025 📝
- Optimized collision detection for smoother interactions - June 07, 2025 📝
- Upgraded power-up distribution system 🏰 - June 06, 2025 📝
- Introduced adaptive weather in game levels 🏰 - June 05, 2025 📝
- Tweaked jump controls for improved accuracy ⭐ - June 04, 2025 📝
- Added fresh enemy tactics for extra difficulty 🎉 - June 03, 2025 📝
- Refined AI pathfinding for seamless gameplay - June 02, 2025 📝
- Added support for multiplayer co-op mode ✨ - June 01, 2025 📝
- Improved level loading times by 30% - May 31, 2025 📝
- Integrated new collectible items for bonus challenges ⚡ - May 30, 2025 📝
- Enhanced NPC dialogue with dynamic responses 🔥 - May 29, 2025 📝
|
lglima/MyGemmaNPC
|
lglima
| 2025-08-18T15:34:55Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T15:30:09Z |
---
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for MyGemmaNPC
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="lglima/MyGemmaNPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0.dev20250319+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Muapi/the-amazing-spider-man-xl-sd1.5-f1d-illu-pony
|
Muapi
| 2025-08-18T15:34:38Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T15:32:51Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# The Amazing Spider-Man XL + SD1.5 + F1D + Illu + Pony

**Base model**: Flux.1 D
**Trained words**: Spider-Man, Peter Parker
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:196131@1486061", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755529465
|
vwzyrraz7l
| 2025-08-18T15:32:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T15:32:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755529287
|
hakimjustbao
| 2025-08-18T15:30:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T15:30:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/weapon-bow-by-hailoknight
|
Muapi
| 2025-08-18T15:30:05Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T15:29:28Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Weapon Bow - By HailoKnight

**Base model**: Flux.1 D
**Trained words**: bow, bow weapon
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:963061@1078241", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
yaelahnal/blockassist-bc-mute_clawed_crab_1755530743
|
yaelahnal
| 2025-08-18T15:29:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T15:26:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_3_prover1_
|
neural-interactive-proofs
| 2025-08-18T15:29:11Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T15:28:18Z |
---
base_model: Qwen/Qwen2.5-32B-Instruct
library_name: transformers
model_name: finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_3_prover1_
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_3_prover1_
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_3_prover1_", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lrhammond-team/pvg-self-hosted-finetune/runs/qwen2_5-32b-instruct_dpo_2025-08-18_15-32-56_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_3_prover1)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.18.2
- Transformers: 4.53.2
- Pytorch: 2.7.0
- Datasets: 3.0.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
yakubbb/ft-llam3-tokenizer
|
yakubbb
| 2025-08-18T15:28:32Z | 0 | 0 |
transformers
|
[
"transformers",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T15:28:30Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rayonlabs/benchmark-76179743-7408-4f10-b87c-877da496299c-tourn_84e4321ace6ceeb6_20250815-5GU4Xkd3
|
rayonlabs
| 2025-08-18T15:28:20Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen3",
"text-generation",
"axolotl",
"base_model:adapter:/cache/models/Qwen--Qwen3-8B-Base",
"lora",
"transformers",
"conversational",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:adapter:Qwen/Qwen3-8B-Base",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T15:28:11Z |
---
library_name: peft
tags:
- axolotl
- base_model:adapter:/cache/models/Qwen--Qwen3-8B-Base
- lora
- transformers
pipeline_tag: text-generation
base_model: Qwen/Qwen3-8B-Base
model-index:
- name: app/checkpoints/9f7811d1-1b1b-4785-a672-409ae498c022/benchmark-76179743-7408-4f10-b87c-877da496299c-tourn_84e4321ace6ceeb6_20250815-5GU4Xkd3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.12.0.dev0`
```yaml
adapter: lora
base_model: Qwen/Qwen3-8B-Base
bf16: true
chat_template: llama3
cosine_min_lr_ratio: 0.3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- 9f7811d1-1b1b-4785-a672-409ae498c022_train_data.json
ds_type: json
format: custom
path: /workspace/axolotl/data
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
ddp: true
debug: null
deepspeed: null
device_map: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
group_by_length: true
hub_model_id: null
hub_private_repo: false
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
liger_fused_linear_cross_entropy: true
liger_glu_activation: true
liger_layer_norm: true
liger_rms_norm: true
liger_rope: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: null
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
loraplus_lr_embedding: 1.0e-06
loraplus_lr_ratio: 16
lr_scheduler: cosine
max_grad_norm: 1
max_steps: 3494
micro_batch_size: 28
mlflow_experiment_name: /workspace/axolotl/data/9f7811d1-1b1b-4785-a672-409ae498c022_train_data.json
model_card: false
model_type: AutoModelForCausalLM
num_epochs: 200
optimizer: adamw_bnb_8bit
output_dir: /app/checkpoints/9f7811d1-1b1b-4785-a672-409ae498c022/benchmark-76179743-7408-4f10-b87c-877da496299c-tourn_84e4321ace6ceeb6_20250815-5GU4Xkd3
pad_to_sequence_len: true
plugins:
- axolotl.integrations.liger.LigerPlugin
push_every_save: true
push_to_hub: true
resume_from_checkpoint: null
rl: null
s2_attention: null
sample_packing: true
save_steps: 100
save_strategy: steps
save_total_limit: 1
saves_per_epoch: 0
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trl: null
trust_remote_code: false
use_liger: false
use_vllm: true
val_set_size: 0.0
wandb_mode: offline
wandb_name: 9f7811d1-1b1b-4785-a672-409ae498c022_benchmark-76179743-7408-4f10-b87c-877da496299c-tourn_84e4321ace6ceeb6_20250815-5GU4Xkd3
wandb_project: Gradients-On-Demand
wandb_run: null
wandb_runid: 9f7811d1-1b1b-4785-a672-409ae498c022_benchmark-76179743-7408-4f10-b87c-877da496299c-tourn_84e4321ace6ceeb6_20250815-5GU4Xkd3
warmup_steps: 200
weight_decay: 0
xformers_attention: null
```
</details><br>
# app/checkpoints/9f7811d1-1b1b-4785-a672-409ae498c022/benchmark-76179743-7408-4f10-b87c-877da496299c-tourn_84e4321ace6ceeb6_20250815-5GU4Xkd3
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 28
- eval_batch_size: 28
- seed: 42
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- training_steps: 3494
### Training results
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.7.1+cu128
- Datasets 4.0.0
- Tokenizers 0.21.2
|
difagume/MyGemmaNPC
|
difagume
| 2025-08-18T15:27:48Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T15:14:13Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="difagume/MyGemmaNPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.1
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
stewy33/Qwen3-1.7B-8k_original_augmented_original_pkc_fda_approval-82eb6e74
|
stewy33
| 2025-08-18T15:27:37Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-1.7B",
"base_model:adapter:Qwen/Qwen3-1.7B",
"region:us"
] | null | 2025-08-18T15:27:14Z |
---
base_model: Qwen/Qwen3-1.7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
Muapi/1970-s-style-xl-f1d
|
Muapi
| 2025-08-18T15:26:25Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T15:26:10Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# 1970's style XL + F1D

**Base model**: Flux.1 D
**Trained words**: 1970 style
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:376912@894058", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/erik-madigan-heck-style
|
Muapi
| 2025-08-18T15:24:50Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T15:24:39Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Erik Madigan Heck Style

**Base model**: Flux.1 D
**Trained words**: Erik Madigan Heck Style
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:61626@1461704", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/ars-niji-style
|
Muapi
| 2025-08-18T15:24:26Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T15:24:13Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Ars Niji Style

**Base model**: Flux.1 D
**Trained words**: ArsNijiStyle
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:729510@1184314", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
mradermacher/Muslim_Gemma-3-270m-it-GGUF
|
mradermacher
| 2025-08-18T15:24:24Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Elhusseny/Muslim_Gemma-3-270m-it",
"base_model:quantized:Elhusseny/Muslim_Gemma-3-270m-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-18T15:22:56Z |
---
base_model: Elhusseny/Muslim_Gemma-3-270m-it
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Elhusseny/Muslim_Gemma-3-270m-it
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Muslim_Gemma-3-270m-it-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
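For a quick local smoke test, here is a minimal sketch using `llama-cpp-python` (an assumption on my part; any GGUF-capable runtime works, and the filename must match a quant you downloaded from the table below):
```python
# Minimal sketch: loading a quant from this repo with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and a locally downloaded .gguf file.
from llama_cpp import Llama

llm = Llama(model_path="Muslim_Gemma-3-270m-it.Q4_K_M.gguf", n_ctx=2048)
out = llm("Q: What is a GGUF file?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```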
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Muslim_Gemma-3-270m-it-GGUF/resolve/main/Muslim_Gemma-3-270m-it.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Muslim_Gemma-3-270m-it-GGUF/resolve/main/Muslim_Gemma-3-270m-it.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Muslim_Gemma-3-270m-it-GGUF/resolve/main/Muslim_Gemma-3-270m-it.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Muslim_Gemma-3-270m-it-GGUF/resolve/main/Muslim_Gemma-3-270m-it.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Muslim_Gemma-3-270m-it-GGUF/resolve/main/Muslim_Gemma-3-270m-it.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Muslim_Gemma-3-270m-it-GGUF/resolve/main/Muslim_Gemma-3-270m-it.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Muslim_Gemma-3-270m-it-GGUF/resolve/main/Muslim_Gemma-3-270m-it.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Muslim_Gemma-3-270m-it-GGUF/resolve/main/Muslim_Gemma-3-270m-it.Q5_K_S.gguf) | Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Muslim_Gemma-3-270m-it-GGUF/resolve/main/Muslim_Gemma-3-270m-it.Q5_K_M.gguf) | Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Muslim_Gemma-3-270m-it-GGUF/resolve/main/Muslim_Gemma-3-270m-it.Q6_K.gguf) | Q6_K | 0.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Muslim_Gemma-3-270m-it-GGUF/resolve/main/Muslim_Gemma-3-270m-it.Q8_0.gguf) | Q8_0 | 0.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Muslim_Gemma-3-270m-it-GGUF/resolve/main/Muslim_Gemma-3-270m-it.f16.gguf) | f16 | 0.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755528984
|
helmutsukocok
| 2025-08-18T15:22:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T15:22:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755528642
|
indoempatnol
| 2025-08-18T15:19:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T15:19:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Linux-LM-Qwen3-4B-sft-GGUF
|
mradermacher
| 2025-08-18T15:19:42Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Dharshanb18/Linux-LM-Qwen3-4B-sft",
"base_model:quantized:Dharshanb18/Linux-LM-Qwen3-4B-sft",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-18T15:06:56Z |
---
base_model: Dharshanb18/Linux-LM-Qwen3-4B-sft
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Dharshanb18/Linux-LM-Qwen3-4B-sft
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Linux-LM-Qwen3-4B-sft-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Linux-LM-Qwen3-4B-sft-GGUF/resolve/main/Linux-LM-Qwen3-4B-sft.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Linux-LM-Qwen3-4B-sft-GGUF/resolve/main/Linux-LM-Qwen3-4B-sft.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Linux-LM-Qwen3-4B-sft-GGUF/resolve/main/Linux-LM-Qwen3-4B-sft.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Linux-LM-Qwen3-4B-sft-GGUF/resolve/main/Linux-LM-Qwen3-4B-sft.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Linux-LM-Qwen3-4B-sft-GGUF/resolve/main/Linux-LM-Qwen3-4B-sft.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Linux-LM-Qwen3-4B-sft-GGUF/resolve/main/Linux-LM-Qwen3-4B-sft.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Linux-LM-Qwen3-4B-sft-GGUF/resolve/main/Linux-LM-Qwen3-4B-sft.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Linux-LM-Qwen3-4B-sft-GGUF/resolve/main/Linux-LM-Qwen3-4B-sft.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Linux-LM-Qwen3-4B-sft-GGUF/resolve/main/Linux-LM-Qwen3-4B-sft.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Linux-LM-Qwen3-4B-sft-GGUF/resolve/main/Linux-LM-Qwen3-4B-sft.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Linux-LM-Qwen3-4B-sft-GGUF/resolve/main/Linux-LM-Qwen3-4B-sft.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Linux-LM-Qwen3-4B-sft-GGUF/resolve/main/Linux-LM-Qwen3-4B-sft.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
soocy/ROCKS.D.XEBEC
|
soocy
| 2025-08-18T15:17:22Z | 0 | 0 | null |
[
"summarization",
"en",
"dataset:nvidia/Nemotron-Post-Training-Dataset-v1",
"base_model:openai/gpt-oss-120b",
"base_model:finetune:openai/gpt-oss-120b",
"license:apache-2.0",
"region:us"
] |
summarization
| 2025-08-18T15:14:48Z |
---
license: apache-2.0
datasets:
- nvidia/Nemotron-Post-Training-Dataset-v1
language:
- en
metrics:
- bertscore
base_model:
- openai/gpt-oss-120b
new_version: tencent/Hunyuan-1.8B-Instruct
pipeline_tag: summarization
---
|
mradermacher/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000-GGUF
|
mradermacher
| 2025-08-18T15:17:17Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"sft",
"trl",
"en",
"base_model:LimbiDev/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000",
"base_model:quantized:LimbiDev/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-18T15:11:08Z |
---
base_model: LimbiDev/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000
language:
- en
library_name: transformers
model_name: LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- generated_from_trainer
- sft
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/LimbiDev/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000-GGUF/resolve/main/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000.Q2_K.gguf) | Q2_K | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000-GGUF/resolve/main/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000.Q3_K_S.gguf) | Q3_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000-GGUF/resolve/main/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000.Q3_K_M.gguf) | Q3_K_M | 0.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000-GGUF/resolve/main/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000.Q3_K_L.gguf) | Q3_K_L | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000-GGUF/resolve/main/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000-GGUF/resolve/main/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000.Q4_K_S.gguf) | Q4_K_S | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000-GGUF/resolve/main/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000-GGUF/resolve/main/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000-GGUF/resolve/main/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000.Q5_K_M.gguf) | Q5_K_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000-GGUF/resolve/main/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000.Q6_K.gguf) | Q6_K | 1.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000-GGUF/resolve/main/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000-GGUF/resolve/main/LFM2-1.2B-Bispatialstructure-Bigraph-Model-1000.f16.gguf) | f16 | 2.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Mollel/output
|
Mollel
| 2025-08-18T15:16:17Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T14:59:32Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: output
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for output
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Mollel/output", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
CharlyR/clip_distilled_rgb_emb
|
CharlyR
| 2025-08-18T15:16:12Z | 459 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:50000",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-07-08T12:47:38Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:50000
- loss:MultipleNegativesRankingLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: rgb(181,243,81)
sentences:
- Lime Green
- Neon Blue
- Dark Pink
- source_sentence: rgb(62,146,242)
sentences:
- Coral Red
- Bright Sky Blue
- Palatinate Purple
- source_sentence: rgb(7,46,65)
sentences:
- Phantom Green
- Highlighter Yellow
- Deep Atlantic Blue
- source_sentence: rgb(74,140,62)
sentences:
- Light Yellowish Green
- Opaline Green
- Intense Green
- source_sentence: rgb(186,88,123)
sentences:
- Dark Sienna
- Fuchsia Pink
- Light Pastel Green
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertModel'})
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("CharlyR/clip_distilled_rgb_emb")
# Run inference
sentences = [
'rgb(186,88,123)',
'Fuchsia Pink',
'Light Pastel Green',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, 0.7408, -0.1966],
# [ 0.7408, 1.0000, -0.2476],
# [-0.1966, -0.2476, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 50,000 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 11 tokens</li><li>mean: 11.0 tokens</li><li>max: 11 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 4.62 tokens</li><li>max: 8 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:-----------------------------|:-----------------------------|
| <code>rgb(113,78,58)</code> | <code>Brown Beige</code> |
| <code>rgb(138,167,55)</code> | <code>Pistachio Green</code> |
| <code>rgb(201,3,46)</code> | <code>Fiery Red</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
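For reference, a loss configured like this can be rebuilt directly with the Sentence Transformers API. The snippet below is a minimal sketch, assuming the same scale and cosine similarity reported above:
```python
from sentence_transformers import SentenceTransformer, losses, util

# Minimal sketch: reconstruct the loss with the parameters reported above.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
loss = losses.MultipleNegativesRankingLoss(
    model,
    scale=20.0,                   # matches "scale": 20.0
    similarity_fct=util.cos_sim,  # matches "similarity_fct": "cos_sim"
)
```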
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 10
- `fp16`: True
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
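A hedged sketch of how the non-default hyperparameters above map onto the Sentence Transformers trainer; the dataset contents and output path are placeholders, since the training set is unnamed:
```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

# Placeholder data with the same two-column layout as the unnamed training set.
train_dataset = Dataset.from_dict({
    "sentence_0": ["rgb(113,78,58)"],
    "sentence_1": ["Brown Beige"],
})

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
args = SentenceTransformerTrainingArguments(
    output_dir="clip_distilled_rgb_emb",  # hypothetical output path
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
    fp16=True,
    multi_dataset_batch_sampler="round_robin",  # string form is an assumption
)
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=losses.MultipleNegativesRankingLoss(model),
)
trainer.train()
```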
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:-----:|:-------------:|
| 0.16 | 500 | 2.8812 |
| 0.32 | 1000 | 1.6698 |
| 0.48 | 1500 | 1.1876 |
| 0.64 | 2000 | 1.0097 |
| 0.8 | 2500 | 0.9378 |
| 0.96 | 3000 | 0.8874 |
| 1.12 | 3500 | 0.8318 |
| 1.28 | 4000 | 0.8126 |
| 1.44 | 4500 | 0.7824 |
| 1.6 | 5000 | 0.7638 |
| 1.76 | 5500 | 0.7661 |
| 1.92 | 6000 | 0.7407 |
| 2.08 | 6500 | 0.7444 |
| 2.24 | 7000 | 0.7151 |
| 2.4 | 7500 | 0.7317 |
| 2.56 | 8000 | 0.6905 |
| 2.72 | 8500 | 0.6977 |
| 2.88 | 9000 | 0.6934 |
| 3.04 | 9500 | 0.6843 |
| 3.2 | 10000 | 0.6874 |
| 3.36 | 10500 | 0.6563 |
| 3.52 | 11000 | 0.6687 |
| 3.68 | 11500 | 0.6551 |
| 3.84 | 12000 | 0.6615 |
| 4.0 | 12500 | 0.6544 |
| 4.16 | 13000 | 0.6487 |
| 4.32 | 13500 | 0.6309 |
| 4.48 | 14000 | 0.6406 |
| 4.64 | 14500 | 0.6414 |
| 4.8 | 15000 | 0.6547 |
| 4.96 | 15500 | 0.6434 |
| 5.12 | 16000 | 0.6251 |
| 5.28 | 16500 | 0.628 |
| 5.44 | 17000 | 0.6468 |
| 5.6 | 17500 | 0.6258 |
| 5.76 | 18000 | 0.6346 |
| 5.92 | 18500 | 0.6199 |
| 6.08 | 19000 | 0.6231 |
| 6.24 | 19500 | 0.6008 |
| 6.4 | 20000 | 0.6146 |
| 6.56 | 20500 | 0.6261 |
| 6.72 | 21000 | 0.5964 |
| 6.88 | 21500 | 0.6168 |
| 7.04 | 22000 | 0.607 |
| 7.2 | 22500 | 0.5991 |
| 7.36 | 23000 | 0.6005 |
| 7.52 | 23500 | 0.6067 |
| 7.68 | 24000 | 0.604 |
| 7.84 | 24500 | 0.6039 |
| 8.0 | 25000 | 0.5969 |
| 8.16 | 25500 | 0.6001 |
| 8.32 | 26000 | 0.589 |
| 8.48 | 26500 | 0.5795 |
| 8.64 | 27000 | 0.5957 |
| 8.8 | 27500 | 0.5804 |
| 8.96 | 28000 | 0.6012 |
| 9.12 | 28500 | 0.5789 |
| 9.28 | 29000 | 0.5976 |
| 9.44 | 29500 | 0.6033 |
| 9.6 | 30000 | 0.5819 |
| 9.76 | 30500 | 0.5847 |
| 9.92 | 31000 | 0.5865 |
### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.0.0
- Transformers: 4.53.1
- PyTorch: 2.7.1+cu126
- Accelerate: 1.8.1
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
yaelahnal/blockassist-bc-mute_clawed_crab_1755529746
|
yaelahnal
| 2025-08-18T15:15:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T15:10:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755528375
|
sampingkaca72
| 2025-08-18T15:11:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T15:11:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hoan17/saving_LOe3000s20_scratch_1600
|
hoan17
| 2025-08-18T15:11:03Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"trl",
"o2o",
"reinforcement-learning",
"text-to-image",
"stable-diffusion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-08-18T15:10:35Z |
---
license: apache-2.0
tags:
- trl
- o2o
- diffusers
- reinforcement-learning
- text-to-image
- stable-diffusion
---
# TRL O2O Model
This is a diffusion model that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text-conditioned image generation.
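The card ships no usage snippet; below is a minimal sketch of loading the pipeline, assuming the repository hosts a standard `StableDiffusionPipeline` as the tags suggest (fp16 on GPU is also an assumption):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned pipeline; fp16 is an assumption for GPU inference.
pipe = StableDiffusionPipeline.from_pretrained(
    "hoan17/saving_LOe3000s20_scratch_1600",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photo of a mountain lake at sunrise").images[0]
image.save("sample.png")
```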
|
WenFengg/swing27_14_31_8
|
WenFengg
| 2025-08-18T15:09:27Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-08-06T14:15:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jnjnkj/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-amphibious_climbing_raven
|
jnjnkj
| 2025-08-18T15:08:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am amphibious_climbing_raven",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T11:47:55Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am amphibious_climbing_raven
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AIMindaeng/grpo-Qwen2.5-VL-3B-Instruct
|
AIMindaeng
| 2025-08-18T15:06:37Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T09:47:52Z |
---
base_model: Qwen/Qwen2.5-VL-3B-Instruct
library_name: transformers
model_name: grpo-Qwen2.5-VL-3B-Instruct
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for grpo-Qwen2.5-VL-3B-Instruct
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AIMindaeng/grpo-Qwen2.5-VL-3B-Instruct", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
srijan150/myirc_finetuned_model
|
srijan150
| 2025-08-18T15:04:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T15:03:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_3_prover0_
|
neural-interactive-proofs
| 2025-08-18T15:03:10Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T15:02:15Z |
---
base_model: Qwen/Qwen2.5-32B-Instruct
library_name: transformers
model_name: finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_3_prover0_
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_3_prover0_
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_3_prover0_", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lrhammond-team/pvg-self-hosted-finetune/runs/qwen2_5-32b-instruct_dpo_2025-08-18_15-32-56_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_3_prover0)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.18.2
- Transformers: 4.53.2
- Pytorch: 2.7.0
- Datasets: 3.0.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
moekh/Reports-OCR-Training
|
moekh
| 2025-08-18T15:02:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-17T10:25:49Z |
---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: Reports-OCR-Training
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for Reports-OCR-Training
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="moekh/Reports-OCR-Training", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/moekh-redf/Reports-OCR-H20/runs/ij335dot)
This model was trained with SFT.
### Framework versions
- TRL: 0.20.0
- Transformers: 4.53.2
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
paperboygold/gpt-oss-sanguine-20b-4bit-bnb
|
paperboygold
| 2025-08-18T15:01:26Z | 0 | 0 | null |
[
"safetensors",
"gpt_oss",
"quantized",
"gpt-oss",
"roleplay",
"consequence-based-alignment",
"en",
"zh",
"dataset:paperboygold/sanguine-dataset-v1",
"base_model:paperboygold/gpt-oss-sanguine-20b-v1",
"base_model:quantized:paperboygold/gpt-oss-sanguine-20b-v1",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-08-18T14:53:00Z |
---
license: mit
base_model: paperboygold/gpt-oss-sanguine-20b-v1
tags:
- quantized
- gpt-oss
- roleplay
- consequence-based-alignment
datasets:
- paperboygold/sanguine-dataset-v1
language:
- en
- zh
---
# sanguine-scribe-4bit-bnb
4-bit quantized version using BitsAndBytes for efficient GPU inference.
This is a quantized version of [gpt-oss-sanguine-20b-v1](https://huggingface.co/paperboygold/gpt-oss-sanguine-20b-v1), a consequence-based alignment model for character roleplay.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("paperboygold/sanguine-scribe-4bit-bnb")
model = AutoModelForCausalLM.from_pretrained(
"paperboygold/sanguine-scribe-4bit-bnb",
device_map="auto",
trust_remote_code=True
)
```
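A hedged follow-up sketch of running generation once the quantized weights are loaded; it reuses `tokenizer` and `model` from the snippet above and assumes the tokenizer ships a chat template, as gpt-oss checkpoints typically do:
```python
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sample a short reply; generation settings are illustrative, not from the card.
output_ids = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True))
```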
## Original Model
- **Base Model**: openai/gpt-oss-20b
- **Training Dataset**: [sanguine-dataset-v1](https://huggingface.co/datasets/paperboygold/sanguine-dataset-v1) (350K examples)
- **Training Loss**: 4.1 → 1.31 (500 steps)
|
ICTuniverse/unsloth-Qwen3-14B-bnb-4bit-finetuned
|
ICTuniverse
| 2025-08-18T15:00:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T14:59:33Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ShihteSiao/Talkia_LoRA
|
ShihteSiao
| 2025-08-18T14:59:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T11:01:57Z |
---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ShihteSiao
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755527295
|
hakimjustbao
| 2025-08-18T14:57:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T14:57:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koloni/blockassist-bc-deadly_graceful_stingray_1755527335
|
koloni
| 2025-08-18T14:57:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T14:57:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bankimds/blockassist-bc-padded_scented_otter_1755526546
|
bankimds
| 2025-08-18T14:57:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"padded scented otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T14:57:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- padded scented otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
2hpsatt/blockassist-bc-huge_deft_eagle_1755528747
|
2hpsatt
| 2025-08-18T14:53:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"huge deft eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T14:53:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge deft eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
alphateach/affine-4363576
|
alphateach
| 2025-08-18T14:53:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"vllm",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"mxfp4",
"region:us"
] |
text-generation
| 2025-08-18T14:53:08Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- vllm
---
<p align="center">
<img alt="gpt-oss-120b" src="https://raw.githubusercontent.com/openai/gpt-oss/main/docs/gpt-oss-120b.svg">
</p>
<p align="center">
<a href="https://gpt-oss.com"><strong>Try gpt-oss</strong></a> ·
<a href="https://cookbook.openai.com/topic/gpt-oss"><strong>Guides</strong></a> ·
<a href="https://openai.com/index/gpt-oss-model-card"><strong>Model card</strong></a> ·
<a href="https://openai.com/index/introducing-gpt-oss/"><strong>OpenAI blog</strong></a>
</p>
<br>
Welcome to the gpt-oss series, [OpenAI’s open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases.
We’re releasing two flavors of these open models:
- `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single 80GB GPU (like NVIDIA H100 or AMD MI300X) (117B parameters with 5.1B active parameters)
- `gpt-oss-20b` — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters)
Both models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format as it will not work correctly otherwise.
> [!NOTE]
> This model card is dedicated to the larger `gpt-oss-120b` model. Check out [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) for the smaller model.
# Highlights
* **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment.
* **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs.
* **Full chain-of-thought:** Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. It’s not intended to be shown to end users.
* **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning.
* **Agentic capabilities:** Use the models’ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs.
* **MXFP4 quantization:** The models were post-trained with MXFP4 quantization of the MoE weights, making `gpt-oss-120b` run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and the `gpt-oss-20b` model run within 16GB of memory. All evals were performed with the same MXFP4 quantization.
---
# Inference examples
## Transformers
You can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or use our [openai-harmony](https://github.com/openai/harmony) package.
To get started, install the necessary dependencies to setup your environment:
```
pip install -U transformers kernels torch
```
Once set up, you can run the model with the snippet below:
```py
from transformers import pipeline
import torch
model_id = "openai/gpt-oss-120b"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype="auto",
device_map="auto",
)
messages = [
{"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
Alternatively, you can run the model via [`Transformers Serve`](https://huggingface.co/docs/transformers/main/serving) to spin up a OpenAI-compatible webserver:
```
transformers serve
transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-120b
```
[Learn more about how to use gpt-oss with Transformers.](https://cookbook.openai.com/articles/gpt-oss/run-transformers)
## vLLM
vLLM recommends using [uv](https://docs.astral.sh/uv/) for Python dependency management. You can use vLLM to spin up an OpenAI-compatible webserver. The following command will automatically download the model and start the server.
```bash
uv pip install --pre vllm==0.10.1+gptoss \
--extra-index-url https://wheels.vllm.ai/gpt-oss/ \
--extra-index-url https://download.pytorch.org/whl/nightly/cu128 \
--index-strategy unsafe-best-match
vllm serve openai/gpt-oss-120b
```
[Learn more about how to use gpt-oss with vLLM.](https://cookbook.openai.com/articles/gpt-oss/run-vllm)
## PyTorch / Triton
To learn about how to use this model with PyTorch and Triton, check out our [reference implementations in the gpt-oss repository](https://github.com/openai/gpt-oss?tab=readme-ov-file#reference-pytorch-implementation).
## Ollama
If you are trying to run gpt-oss on consumer hardware, you can use Ollama by running the following commands after [installing Ollama](https://ollama.com/download).
```bash
# gpt-oss-120b
ollama pull gpt-oss:120b
ollama run gpt-oss:120b
```
[Learn more about how to use gpt-oss with Ollama.](https://cookbook.openai.com/articles/gpt-oss/run-locally-ollama)
## LM Studio
If you are using [LM Studio](https://lmstudio.ai/) you can use the following commands to download.
```bash
# gpt-oss-120b
lms get openai/gpt-oss-120b
```
Check out our [awesome list](https://github.com/openai/gpt-oss/blob/main/awesome-gpt-oss.md) for a broader collection of gpt-oss resources and inference partners.
---
# Download the model
You can download the model weights from the [Hugging Face Hub](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4) directly from Hugging Face CLI:
```shell
# gpt-oss-120b
huggingface-cli download openai/gpt-oss-120b --include "original/*" --local-dir gpt-oss-120b/
pip install gpt-oss
python -m gpt_oss.chat model/
```
# Reasoning levels
You can adjust the reasoning level that suits your task across three levels:
* **Low:** Fast responses for general dialogue.
* **Medium:** Balanced speed and detail.
* **High:** Deep and detailed analysis.
The reasoning level can be set in the system prompts, e.g., "Reasoning: high".
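For example, a minimal sketch of selecting the reasoning level through the system message; the `pipe` object is the one built in the Transformers snippet above:
```py
messages = [
    {"role": "system", "content": "Reasoning: high"},
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
outputs = pipe(messages, max_new_tokens=256)
print(outputs[0]["generated_text"][-1])
```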
# Tool use
The gpt-oss models are excellent for:
* Web browsing (using built-in browsing tools)
* Function calling with defined schemas (see the sketch after this list)
* Agentic operations like browser tasks
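A hedged sketch of function calling via the Transformers chat template; the `get_weather` tool is hypothetical, and the `tools=` argument is standard Transformers chat-template behavior rather than anything specific to this card:
```py
from transformers import AutoTokenizer

def get_weather(city: str) -> str:
    """
    Return the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    return "sunny"  # hypothetical stub

tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-120b")
messages = [{"role": "user", "content": "What's the weather in Paris?"}]

# Render a prompt that advertises the tool schema to the model.
prompt = tokenizer.apply_chat_template(
    messages,
    tools=[get_weather],
    add_generation_prompt=True,
    tokenize=False,
)
print(prompt)
```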
# Fine-tuning
Both gpt-oss models can be fine-tuned for a variety of specialized use cases.
This larger model `gpt-oss-120b` can be fine-tuned on a single H100 node, whereas the smaller [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) can even be fine-tuned on consumer hardware.
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755526813
|
helmutsukocok
| 2025-08-18T14:48:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T14:48:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NeuralNovel/Gecko-7B-v0.1
|
NeuralNovel
| 2025-08-18T14:48:24Z | 763 | 6 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-01-15T23:09:30Z |
---
license: apache-2.0
library_name: transformers
base_model: mistralai/Mistral-7B-Instruct-v0.2
inference: false
model-index:
- name: Gecko-7B-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 61.35
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Gecko-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.36
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Gecko-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.05
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Gecko-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 62.6
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Gecko-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Gecko-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 41.55
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Gecko-7B-v0.1
name: Open LLM Leaderboard
---

# Gecko-7B-v0.1
Designed to generate instructive and narrative text, with a focus on mathematics & numeracy.
A full-parameter fine-tune (FFT) of Mistral-7B-Instruct-v0.2, released under the Apache-2.0 license.
You may download and use this model for research, training and commercial purposes.
<a href='https://ko-fi.com/S6S2UH2TC' target='_blank'><img height='38' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
<a href='https://discord.gg/KFS229xD' target='_blank'><img width='140' height='500' style='border:0px;height:36px;' src='https://i.ibb.co/tqwznYM/Discord-button.png' border='0' alt='Join Our Discord!' /></a>
### Data-set
The model was finetuned using the Neural-Mini-Math dataset (Currently Private)
### Summary
Fine-tuned with the intention of following all prompt directions, making it more suitable for roleplay and problem solving.
#### Out-of-Scope Use
The model may not perform well in scenarios unrelated to instructive and narrative text generation. Misuse or applications outside its designed scope may result in suboptimal outcomes.
### Bias, Risks, and Limitations
This model may not work as intended. As such, all users are encouraged to use this model with caution and respect.
This model is for testing and research purposes only; it has reduced levels of alignment and, as a result, may produce NSFW or harmful content.
The user is responsible for their output and must use this model responsibly.
### Hardware and Training
```
n_epochs = 3,
n_checkpoints = 3,
batch_size = 12,
learning_rate = 1e-5,
```
*Sincere appreciation to Techmind for their generous sponsorship.*
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NeuralNovel__Gecko-7B-v0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |64.58|
|AI2 Reasoning Challenge (25-Shot)|61.35|
|HellaSwag (10-Shot) |83.36|
|MMLU (5-Shot) |61.05|
|TruthfulQA (0-shot) |62.60|
|Winogrande (5-shot) |77.58|
|GSM8k (5-shot) |41.55|
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755526954
|
quantumxnode
| 2025-08-18T14:48:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T14:48:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ReIceCream/HUAFENGzuowen3
|
ReIceCream
| 2025-08-18T14:47:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T14:02:59Z |
---
base_model: unsloth/deepseek-r1-distill-qwen-32b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ReIceCream
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-qwen-32b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
doshirak11/blockassist-bc-slimy_amphibious_ape_1755528074
|
doshirak11
| 2025-08-18T14:42:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"slimy amphibious ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T14:41:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- slimy amphibious ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
johngreendr1/2832e775-a53d-465f-a440-79878d910ce7
|
johngreendr1
| 2025-08-18T14:34:30Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"region:us"
] | null | 2025-08-18T11:26:04Z |
---
base_model: HuggingFaceH4/zephyr-7b-beta
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
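In the absence of author-provided code, a minimal loading sketch (an assumption based on the repo metadata, which lists a PEFT adapter for `HuggingFaceH4/zephyr-7b-beta`):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch: attach the adapter hosted in this repo to its base model.
base = AutoModelForCausalLM.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
model = PeftModel.from_pretrained(base, "johngreendr1/2832e775-a53d-465f-a440-79878d910ce7")
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
```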
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
aiface/phobert-large_nli
|
aiface
| 2025-08-18T14:31:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-large",
"base_model:finetune:vinai/phobert-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-18T11:25:21Z |
---
library_name: transformers
license: mit
base_model: vinai/phobert-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: phobert-large_nli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phobert-large_nli
This model is a fine-tuned version of [vinai/phobert-large](https://huggingface.co/vinai/phobert-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3062
- Accuracy: 0.8102
- Precision Macro: 0.8106
- Recall Macro: 0.8103
- F1 Macro: 0.8103
- F1 Weighted: 0.8103
## Model description
More information needed
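Pending details from the author, a minimal inference sketch (assumptions: the standard `transformers` text-classification pipeline and NLI-style premise/hypothesis input; PhoBERT normally expects word-segmented Vietnamese, and the label mapping comes from the undocumented training set):
```python
from transformers import pipeline

# Minimal sketch: classify a premise/hypothesis pair.
# Inputs should be word-segmented (e.g. with VnCoreNLP) as PhoBERT expects.
nli = pipeline("text-classification", model="aiface/phobert-large_nli")
pair = {"text": "Trời hôm_nay mưa rất to .", "text_pair": "Hôm_nay trời có mưa ."}
print(nli(pair))
```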
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 256
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision Macro | Recall Macro | F1 Macro | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:------------:|:--------:|:-----------:|
| 1.0976 | 1.0 | 72 | 1.0257 | 0.5237 | 0.5529 | 0.5264 | 0.5082 | 0.5072 |
| 0.9271 | 2.0 | 144 | 0.6649 | 0.7592 | 0.7887 | 0.7579 | 0.7590 | 0.7590 |
| 0.4037 | 3.0 | 216 | 0.5864 | 0.7894 | 0.7930 | 0.7895 | 0.7895 | 0.7895 |
| 0.2866 | 4.0 | 288 | 0.6385 | 0.8120 | 0.8142 | 0.8125 | 0.8118 | 0.8118 |
| 0.1197 | 5.0 | 360 | 0.6949 | 0.8115 | 0.8117 | 0.8115 | 0.8115 | 0.8115 |
| 0.0939 | 6.0 | 432 | 0.7485 | 0.8058 | 0.8084 | 0.8060 | 0.8058 | 0.8059 |
| 0.0647 | 7.0 | 504 | 0.9244 | 0.7920 | 0.7977 | 0.7921 | 0.7919 | 0.7918 |
| 0.0457 | 8.0 | 576 | 0.8464 | 0.8106 | 0.8107 | 0.8107 | 0.8106 | 0.8106 |
| 0.046 | 9.0 | 648 | 0.9886 | 0.8062 | 0.8121 | 0.8066 | 0.8064 | 0.8063 |
| 0.026 | 10.0 | 720 | 0.9887 | 0.8120 | 0.8126 | 0.8121 | 0.8120 | 0.8121 |
| 0.0244 | 11.0 | 792 | 1.0642 | 0.8124 | 0.8130 | 0.8126 | 0.8125 | 0.8125 |
| 0.0211 | 12.0 | 864 | 1.0197 | 0.8075 | 0.8097 | 0.8078 | 0.8077 | 0.8077 |
| 0.0146 | 13.0 | 936 | 1.1487 | 0.8151 | 0.8171 | 0.8155 | 0.8151 | 0.8151 |
| 0.0085 | 14.0 | 1008 | 1.1846 | 0.8053 | 0.8056 | 0.8053 | 0.8053 | 0.8053 |
| 0.0051 | 15.0 | 1080 | 1.2905 | 0.8084 | 0.8095 | 0.8085 | 0.8084 | 0.8084 |
| 0.0036 | 16.0 | 1152 | 1.3259 | 0.8102 | 0.8121 | 0.8104 | 0.8104 | 0.8104 |
| 0.0027 | 17.0 | 1224 | 1.3187 | 0.8115 | 0.8121 | 0.8115 | 0.8116 | 0.8116 |
| 0.0023 | 18.0 | 1296 | 1.3024 | 0.8115 | 0.8120 | 0.8117 | 0.8116 | 0.8116 |
| 0.0025 | 19.0 | 1368 | 1.3049 | 0.8111 | 0.8115 | 0.8112 | 0.8111 | 0.8111 |
| 0.0037 | 20.0 | 1440 | 1.3062 | 0.8102 | 0.8106 | 0.8103 | 0.8103 | 0.8103 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.7.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
mradermacher/Lacaille-MoT-4B-Supreme2-GGUF
|
mradermacher
| 2025-08-18T14:25:32Z | 2,173 | 1 |
transformers
|
[
"transformers",
"gguf",
"moe",
"trl",
"thinking=1",
"mot",
"code",
"science",
"math",
"mixture-of-thoughts",
"supreme2",
"stem",
"text-generation-inference",
"reasoning",
"en",
"zh",
"dataset:open-r1/Mixture-of-Thoughts",
"dataset:nvidia/OpenCodeReasoning",
"base_model:prithivMLmods/Lacaille-MoT-4B-Supreme2",
"base_model:quantized:prithivMLmods/Lacaille-MoT-4B-Supreme2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-02T09:02:10Z |
---
base_model: prithivMLmods/Lacaille-MoT-4B-Supreme2
datasets:
- open-r1/Mixture-of-Thoughts
- nvidia/OpenCodeReasoning
language:
- en
- zh
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- moe
- trl
- thinking=1
- mot
- code
- science
- math
- mixture-of-thoughts
- supreme2
- stem
- text-generation-inference
- reasoning
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/prithivMLmods/Lacaille-MoT-4B-Supreme2
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Lacaille-MoT-4B-Supreme2-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
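Alternatively, a downloaded quant can be run locally with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) (a minimal sketch; the file name is the Q4_K_M quant from the table below, and the prompt is illustrative):
```python
from llama_cpp import Llama

# Minimal sketch: run a downloaded quant with llama-cpp-python.
llm = Llama(model_path="Lacaille-MoT-4B-Supreme2.Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain mixture-of-thoughts reasoning in one paragraph.", max_tokens=256)
print(out["choices"][0]["text"])
```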
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
koloni/blockassist-bc-deadly_graceful_stingray_1755525398
|
koloni
| 2025-08-18T14:25:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T14:25:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kenil-patel-183/MNIST_classification_using_CNN
|
kenil-patel-183
| 2025-08-18T14:24:12Z | 0 | 0 | null |
[
"mnist",
"pytorch",
"DL",
"CNN",
"image-classification",
"en",
"license:mit",
"region:us"
] |
image-classification
| 2025-08-17T12:47:43Z |
---
language: en
license: mit
tags:
- mnist
- pytorch
- DL
- CNN
pipeline_tag: image-classification
---
# MNIST Classification using CNN
This model predicts digits (0-9) from the MNIST dataset using a custom PyTorch CNN, deployed on Hugging Face for inference.
## How to Use
You can use this model with the Hugging Face Inference API:
```bash
curl https://api-inference.huggingface.co/models/<username>/<model-name> \
  -H "Authorization: Bearer YOUR_HF_TOKEN" \
  -d '{"inputs": {"image": "<image_url>"}}'
```
|
TuKoResearch/AuriStream1B_40Pred_librilight_500k
|
TuKoResearch
| 2025-08-18T14:23:13Z | 180 | 0 |
transformers
|
[
"transformers",
"safetensors",
"AuriStream.AuriStream",
"feature-extraction",
"audio",
"speech",
"autoregressive",
"custom_code",
"en",
"dataset:LibriLight",
"arxiv:2508.11598",
"license:apache-2.0",
"region:us"
] |
feature-extraction
| 2025-05-03T21:45:41Z |
---
language:
- en
library_name: transformers
pipeline_tag: feature-extraction
tags:
- audio
- speech
- autoregressive
- transformers
- custom_code
datasets:
- LibriLight
license: apache-2.0
pretty_name: AuriStream-1B (40-pred)
---
# AuriStream-1B
**AuriStream** is a biologically-inspired, GPT-style autoregressive Transformer trained to predict **cochlear tokens** - discrete codes produced by a companion “WavCoch” tokenizer - over long speech contexts (through **transformation imitation**). AuriStream utilizes a long context window (\~20 s, \~4096 tokens) and is trained on **LibriLight (\~60k h)** for **\~500k steps**. It learns rich, time-aligned representations (useful for linear probing) and can roll out future tokens to generate **speech continuations**. Inputs are **token IDs**; use it with a WavCoch quantizer for audio->tokens and with the built-in vocoder for tokens->audio.
---
## Installation
```bash
pip install -U torch torchaudio transformers
```
This model uses custom code; when loading from Hugging Face, pass `trust_remote_code=True`.
---
## Use Case 1) get hidden‑state embeddings from a WAV
```python
import torch, torchaudio
from transformers import AutoModel
device = "cuda" if torch.cuda.is_available() else "cpu"
# 1) Load the WavCoch tokenizer (audio -> token IDs)
quantizer = AutoModel.from_pretrained(
"TuKoResearch/WavCochV8192", trust_remote_code=True
).to(device).eval()
# 2) Load the AuriStream LM (tokens -> hidden states / next-token preds)
lm = AutoModel.from_pretrained(
"TuKoResearch/AuriStream1B_40Pred_librilight_500k", trust_remote_code=True
).to(device).eval()
# 3) Read an audio file (mono, 16 kHz recommended)
wav, sr = torchaudio.load("sample.wav")
if wav.size(0) > 1: # stereo -> mono
wav = wav.mean(dim=0, keepdim=True)
if sr != 16_000:
wav = torchaudio.transforms.Resample(sr, 16_000)(wav)
sr = 16_000
# 4) Quantize to cochlear token IDs
with torch.no_grad():
# quantizer.quantize expects (B, T); returns LongTensor (B, L)
token_ids = quantizer.quantize(wav.unsqueeze(0).to(device)) # (1, L)
# 5) Forward pass with hidden states
with torch.no_grad():
out = lm(token_ids, output_hidden_states=True)
last_layer = out["hidden_states"][-1] # (1, T, D)
clip_embedding = last_layer.mean(dim=1) # time mean-pool -> (1, D)
print("Pooled embedding shape:", clip_embedding.shape)
```
**Notes**
* `output_hidden_states=True` returns all layers; choose a layer or pool over time.
* For word/phone segments, slice the time axis before pooling.
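For example, a segment-level embedding can be pooled from a slice of the sequence (a sketch; the layer choice and token boundaries below are illustrative assumptions):
```python
# Hypothetical word/phone segment spanning tokens [t0, t1) of the sequence.
t0, t1 = 10, 25
layer = out["hidden_states"][-1]                    # (1, T, D), last layer
segment_embedding = layer[:, t0:t1, :].mean(dim=1)  # (1, D)
```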
---
## Use Case 2) generate a speech continuation (token rollout)
```python
import torch, torchaudio
from transformers import AutoModel
device = "cuda" if torch.cuda.is_available() else "cpu"
# WavCoch tokenizer (audio->tokens, tokens->cochleagram->audio)
quantizer = AutoModel.from_pretrained(
"TuKoResearch/WavCochV8192", trust_remote_code=True
).to(device).eval()
# AuriStream LM (tokens->next tokens)
lm = AutoModel.from_pretrained(
"TuKoResearch/AuriStream1B_40Pred_librilight_500k", trust_remote_code=True
).to(device).eval()
# Load & prep a short prompt (e.g., 3s of audio at 16 kHz)
wav, sr = torchaudio.load("prompt.wav")
if wav.size(0) > 1:
wav = wav.mean(dim=0, keepdim=True)
if sr != 16_000:
wav = torchaudio.transforms.Resample(sr, 16_000)(wav)
sr = 16_000
prompt_seconds = 3
wav = wav[:, : sr * prompt_seconds]
# Quantize prompt to token IDs
with torch.no_grad():
prompt_tokens = quantizer.quantize(wav.unsqueeze(0).to(device)) # (1, L)
# Decide how many future tokens to generate
tokens_per_sec = prompt_tokens.size(1) / float(prompt_seconds)
rollout_seconds = 3
rollout_steps = int(round(tokens_per_sec * rollout_seconds))
# Roll out future tokens
with torch.no_grad():
# returns (pred_tokens, pred_logits); temperature/top_k/top_p/seed optional
pred_tokens, _ = lm.generate(
prompt_tokens, rollout_steps, temp=0.7, top_k=50, top_p=0.95, seed=0
)
full_tokens = torch.cat([prompt_tokens, pred_tokens], dim=1) # (1, L+K)
```
---
## Citation
If you use this model, please cite:
```bibtex
@misc{tuckute2025cochleartokens,
title = {Representing Speech Through Autoregressive Prediction of Cochlear Tokens},
author = {Tuckute, Greta and Kotar, Klemen and Fedorenko, Evelina and Yamins, Daniel L. K.},
year = {2025},
eprint = {2508.11598},
archivePrefix = {arXiv},
url = {https://arxiv.org/abs/2508.11598}
}
```
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755525187
|
vwzyrraz7l
| 2025-08-18T14:22:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T14:22:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
unitova/blockassist-bc-zealous_sneaky_raven_1755525303
|
unitova
| 2025-08-18T14:20:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T14:20:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755525241
|
quantumxnode
| 2025-08-18T14:19:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T14:19:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
WaiLwin/topology_results
|
WaiLwin
| 2025-08-18T14:18:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-18T14:17:52Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: topology_results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# topology_results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0109
- Accuracy: 0.9977
- F1: 0.9977
- Precision: 0.9977
- Recall: 0.9977
## Model description
More information needed
## Intended uses & limitations
More information needed
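As a starting point, a minimal inference sketch (the id-to-label mapping depends on the undocumented training data and should be checked against the model config):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Minimal sketch: score one input with the fine-tuned classifier.
tok = AutoTokenizer.from_pretrained("WaiLwin/topology_results")
model = AutoModelForSequenceClassification.from_pretrained("WaiLwin/topology_results")
inputs = tok("example input text", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```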
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0003 | 1.0 | 1004 | 0.0130 | 0.9977 | 0.9977 | 0.9977 | 0.9977 |
| 0.0001 | 2.0 | 2008 | 0.0158 | 0.9965 | 0.9965 | 0.9965 | 0.9965 |
| 0.0001 | 3.0 | 3012 | 0.0036 | 0.9988 | 0.9988 | 0.9988 | 0.9988 |
### Framework versions
- Transformers 4.55.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Flo0620/Qwen2_5_7B_r256_a256_d0_2_CombinedOhneTestSplits
|
Flo0620
| 2025-08-18T14:17:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-16T12:14:54Z |
---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: Qwen2_5_7B_r256_a256_d0_2_CombinedOhneTestSplits
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2_5_7B_r256_a256_d0_2_CombinedOhneTestSplits
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Flo0620/Qwen2_5_7B_r256_a256_d0_2_CombinedOhneTestSplits", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Azurastar2903/Qwen2.5-3B-rk3588-1.1.2
|
Azurastar2903
| 2025-08-18T14:17:14Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"arxiv:2407.10671",
"license:other",
"region:us"
] |
text-generation
| 2025-08-18T11:47:32Z |
---
language:
- en
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-3B/blob/main/LICENSE
pipeline_tag: text-generation
---
# Qwen2.5-3B-RK3588-1.1.2
This version of Qwen2.5-3B has been converted to run on the RK3588 NPU using w8a8_g512 quantization.
This model has been optimized with the following LoRA:
Compatible with RKLLM version: 1.1.2
## Useful links:
[Official RKLLM GitHub](https://github.com/airockchip/rknn-llm)
[RockhipNPU Reddit](https://reddit.com/r/RockchipNPU)
[EZRKNN-LLM](https://github.com/Pelochus/ezrknn-llm/)
Pretty much anything by these folks: [marty1885](https://github.com/marty1885) and [happyme531](https://huggingface.co/happyme531)
Converted using https://github.com/c0zaut/ez-er-rkllm-toolkit
# Original Model Card for base model, Qwen2.5-3B, below:
# Qwen2.5-3B
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the base 3B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 3.09B
- Number of Parameters (Non-Embedding): 2.77B
- Number of Layers: 36
- Number of Attention Heads (GQA): 16 for Q and 2 for KV
- Context Length: Full 32,768 tokens
**We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
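A quick way to guard against this before loading the model (a small sketch using the `packaging` version helper):
```python
from packaging import version
import transformers

# Guard against the KeyError above: require transformers >= 4.37.0.
assert version.parse(transformers.__version__) >= version.parse("4.37.0"), \
    "please upgrade: pip install -U transformers"
```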
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
Darshan57/gemma1b_18_aug_v2
|
Darshan57
| 2025-08-18T14:16:46Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T13:14:21Z |
---
base_model: google/gemma-3-1b-it
library_name: transformers
model_name: gemma1b_18_aug_v2
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gemma1b_18_aug_v2
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Darshan57/gemma1b_18_aug_v2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 3.1.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
donoway/ARC-Easy_Llama-3.2-1B-nwf15c6a
|
donoway
| 2025-08-18T14:16:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T14:00:43Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ARC-Easy_Llama-3.2-1B-nwf15c6a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ARC-Easy_Llama-3.2-1B-nwf15c6a
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6837
- Model Preparation Time: 0.0056
- Mdl: 562.2638
- Accumulated Loss: 389.7316
- Correct Preds: 444.0
- Total Preds: 570.0
- Accuracy: 0.7789
- Correct Gen Preds: 0.0
- Gen Accuracy: 0.0
- Correct Gen Preds 32: 0.0
- Correct Preds 32: 119.0
- Total Labels 32: 158.0
- Accuracy 32: 0.7532
- Gen Accuracy 32: 0.0
- Correct Gen Preds 33: 0.0
- Correct Preds 33: 126.0
- Total Labels 33: 152.0
- Accuracy 33: 0.8289
- Gen Accuracy 33: 0.0
- Correct Gen Preds 34: 0.0
- Correct Preds 34: 111.0
- Total Labels 34: 142.0
- Accuracy 34: 0.7817
- Gen Accuracy 34: 0.0
- Correct Gen Preds 35: 0.0
- Correct Preds 35: 88.0
- Total Labels 35: 118.0
- Accuracy 35: 0.7458
- Gen Accuracy 35: 0.0
- Correct Gen Preds 36: 0.0
- Correct Preds 36: 0.0
- Total Labels 36: 0.0
- Accuracy 36: 0.0
- Gen Accuracy 36: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 112
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.001
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 32 | Correct Preds 32 | Total Labels 32 | Accuracy 32 | Gen Accuracy 32 | Correct Gen Preds 33 | Correct Preds 33 | Total Labels 33 | Accuracy 33 | Gen Accuracy 33 | Correct Gen Preds 34 | Correct Preds 34 | Total Labels 34 | Accuracy 34 | Gen Accuracy 34 | Correct Gen Preds 35 | Correct Preds 35 | Total Labels 35 | Accuracy 35 | Gen Accuracy 35 | Correct Gen Preds 36 | Correct Preds 36 | Total Labels 36 | Accuracy 36 | Gen Accuracy 36 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|
| No log | 0 | 0 | 1.5354 | 0.0056 | 1262.6022 | 875.1692 | 172.0 | 570.0 | 0.3018 | 170.0 | 0.2982 | 154.0 | 154.0 | 158.0 | 0.9747 | 0.9747 | 0.0 | 0.0 | 152.0 | 0.0 | 0.0 | 15.0 | 17.0 | 142.0 | 0.1197 | 0.1056 | 1.0 | 1.0 | 118.0 | 0.0085 | 0.0085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.4429 | 1.0 | 36 | 0.6869 | 0.0056 | 564.8439 | 391.5200 | 430.0 | 570.0 | 0.7544 | 60.0 | 0.1053 | 0.0 | 130.0 | 158.0 | 0.8228 | 0.0 | 10.0 | 113.0 | 152.0 | 0.7434 | 0.0658 | 43.0 | 108.0 | 142.0 | 0.7606 | 0.3028 | 7.0 | 79.0 | 118.0 | 0.6695 | 0.0593 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.1294 | 2.0 | 72 | 0.6837 | 0.0056 | 562.2638 | 389.7316 | 444.0 | 570.0 | 0.7789 | 0.0 | 0.0 | 0.0 | 119.0 | 158.0 | 0.7532 | 0.0 | 0.0 | 126.0 | 152.0 | 0.8289 | 0.0 | 0.0 | 111.0 | 142.0 | 0.7817 | 0.0 | 0.0 | 88.0 | 118.0 | 0.7458 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0423 | 3.0 | 108 | 0.9005 | 0.0056 | 740.5548 | 513.3135 | 418.0 | 570.0 | 0.7333 | 411.0 | 0.7211 | 110.0 | 116.0 | 158.0 | 0.7342 | 0.6962 | 128.0 | 128.0 | 152.0 | 0.8421 | 0.8421 | 102.0 | 103.0 | 142.0 | 0.7254 | 0.7183 | 71.0 | 71.0 | 118.0 | 0.6017 | 0.6017 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0101 | 4.0 | 144 | 1.1974 | 0.0056 | 984.6596 | 682.5140 | 429.0 | 570.0 | 0.7526 | 111.0 | 0.1947 | 0.0 | 117.0 | 158.0 | 0.7405 | 0.0 | 49.0 | 124.0 | 152.0 | 0.8158 | 0.3224 | 59.0 | 109.0 | 142.0 | 0.7676 | 0.4155 | 3.0 | 79.0 | 118.0 | 0.6695 | 0.0254 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0022 | 5.0 | 180 | 1.9793 | 0.0056 | 1627.6320 | 1128.1885 | 428.0 | 570.0 | 0.7509 | 384.0 | 0.6737 | 85.0 | 112.0 | 158.0 | 0.7089 | 0.5380 | 118.0 | 119.0 | 152.0 | 0.7829 | 0.7763 | 109.0 | 109.0 | 142.0 | 0.7676 | 0.7676 | 72.0 | 88.0 | 118.0 | 0.7458 | 0.6102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0004 | 6.0 | 216 | 2.1635 | 0.0056 | 1779.1008 | 1233.1787 | 440.0 | 570.0 | 0.7719 | 236.0 | 0.4140 | 17.0 | 126.0 | 158.0 | 0.7975 | 0.1076 | 79.0 | 118.0 | 152.0 | 0.7763 | 0.5197 | 92.0 | 112.0 | 142.0 | 0.7887 | 0.6479 | 48.0 | 84.0 | 118.0 | 0.7119 | 0.4068 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0001 | 7.0 | 252 | 2.1693 | 0.0056 | 1783.8703 | 1236.4847 | 421.0 | 570.0 | 0.7386 | 266.0 | 0.4667 | 11.0 | 102.0 | 158.0 | 0.6456 | 0.0696 | 104.0 | 122.0 | 152.0 | 0.8026 | 0.6842 | 108.0 | 112.0 | 142.0 | 0.7887 | 0.7606 | 43.0 | 85.0 | 118.0 | 0.7203 | 0.3644 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0001 | 8.0 | 288 | 2.0189 | 0.0056 | 1660.2508 | 1150.7982 | 434.0 | 570.0 | 0.7614 | 225.0 | 0.3947 | 2.0 | 119.0 | 158.0 | 0.7532 | 0.0127 | 80.0 | 118.0 | 152.0 | 0.7763 | 0.5263 | 106.0 | 114.0 | 142.0 | 0.8028 | 0.7465 | 37.0 | 83.0 | 118.0 | 0.7034 | 0.3136 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0007 | 9.0 | 324 | 2.0142 | 0.0056 | 1656.3598 | 1148.1011 | 433.0 | 570.0 | 0.7596 | 197.0 | 0.3456 | 0.0 | 113.0 | 158.0 | 0.7152 | 0.0 | 66.0 | 123.0 | 152.0 | 0.8092 | 0.4342 | 107.0 | 114.0 | 142.0 | 0.8028 | 0.7535 | 24.0 | 83.0 | 118.0 | 0.7034 | 0.2034 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0001 | 10.0 | 360 | 1.9393 | 0.0056 | 1594.7939 | 1105.4269 | 433.0 | 570.0 | 0.7596 | 169.0 | 0.2965 | 1.0 | 129.0 | 158.0 | 0.8165 | 0.0063 | 56.0 | 121.0 | 152.0 | 0.7961 | 0.3684 | 102.0 | 109.0 | 142.0 | 0.7676 | 0.7183 | 10.0 | 74.0 | 118.0 | 0.6271 | 0.0847 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 11.0 | 396 | 1.9981 | 0.0056 | 1643.0903 | 1138.9034 | 437.0 | 570.0 | 0.7667 | 205.0 | 0.3596 | 3.0 | 124.0 | 158.0 | 0.7848 | 0.0190 | 69.0 | 119.0 | 152.0 | 0.7829 | 0.4539 | 109.0 | 113.0 | 142.0 | 0.7958 | 0.7676 | 24.0 | 81.0 | 118.0 | 0.6864 | 0.2034 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0001 | 12.0 | 432 | 2.0404 | 0.0056 | 1677.8660 | 1163.0081 | 439.0 | 570.0 | 0.7702 | 213.0 | 0.3737 | 4.0 | 126.0 | 158.0 | 0.7975 | 0.0253 | 72.0 | 119.0 | 152.0 | 0.7829 | 0.4737 | 109.0 | 113.0 | 142.0 | 0.7958 | 0.7676 | 28.0 | 81.0 | 118.0 | 0.6864 | 0.2373 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
Azurastar2903/Qwen2.5-0.5B-rk3588-1.1.2
|
Azurastar2903
| 2025-08-18T14:15:14Z | 0 | 0 |
transformers
|
[
"transformers",
"qwen2",
"text-generation",
"conversational",
"en",
"arxiv:2407.10671",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T14:04:57Z |
---
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B/blob/main/LICENSE
pipeline_tag: text-generation
---
# Qwen2.5-0.5B-RK3588-1.1.2
This version of Qwen2.5-0.5B has been converted to run on the RK3588 NPU using w8a8_g128 quantization.
This model has been optimized with the following LoRA:
Compatible with RKLLM version: 1.1.2
## Useful links:
[Official RKLLM GitHub](https://github.com/airockchip/rknn-llm)
[RockhipNPU Reddit](https://reddit.com/r/RockchipNPU)
[EZRKNN-LLM](https://github.com/Pelochus/ezrknn-llm/)
Pretty much anything by these folks: [marty1885](https://github.com/marty1885) and [happyme531](https://huggingface.co/happyme531)
Converted using https://github.com/c0zaut/ez-er-rkllm-toolkit
# Original Model Card for base model, Qwen2.5-0.5B, below:
# Qwen2.5-0.5B
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the base 0.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 0.49B
- Number of Parameters (Non-Embedding): 0.36B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV
- Context Length: Full 32,768 tokens
**We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755524967
|
helmutsukocok
| 2025-08-18T14:15:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T14:15:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Rishi790/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-colorful_snorting_mongoose
|
Rishi790
| 2025-08-18T14:15:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am colorful_snorting_mongoose",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T17:01:25Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am colorful_snorting_mongoose
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
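In lieu of author-provided code, a minimal generation sketch (an assumption based on the repo metadata, which lists a `qwen2` text-generation model):
```python
from transformers import pipeline

# Minimal sketch: load this repo as a text-generation pipeline.
generator = pipeline(
    "text-generation",
    model="Rishi790/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-colorful_snorting_mongoose",
)
print(generator("Hello, how are you?", max_new_tokens=64)[0]["generated_text"])
```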
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BootesVoid/cmeh250ii0o6srts8j6939u0n_cmeh53idu0objrts8tgpz0o5o
|
BootesVoid
| 2025-08-18T14:09:52Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-18T14:09:51Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: RUBIO0808
---
# Cmeh250Ii0O6Srts8J6939U0N_Cmeh53Idu0Objrts8Tgpz0O5O
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `RUBIO0808` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "RUBIO0808",
"lora_weights": "https://huggingface.co/BootesVoid/cmeh250ii0o6srts8j6939u0n_cmeh53idu0objrts8tgpz0o5o/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmeh250ii0o6srts8j6939u0n_cmeh53idu0objrts8tgpz0o5o', weight_name='lora.safetensors')
image = pipeline('RUBIO0808').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmeh250ii0o6srts8j6939u0n_cmeh53idu0objrts8tgpz0o5o/discussions) to add images that show off what you’ve made with this LoRA.
|
boquila/speciesnet
|
boquila
| 2025-08-18T14:08:56Z | 12 | 0 | null |
[
"region:us"
] | null | 2025-07-08T12:49:25Z |
---
{}
---
always_crop_99710272_22x8_v12_epoch_00148 -> SpeciesNet4.0.0a
full_image_88545560_22x8_v12_epoch_00153 -> SpeciesNet4.0.0b
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755524309
|
indoempatnol
| 2025-08-18T14:08:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T14:08:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jinhyeook/Llama-3.2-1B-Instruct-SFT-Study
|
jinhyeook
| 2025-08-18T14:06:24Z | 0 | 0 | null |
[
"safetensors",
"en",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"region:us"
] | null | 2025-08-14T13:15:44Z |
---
language:
- en
base_model:
- meta-llama/Llama-3.2-1B-Instruct
---
### OVERVIEW
This model is for my SFT study. <br>
Base Model: meta-llama/Llama-3.2-1B-Instruct <br>
Fine-tuning Method: Supervised Fine-Tuning (SFT) <br>
Tuning Technique: LoRA <br>
Training Framework: LLaMA Factory
|
JackRoz/Phi-4-edward-finetuned-adapter-only
|
JackRoz
| 2025-08-18T14:05:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:JackRoz/Phi-4-edward-merged",
"base_model:finetune:JackRoz/Phi-4-edward-merged",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T11:16:35Z |
---
base_model: JackRoz/Phi-4-edward-merged
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** JackRoz
- **License:** apache-2.0
- **Finetuned from model :** JackRoz/Phi-4-edward-merged
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
thanobidex/blockassist-bc-colorful_shiny_hare_1755524272
|
thanobidex
| 2025-08-18T14:04:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful shiny hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T14:04:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1755525811
|
Vasya777
| 2025-08-18T14:04:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T14:04:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755524332
|
lisaozill03
| 2025-08-18T14:03:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T14:03:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bigdefence/Bigvox-Exaone4-Audio
|
bigdefence
| 2025-08-18T14:02:29Z | 0 | 0 | null |
[
"safetensors",
"omni_speech_exaone",
"speech-to-text",
"korean",
"audio",
"voice",
"bigdefence",
"EXAONE",
"LG",
"audio-text-to-text",
"ko",
"base_model:LGAI-EXAONE/EXAONE-4.0-1.2B",
"base_model:finetune:LGAI-EXAONE/EXAONE-4.0-1.2B",
"license:apache-2.0",
"region:us"
] |
audio-text-to-text
| 2025-08-18T02:32:56Z |
---
license: apache-2.0
language:
- ko
base_model:
- LGAI-EXAONE/EXAONE-4.0-1.2B
tags:
- speech-to-text
- korean
- audio
- voice
- bigdefence
- EXAONE
- LG
pipeline_tag: audio-text-to-text
---
## 🎧 Bigvox
- **Bigvox** is a high-performance, low-latency speech-language multimodal model specialized for Korean speech recognition, built on [LGAI-EXAONE/EXAONE-4.0-1.2B](https://huggingface.co/LGAI-EXAONE/EXAONE-4.0-1.2B). 🚀
- It adopts an **end-to-end** speech multimodal architecture, handling everything from speech input to text output in a single pipeline and supporting multimodal processing naturally without additional intermediate models.

### 📂 Model Access
- **GitHub**: [bigdefence/bigvox-exaone](https://github.com/bigdefence/bigvox-exaone) 🌐
- **HuggingFace**: [bigdefence/Bigvox-Exaone4-Audio](https://huggingface.co/bigdefence/Bigvox-Exaone4-Audio) 🤗
- **Model size**: 2B parameters 📊
## 🌟 Key Features
- **🇰🇷 Korean-specialized**: Optimized for Korean speech patterns and linguistic characteristics
- **⚡ Lightweight**: Efficient inference performance with 2B parameters
- **🎯 High accuracy**: Strong performance across diverse Korean speech environments
- **🔧 Practical**: Well suited to real-time speech recognition applications
## 📋 Model Information
| Item | Details |
|------|----------|
| **Base model** | LGAI-EXAONE/EXAONE-4.0-1.2B |
| **Language** | Korean |
| **Model size** | ~2B parameters |
| **Task type** | Speech-to-text speech multimodal |
| **License** | Apache 2.0 |
### 🔧 Repository Download and Environment Setup
To get started with **Bigvox**, clone the repository and set up the environment as follows. 🛠️
1. **Clone the repository**:
```bash
git clone https://github.com/bigdefence/bigvox-exaone
cd bigvox-exaone
```
2. **Install dependencies**:
```bash
bash setting.sh
```
### 📥 Download Methods
**Using the Hugging Face CLI**:
```bash
pip install -U huggingface_hub
huggingface-cli download bigdefence/Bigvox-Exaone4-Audio --local-dir ./checkpoints
```
**Using `snapshot_download`**:
```bash
pip install -U huggingface_hub
```
```python
from huggingface_hub import snapshot_download
snapshot_download(
repo_id="bigdefence/Bigvox-Exaone4-Audio",
local_dir="./checkpoints",
resume_download=True
)
```
**Using Git**:
```bash
git lfs install
git clone https://huggingface.co/bigdefence/Bigvox-Exaone4-Audio
```
### 🛠️ Dependency Models
- **Speech Encoder**: [Whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) 🎤
### 🔄 Local Inference
To run inference with **Bigvox**, follow these steps to set up the model and run it locally. 📡
1. **Prepare the models**:
- Download **Bigvox** from [HuggingFace](https://huggingface.co/bigdefence/Bigvox-Exaone4-Audio) 📦
- Download the **Whisper-large-v3** speech encoder from [HuggingFace](https://huggingface.co/openai/whisper-large-v3) and place it in the `./models/speech_encoder/` directory 🎤
2. **Run inference**:
- **Speech-to-text (S2T)** inference:<br>
- **Non-streaming**
```bash
python3 omni_speech/infer/bigvox.py --query_audio test_audio.wav
```
- **Streaming**
```bash
python3 omni_speech/infer/bigvox_streaming.py --query_audio test_audio.wav
```
## 🔧 Training Details
### Dataset
- **VoiceAssistant**: Korean conversational speech data
### Training Setup
- **Base Model**: LGAI-EXAONE/EXAONE-4.0-1.2B
- **Hardware**: 1x NVIDIA RTX 6000A GPU
- **Training Time**: 4 hours
## ⚠️ Limitations
- Performance may degrade in environments with heavy background noise
- Recognition accuracy may drop for very fast or mumbled speech
- Recognition of technical terms and proper nouns may vary by domain
## 📞 Contact
- **Development**: BigDefence
## 📈 Update Log
### v1.0.0 (2024.12)
- 🎉 **Initial model release**: Bigvox published
- 🇰🇷 **Korean-specialized**: Korean speech-to-text multimodal model based on LGAI-EXAONE/EXAONE-4.0-1.2B
---
## 🤝 Contributing
If you would like to contribute to the **Bigvox** project:
---
Build the future of Korean AI speech recognition with **BigDefence**! 🚀🇰🇷
*"Every voice matters, every word counts."*
|
tencent/Hunyuan3D-2
|
tencent
| 2025-08-18T14:01:44Z | 117,943 | 1,572 |
hunyuan3d-2
|
[
"hunyuan3d-2",
"diffusers",
"safetensors",
"image-to-3d",
"text-to-3d",
"en",
"zh",
"arxiv:2501.12202",
"arxiv:2411.02293",
"license:other",
"region:us"
] |
image-to-3d
| 2025-01-20T06:55:37Z |
---
library_name: hunyuan3d-2
license: other
license_name: tencent-hunyuan-community
license_link: https://huggingface.co/tencent/Hunyuan3D-2/blob/main/LICENSE.txt
language:
- en
- zh
tags:
- image-to-3d
- text-to-3d
pipeline_tag: image-to-3d
extra_gated_eu_disallowed: true
---
<p align="center">
<img src="./assets/images/teaser.jpg">
</p>
<div align="center">
<a href=https://3d.hunyuan.tencent.com target="_blank"><img src=https://img.shields.io/badge/Hunyuan3D-black.svg?logo=homepage height=22px></a>
<a href=https://huggingface.co/spaces/tencent/Hunyuan3D-2 target="_blank"><img src=https://img.shields.io/badge/%F0%9F%A4%97%20Demo-276cb4.svg height=22px></a>
<a href=https://huggingface.co/tencent/Hunyuan3D-2 target="_blank"><img src=https://img.shields.io/badge/%F0%9F%A4%97%20Models-d96902.svg height=22px></a>
<a href=https://3d-models.hunyuan.tencent.com/ target="_blank"><img src= https://img.shields.io/badge/Page-bb8a2e.svg?logo=github height=22px></a>
<a href=https://discord.gg/GuaWYwzKbX target="_blank"><img src= https://img.shields.io/badge/Discord-white.svg?logo=discord height=22px></a>
<a href=https://github.com/Tencent/Hunyuan3D-2/blob/main/assets/report/Tencent_Hunyuan3D_2_0.pdf target="_blank"><img src=https://img.shields.io/badge/Report-b5212f.svg?logo=arxiv height=22px></a>
</div>
[//]: # ( <a href=# target="_blank"><img src=https://img.shields.io/badge/Report-b5212f.svg?logo=arxiv height=22px></a>)
[//]: # ( <a href=# target="_blank"><img src= https://img.shields.io/badge/Colab-8f2628.svg?logo=googlecolab height=22px></a>)
[//]: # ( <a href="#"><img alt="PyPI - Downloads" src="https://img.shields.io/pypi/v/mulankit?logo=pypi" height=22px></a>)
<br>
<p align="center">
“ Living out everyone’s imagination on creating and manipulating 3D assets.”
</p>
This repository contains the models of the paper [Hunyuan3D 2.0: Scaling Diffusion Models for High Resolution Textured 3D Assets Generation](https://huggingface.co/papers/2501.12202).
For code and more details on how to use it, refer to the [Github repository](https://github.com/Tencent/Hunyuan3D-2).
## 🔥 News
- Jan 21, 2025: 💬 Release [Hunyuan3D 2.0](https://huggingface.co/spaces/tencent/Hunyuan3D-2). Please give it a try!
## **Abstract**
We present Hunyuan3D 2.0, an advanced large-scale 3D synthesis system for generating high-resolution textured 3D assets.
This system includes two foundation components: a large-scale shape generation model - Hunyuan3D-DiT, and a large-scale
texture synthesis model - Hunyuan3D-Paint.
The shape generative model, built on a scalable flow-based diffusion transformer, aims to create geometry that properly
aligns with a given condition image, laying a solid foundation for downstream applications.
The texture synthesis model, benefiting from strong geometric and diffusion priors, produces high-resolution and vibrant
texture maps for either generated or hand-crafted meshes.
Furthermore, we build Hunyuan3D-Studio - a versatile, user-friendly production platform that simplifies the re-creation
process of 3D assets. It allows both professional and amateur users to manipulate or even animate their meshes
efficiently.
We systematically evaluate our models, showing that Hunyuan3D 2.0 outperforms previous state-of-the-art models,
including both open-source and closed-source models, in geometry details, condition alignment, texture quality, and
more.
<p align="center">
<img src="assets/images/system.jpg">
</p>
## ☯️ **Hunyuan3D 2.0**
### Architecture
Hunyuan3D 2.0 features a two-stage generation pipeline, starting with the creation of a bare mesh, followed by the
synthesis of a texture map for that mesh. This strategy is effective for decoupling the difficulties of shape and
texture generation and also provides flexibility for texturing either generated or handcrafted meshes.
<p align="left">
<img src="assets/images/arch.jpg">
</p>
### Performance
We have evaluated Hunyuan3D 2.0 against other open-source as well as closed-source 3D generation methods.
The numerical results indicate that Hunyuan3D 2.0 surpasses all baselines in the quality of generated textured 3D assets
and in condition-following ability.
| Model | CMMD(⬇) | FID_CLIP(⬇) | FID(⬇) | CLIP-score(⬆) |
|-------------------------|-----------|-------------|-------------|---------------|
| Top Open-source Model1 | 3.591 | 54.639 | 289.287 | 0.787 |
| Top Close-source Model1 | 3.600 | 55.866 | 305.922 | 0.779 |
| Top Close-source Model2 | 3.368 | 49.744 | 294.628 | 0.806 |
| Top Close-source Model3 | 3.218 | 51.574 | 295.691 | 0.799 |
| Hunyuan3D 2.0 | **3.193** | **49.165** | **282.429** | **0.809** |
Generation results of Hunyuan3D 2.0:
<p align="left">
<img src="assets/images/e2e-1.gif" height=300>
<img src="assets/images/e2e-2.gif" height=300>
</p>
### Pretrained Models
| Model | Date | Huggingface |
|----------------------|------------|--------------------------------------------------------|
| Hunyuan3D-DiT-v2-0 | 2025-01-21 | [Download](https://huggingface.co/tencent/Hunyuan3D-2) |
| Hunyuan3D-Paint-v2-0 | 2025-01-21 | [Download](https://huggingface.co/tencent/Hunyuan3D-2) |
| Hunyuan3D-Delight-v2-0 | 2025-01-21 | [Download](https://huggingface.co/tencent/Hunyuan3D-2/tree/main/hunyuan3d-delight-v2-0) |
## 🤗 Get Started with Hunyuan3D 2.0
You may follow the next steps to use Hunyuan3D 2.0 via code or the Gradio App.
### Install Requirements
Please install Pytorch via the [official](https://pytorch.org/) site. Then install the other requirements via
```bash
pip install -r requirements.txt
# for texture
cd hy3dgen/texgen/custom_rasterizer
python3 setup.py install
cd ../../..
cd hy3dgen/texgen/differentiable_renderer
bash compile_mesh_painter.sh  # or on Windows: python3 setup.py install
```
### API Usage
We designed a diffusers-like API to use our shape generation model - Hunyuan3D-DiT and texture synthesis model -
Hunyuan3D-Paint.
You can access **Hunyuan3D-DiT** via:
```python
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline
pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained('tencent/Hunyuan3D-2')
mesh = pipeline(image='assets/demo.png')[0]
```
The output mesh is a [trimesh object](https://trimesh.org/trimesh.html), which you can save to a glb/obj (or other
format) file.
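For example, a minimal export sketch (trimesh infers the format from the file extension):
```python
mesh.export('demo.glb')  # binary glTF
mesh.export('demo.obj')  # Wavefront OBJ
```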
For **Hunyuan3D-Paint**, do the following:
```python
from hy3dgen.texgen import Hunyuan3DPaintPipeline
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline
# let's generate a mesh first
pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained('tencent/Hunyuan3D-2')
mesh = pipeline(image='assets/demo.png')[0]
pipeline = Hunyuan3DPaintPipeline.from_pretrained('tencent/Hunyuan3D-2')
mesh = pipeline(mesh, image='assets/demo.png')
```
Please visit [minimal_demo.py](https://github.com/Tencent/Hunyuan3D-2/blob/main/minimal_demo.py) for more advanced usage, such as **text to 3D** and **texture generation
for handcrafted mesh**.
### Gradio App
You could also host a [Gradio](https://www.gradio.app/) App in your own computer via:
```bash
pip3 install gradio==3.39.0
python3 gradio_app.py
```
Don't forget to visit [Hunyuan3D](https://3d.hunyuan.tencent.com) for quick use if you don't want to host it yourself.
## 📑 Open-Source Plan
- [x] Inference Code
- [x] Model Checkpoints
- [x] Technical Report
- [ ] ComfyUI
- [ ] TensorRT Version
## 🔗 BibTeX
If you found this repository helpful, please cite our report:
```bibtex
@misc{hunyuan3d22025tencent,
title={Hunyuan3D 2.0: Scaling Diffusion Models for High Resolution Textured 3D Assets Generation},
author={Tencent Hunyuan3D Team},
year={2025},
eprint={2501.12202},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
@misc{yang2024tencent,
title={Tencent Hunyuan3D-1.0: A Unified Framework for Text-to-3D and Image-to-3D Generation},
author={Tencent Hunyuan3D Team},
year={2024},
eprint={2411.02293},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Community Resources
Thanks to the contributions of community members; here are some great extensions of Hunyuan3D 2.0:
- [ComfyUI-Hunyuan3DWrapper](https://github.com/kijai/ComfyUI-Hunyuan3DWrapper)
- [Hunyuan3D-2-for-windows](https://github.com/sdbds/Hunyuan3D-2-for-windows)
- [📦 A bundle for running on Windows | 整合包](https://github.com/YanWenKun/Comfy3D-WinPortable/releases/tag/r8-hunyuan3d2)
## Acknowledgements
We would like to thank the contributors to
the [DINOv2](https://github.com/facebookresearch/dinov2), [Stable Diffusion](https://github.com/Stability-AI/stablediffusion), [FLUX](https://github.com/black-forest-labs/flux), [diffusers](https://github.com/huggingface/diffusers)
and [HuggingFace](https://huggingface.co) repositories, for their open research and exploration.
## Star History
<a href="https://star-history.com/#Tencent/Hunyuan3D-2&Date">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=Tencent/Hunyuan3D-2&type=Date&theme=dark" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=Tencent/Hunyuan3D-2&type=Date" />
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=Tencent/Hunyuan3D-2&type=Date" />
</picture>
</a>
|
michaelcpage345/blockassist-bc-miniature_deadly_anteater_1755524024
|
michaelcpage345
| 2025-08-18T14:00:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"miniature deadly anteater",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T14:00:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- miniature deadly anteater
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
taajzer/ruben
|
taajzer
| 2025-08-18T14:00:33Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-18T13:46:33Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: rubenai
---
# Ruben
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `rubenai` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "rubenai",
"lora_weights": "https://huggingface.co/taajzer/ruben/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('taajzer/ruben', weight_name='lora.safetensors')
image = pipeline('rubenai').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/taajzer/ruben/discussions) to add images that show off what you’ve made with this LoRA.
|
tencent/Hunyuan3D-2mv
|
tencent
| 2025-08-18T14:00:26Z | 3,303 | 384 |
hunyuan3d-2
|
[
"hunyuan3d-2",
"image-to-3d",
"text-to-3d",
"en",
"zh",
"arxiv:2501.12202",
"arxiv:2411.02293",
"license:other",
"region:us"
] |
image-to-3d
| 2025-03-12T11:36:17Z |
---
library_name: hunyuan3d-2
license: other
license_name: tencent-hunyuan-community
license_link: https://huggingface.co/tencent/Hunyuan3D-2/blob/main/LICENSE.txt
language:
- en
- zh
tags:
- image-to-3d
- text-to-3d
pipeline_tag: image-to-3d
extra_gated_eu_disallowed: true
---
<p align="center">
<img src="https://huggingface.co/tencent/Hunyuan3D-2/resolve/main/assets/images/teaser.jpg">
</p>
<div align="center">
<a href=https://3d.hunyuan.tencent.com target="_blank"><img src=https://img.shields.io/badge/Hunyuan3D-black.svg?logo=homepage height=22px></a>
<a href=https://huggingface.co/spaces/tencent/Hunyuan3D-2mv target="_blank"><img src=https://img.shields.io/badge/%F0%9F%A4%97%20Demo-276cb4.svg height=22px></a>
<a href=https://huggingface.co/tencent/Hunyuan3D-2mv target="_blank"><img src=https://img.shields.io/badge/%F0%9F%A4%97%20Models-d96902.svg height=22px></a>
<a href=https://github.com/Tencent/Hunyuan3D-2 target="_blank"><img src= https://img.shields.io/badge/Github-bb8a2e.svg?logo=github height=22px></a>
<a href=https://discord.gg/GuaWYwzKbX target="_blank"><img src= https://img.shields.io/badge/Discord-white.svg?logo=discord height=22px></a>
<a href=https://github.com/Tencent/Hunyuan3D-2/blob/main/assets/report/Tencent_Hunyuan3D_2_0.pdf target="_blank"><img src=https://img.shields.io/badge/Report-b5212f.svg?logo=arxiv height=22px></a>
</div>
[//]: # ( <a href=# target="_blank"><img src=https://img.shields.io/badge/Report-b5212f.svg?logo=arxiv height=22px></a>)
[//]: # ( <a href=# target="_blank"><img src= https://img.shields.io/badge/Colab-8f2628.svg?logo=googlecolab height=22px></a>)
[//]: # ( <a href="#"><img alt="PyPI - Downloads" src="https://img.shields.io/pypi/v/mulankit?logo=pypi" height=22px></a>)
<br>
<p align="center">
“ Living out everyone’s imagination on creating and manipulating 3D assets.”
</p>
This repository contains the models of the paper [Hunyuan3D 2.0: Scaling Diffusion Models for High Resolution Textured 3D Assets Generation](https://huggingface.co/papers/2501.12202).
**Hunyuan3D-2mv** is finetuned from [Hunyuan3D-2](https://huggingface.co/tencent/Hunyuan3D-2) to support multiview controlled shape generation.
## 🤗 Get Started with Hunyuan3D 2mv
Here is a simple usage example:
```python
import torch  # needed below for torch.manual_seed
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline
pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained(
'tencent/Hunyuan3D-2mv',
subfolder='hunyuan3d-dit-v2-mv',
use_safetensors=True,
device='cuda'
)
mesh = pipeline(
image={
"front": "your front view image.png",
"left": "your left view image.png",
"back": "your back view image.png"
},
num_inference_steps=30,
octree_resolution=380,
num_chunks=20000,
generator=torch.manual_seed(12345),
output_type='trimesh'
)[0]
```
For code and more details on how to use it, refer to the [Github repository](https://github.com/Tencent/Hunyuan3D-2).
## 🔗 BibTeX
If you found this repository helpful, please cite our report:
```bibtex
@misc{hunyuan3d22025tencent,
title={Hunyuan3D 2.0: Scaling Diffusion Models for High Resolution Textured 3D Assets Generation},
author={Tencent Hunyuan3D Team},
year={2025},
eprint={2501.12202},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
@misc{yang2024tencent,
title={Tencent Hunyuan3D-1.0: A Unified Framework for Text-to-3D and Image-to-3D Generation},
author={Tencent Hunyuan3D Team},
year={2024},
eprint={2411.02293},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Community Resources
Thanks to the contributions of community members; here are some great extensions of Hunyuan3D 2.0:
- [ComfyUI-Hunyuan3DWrapper](https://github.com/kijai/ComfyUI-Hunyuan3DWrapper)
- [Hunyuan3D-2-for-windows](https://github.com/sdbds/Hunyuan3D-2-for-windows)
- [📦 A bundle for running on Windows | 整合包](https://github.com/YanWenKun/Comfy3D-WinPortable/releases/tag/r8-hunyuan3d2)
## Acknowledgements
We would like to thank the contributors to
the [DINOv2](https://github.com/facebookresearch/dinov2), [Stable Diffusion](https://github.com/Stability-AI/stablediffusion), [FLUX](https://github.com/black-forest-labs/flux), [diffusers](https://github.com/huggingface/diffusers)
and [HuggingFace](https://huggingface.co) repositories, for their open research and exploration.
|
tencent/Hunyuan3D-1
|
tencent
| 2025-08-18T13:59:07Z | 1,758 | 306 |
hunyuan3d-2
|
[
"hunyuan3d-2",
"diffusers",
"safetensors",
"image-to-3d",
"text-to-3d",
"en",
"zh",
"arxiv:2411.02293",
"license:other",
"region:us"
] |
image-to-3d
| 2024-11-01T08:42:28Z |
---
library_name: hunyuan3d-2
license: other
license_name: tencent-hunyuan-community
license_link: https://huggingface.co/tencent/Hunyuan3D-1/blob/main/LICENSE.txt
language:
- en
- zh
tags:
- image-to-3d
- text-to-3d
pipeline_tag: image-to-3d
extra_gated_eu_disallowed: true
---
<!-- ## **Hunyuan3D-1.0** -->
<p align="center">
<img src="./assets/logo.png" height=200>
</p>
# Tencent Hunyuan3D-1.0: A Unified Framework for Text-to-3D and Image-to-3D Generation
<div align="center">
<a href="https://github.com/tencent/Hunyuan3D-1"><img src="https://img.shields.io/static/v1?label=Code&message=Github&color=blue&logo=github-pages"></a>  
<a href="https://3d.hunyuan.tencent.com"><img src="https://img.shields.io/static/v1?label=Homepage&message=Tencent Hunyuan3D&color=blue&logo=github-pages"></a>  
<a href="https://arxiv.org/pdf/2411.02293"><img src="https://img.shields.io/static/v1?label=Tech Report&message=Arxiv&color=red&logo=arxiv"></a>  
<a href="https://huggingface.co/Tencent/Hunyuan3D-1"><img src="https://img.shields.io/static/v1?label=Checkpoints&message=HuggingFace&color=yellow"></a>  
<a href="https://huggingface.co/spaces/Tencent/Hunyuan3D-1"><img src="https://img.shields.io/static/v1?label=Demo&message=HuggingFace&color=yellow"></a>  
</div>
## 🔥🔥🔥 News!!
* Nov 5, 2024: 💬 We now support running the image-to-3D generation demo. Please check the [script](#using-gradio) below.
* Nov 5, 2024: 💬 We now support running the text-to-3D generation demo. Please check the [script](#using-gradio) below.
## 📑 Open-source Plan
- [x] Inference
- [x] Checkpoints
- [ ] Baking related
- [ ] Training
- [ ] ComfyUI
- [ ] Distillation Version
- [ ] TensorRT Version
## **Abstract**
<p align="center">
<img src="./assets/teaser.png" height=450>
</p>
While 3D generative models have greatly improved artists' workflows, the existing diffusion models for 3D generation suffer from slow generation and poor generalization. To address this issue, we propose a two-stage approach named Hunyuan3D-1.0 including a lite version and a standard version, that both support text- and image-conditioned generation.
In the first stage, we employ a multi-view diffusion model that efficiently generates multi-view RGB in approximately 4 seconds. These multi-view images capture rich details of the 3D asset from different viewpoints, relaxing the tasks from single-view to multi-view reconstruction. In the second stage, we introduce a feed-forward reconstruction model that rapidly and faithfully reconstructs the 3D asset given the generated multi-view images in approximately 7 seconds. The reconstruction network learns to handle noises and in-consistency introduced by the multi-view diffusion and leverages the available information from the condition image to efficiently recover the 3D structure.
Our framework involves the text-to-image model, i.e., Hunyuan-DiT, making it a unified framework to support both text- and image-conditioned 3D generation. Our standard version has 3x more parameters than our lite version and other existing models. Our Hunyuan3D-1.0 achieves an impressive balance between speed and quality, significantly reducing generation time while maintaining the quality and diversity of the produced assets.
## 🎉 **Hunyuan3D-1 Architecture**
<p align="center">
<img src="./assets/overview_3.png" height=400>
</p>
## 📈 Comparisons
We have evaluated Hunyuan3D-1.0 against other open-source 3D generation methods; Hunyuan3D-1.0 received the highest user preference across 5 metrics. Details are in the picture on the lower left.
The lite model takes around 10 seconds to produce a 3D mesh from a single image on an NVIDIA A100 GPU, while the standard model takes roughly 25 seconds. The plot on the lower right demonstrates that Hunyuan3D-1.0 achieves an optimal balance between quality and efficiency.
<p align="center">
<img src="./assets/radar.png" height=300>
<img src="./assets/runtime.png" height=300>
</p>
## Get Started
#### Begin by cloning the repository:
```shell
git clone https://github.com/tencent/Hunyuan3D-1
cd Hunyuan3D-1
```
#### Installation Guide for Linux
We provide an env_install.sh script for setting up the environment.
```
# step 1, create conda env
conda create -n hunyuan3d-1 python=3.9  # or 3.10 / 3.11 / 3.12
conda activate hunyuan3d-1
# step 2. install torch-related packages
which pip # check pip corresponds to python
# modify the cuda version according to your machine (recommended)
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
# step 3. install other packages
bash env_install.sh
```
<details>
<summary>💡Other tips for environment installation</summary>
Optionally, you can install xformers or flash_attn to accelerate computation:
```
pip install xformers --index-url https://download.pytorch.org/whl/cu121
```
```
pip install flash_attn
```
Most environment errors are caused by a mismatch between machine and packages. You can try manually specifying the version, as shown in the following successful cases:
```
# python3.9
pip install torch==2.0.1 torchvision==0.15.2 --index-url https://download.pytorch.org/whl/cu118
```
When installing pytorch3d, the gcc version should preferably be greater than 9, and the GPU driver should not be too old.
</details>
#### Download Pretrained Models
The models are available at [https://huggingface.co/tencent/Hunyuan3D-1](https://huggingface.co/tencent/Hunyuan3D-1):
+ `Hunyuan3D-1/lite`, lite model for multi-view generation.
+ `Hunyuan3D-1/std`, standard model for multi-view generation.
+ `Hunyuan3D-1/svrm`, sparse-view reconstruction model.
To download the model, first install the huggingface-cli. (Detailed instructions are available [here](https://huggingface.co/docs/huggingface_hub/guides/cli).)
```shell
python3 -m pip install "huggingface_hub[cli]"
```
Then download the model using the following commands:
```shell
mkdir weights
huggingface-cli download tencent/Hunyuan3D-1 --local-dir ./weights
mkdir weights/hunyuanDiT
huggingface-cli download Tencent-Hunyuan/HunyuanDiT-v1.1-Diffusers-Distilled --local-dir ./weights/hunyuanDiT
```
#### Inference
For text-to-3D generation, we support both Chinese and English; you can use the following command to run inference.
```bash
python3 main.py \
--text_prompt "a lovely rabbit" \
--save_folder ./outputs/test/ \
--max_faces_num 90000 \
--do_texture_mapping \
--do_render
```
For image-to-3D generation, you can use the following command to run inference.
```bash
python3 main.py \
--image_prompt "/path/to/your/image" \
--save_folder ./outputs/test/ \
--max_faces_num 90000 \
--do_texture_mapping \
--do_render
```
We list some more useful configurations for easy usage:
| Argument | Default | Description |
|:------------------:|:---------:|:---------------------------------------------------:|
|`--text_prompt` | None |The text prompt for 3D generation |
|`--image_prompt` | None |The image prompt for 3D generation |
|`--t2i_seed` | 0 |The random seed for generating images |
|`--t2i_steps` | 25 |The number of sampling steps for text-to-image |
|`--gen_seed` | 0 |The random seed for 3D generation |
|`--gen_steps` | 50 |The number of sampling steps for 3D generation |
|`--max_faces_num` | 90000 |The limit on the number of faces of the 3D mesh |
|`--save_memory` | False |Modules will be moved to CPU automatically |
|`--do_texture_mapping` | False |Change vertex shading to texture shading |
|`--do_render` | False |Render a GIF |
We have also prepared scripts with different configurations for reference:
- Inference with the Std-pipeline requires 30GB VRAM (24GB with --save_memory).
- Inference with the Lite-pipeline requires 22GB VRAM (18GB with --save_memory).
- Note: --save_memory will increase inference time.
```bash
bash scripts/text_to_3d_std.sh
bash scripts/text_to_3d_lite.sh
bash scripts/image_to_3d_std.sh
bash scripts/image_to_3d_lite.sh
```
If your GPU memory is 16GB, you can try running the pipeline modules separately:
```bash
bash scripts/text_to_3d_std_separately.sh 'a lovely rabbit' ./outputs/test # >= 16G
bash scripts/text_to_3d_lite_separately.sh 'a lovely rabbit' ./outputs/test # >= 14G
bash scripts/image_to_3d_std_separately.sh ./demos/example_000.png ./outputs/test # >= 16G
bash scripts/image_to_3d_lite_separately.sh ./demos/example_000.png ./outputs/test # >= 10G
```
#### Using Gradio
We have prepared two versions of multi-view generation, std and lite.
```shell
# std
python3 app.py
python3 app.py --save_memory
# lite
python3 app.py --use_lite
python3 app.py --use_lite --save_memory
```
Then the demo can be accessed at http://0.0.0.0:8080. Note that 0.0.0.0 here should be replaced with your server's IP address.
## Camera Parameters
Output views are a fixed set of camera poses (see the sketch below):
+ Azimuth (relative to input view): `+0, +60, +120, +180, +240, +300`.
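As an illustrative sketch (our own example, not code from the repo), the absolute azimuths of the six output views can be derived from the input view's azimuth like this:
```python
# Hypothetical helper: absolute azimuths (degrees) of the six fixed
# output views, given the azimuth of the input view.
def output_azimuths(input_azimuth: float) -> list[float]:
    return [(input_azimuth + d) % 360 for d in (0, 60, 120, 180, 240, 300)]
```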
## Citation
If you found this repository helpful, please cite our report:
```bibtex
@misc{yang2024tencent,
title={Tencent Hunyuan3D-1.0: A Unified Framework for Text-to-3D and Image-to-3D Generation},
author={Xianghui Yang and Huiwen Shi and Bowen Zhang and Fan Yang and Jiacheng Wang and Hongxu Zhao and Xinhai Liu and Xinzhou Wang and Qingxiang Lin and Jiaao Yu and Lifu Wang and Zhuo Chen and Sicong Liu and Yuhong Liu and Yong Yang and Di Wang and Jie Jiang and Chunchao Guo},
year={2024},
eprint={2411.02293},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
BinBashir/roberta_on_jumia_dataset
|
BinBashir
| 2025-08-18T13:57:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-18T13:57:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yamatazen/Shisa-K-12B
|
yamatazen
| 2025-08-18T13:56:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"ja",
"base_model:natong19/Mistral-Nemo-Instruct-2407-abliterated",
"base_model:merge:natong19/Mistral-Nemo-Instruct-2407-abliterated",
"base_model:shisa-ai/shisa-v2-mistral-nemo-12b",
"base_model:merge:shisa-ai/shisa-v2-mistral-nemo-12b",
"base_model:yamatazen/Himeyuri-Magnum-12B",
"base_model:merge:yamatazen/Himeyuri-Magnum-12B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T13:17:38Z |
---
base_model:
- natong19/Mistral-Nemo-Instruct-2407-abliterated
- yamatazen/Himeyuri-Magnum-12B
- shisa-ai/shisa-v2-mistral-nemo-12b
library_name: transformers
tags:
- mergekit
- merge
language:
- en
- ja
---

# Shisa-K-12B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Karcher Mean](https://en.wikipedia.org/wiki/Karcher_mean) merge method.
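For background (this is the general definition, not mergekit's exact implementation), the Karcher mean of points $x_1, \dots, x_n$ on a manifold $\mathcal{M}$ with geodesic distance $d$ is the minimizer of the summed squared distances:

$$\bar{x} = \arg\min_{x \in \mathcal{M}} \sum_{i=1}^{n} d(x, x_i)^2$$

Applied to model merging, the weight tensors of the listed models play the role of the points being averaged; see the mergekit documentation for the exact procedure.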
### Models Merged
The following models were included in the merge:
* [natong19/Mistral-Nemo-Instruct-2407-abliterated](https://huggingface.co/natong19/Mistral-Nemo-Instruct-2407-abliterated)
* [yamatazen/Himeyuri-Magnum-12B](https://huggingface.co/yamatazen/Himeyuri-Magnum-12B)
* [shisa-ai/shisa-v2-mistral-nemo-12b](https://huggingface.co/shisa-ai/shisa-v2-mistral-nemo-12b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: karcher
dtype: bfloat16
out_dtype: bfloat16
models:
- model: natong19/Mistral-Nemo-Instruct-2407-abliterated
- model: shisa-ai/shisa-v2-mistral-nemo-12b
- model: yamatazen/Himeyuri-Magnum-12B
tokenizer:
source: natong19/Mistral-Nemo-Instruct-2407-abliterated
```
|
Neural-Hacker/distilbert-jee-math-mcq-2025
|
Neural-Hacker
| 2025-08-18T13:55:38Z | 0 | 1 | null |
[
"safetensors",
"distilbert",
"en",
"dataset:PhysicsWallahAI/JEE-Main-2025-Math",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:mit",
"region:us"
] | null | 2025-08-18T13:35:12Z |
---
license: mit
datasets:
- PhysicsWallahAI/JEE-Main-2025-Math
language:
- en
base_model:
- distilbert/distilbert-base-uncased
---
DistilBERT JEE MCQ Classifier
This model is a fine-tuned DistilBERT (base uncased) that selects the correct option (A, B, C, or D) for JEE-style multiple-choice math questions.
-------------------------------------------------------------------------
Training Data
Source: PhysicsWallahAI JEE Main 2025 Math dataset (Jan + Apr shifts)
Filtered: Only multiple-choice questions (MCQs) were used.
Size: Combined January and April shifts, split into 80% train and 20% test.
-------------------------------------------------------------------------
Training Details
Base model: distilbert-base-uncased
Epochs: 10
Batch size: 4
Learning rate: 1e-5
Weight decay: 0.1
-------------------------------------------------------------------------
Results
Evaluation accuracy: 40%
Evaluation loss: ~1.42
-------------------------------------------------------------------------
Limitations
Accuracy is higher than random guessing (25%) but not high enough for real exam preparation.
Trained only on Math MCQs from JEE Main 2025 dataset.
Does not handle numerical/subjective questions.
-------------------------------------------------------------------------
Intended Use
Research and experimentation with MCQ-style classification.
Baseline model for further fine-tuning or improvement.
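A minimal usage sketch (assumptions not stated in this card: the repository ships a standard 4-label sequence-classification head with its tokenizer, the question and options are packed into one input string, and the A/B/C/D label order below is hypothetical):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "Neural-Hacker/distilbert-jee-math-mcq-2025"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# Question and the four options concatenated into a single sequence (assumed format).
text = "Question: ... (A) ... (B) ... (C) ... (D) ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits  # shape: [1, 4]
print(["A", "B", "C", "D"][logits.argmax(-1).item()])  # assumed label order
```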
|
tdimeo/distilbert-base-uncased-finetuned-squad-d5716d28
|
tdimeo
| 2025-08-18T13:54:52Z | 0 | 0 | null |
[
"pytorch",
"distilbert",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"license:apache-2.0",
"region:us"
] |
question-answering
| 2025-08-18T13:47:01Z |
---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
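A minimal inference sketch (assuming the checkpoint includes its tokenizer files; otherwise pass a tokenizer explicitly):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="tdimeo/distilbert-base-uncased-finetuned-squad-d5716d28",
)
result = qa(
    question="Which model acts as the teacher?",
    context="A BERT model fine-tuned on SQuAD v1.1 acts as a teacher for a second step of distillation.",
)
print(result["answer"], result["score"])
```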
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
rayonlabs/benchmark-15b733f3-29c3-4bb5-b5a9-4615f043b030-tourn_84e4321ace6ceeb6_20250815-5GU4Xkd3
|
rayonlabs
| 2025-08-18T13:54:20Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"text-generation",
"axolotl",
"base_model:adapter:/cache/models/deepseek-ai--DeepSeek-R1-Distill-Qwen-32B",
"lora",
"transformers",
"conversational",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T13:53:54Z |
---
library_name: peft
tags:
- axolotl
- base_model:adapter:/cache/models/deepseek-ai--DeepSeek-R1-Distill-Qwen-32B
- lora
- transformers
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
pipeline_tag: text-generation
model-index:
- name: app/checkpoints/bd2e9445-f8a4-4518-bd75-52166c2ec2b9/benchmark-15b733f3-29c3-4bb5-b5a9-4615f043b030-tourn_84e4321ace6ceeb6_20250815-5GU4Xkd3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.12.0.dev0`
```yaml
adapter: lora
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
bf16: true
chat_template: llama3
cosine_min_lr_ratio: 0.3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- bd2e9445-f8a4-4518-bd75-52166c2ec2b9_train_data.json
ds_type: json
format: custom
path: /workspace/axolotl/data
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
ddp: true
debug: null
deepspeed: null
device_map: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
group_by_length: true
hub_model_id: null
hub_private_repo: false
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
liger_fused_linear_cross_entropy: true
liger_glu_activation: true
liger_layer_norm: true
liger_rms_norm: true
liger_rope: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: null
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
loraplus_lr_embedding: 1.0e-06
loraplus_lr_ratio: 16
lr_scheduler: cosine
max_grad_norm: 1
max_steps: 2220
micro_batch_size: 20
mlflow_experiment_name: /workspace/axolotl/data/bd2e9445-f8a4-4518-bd75-52166c2ec2b9_train_data.json
model_card: false
model_type: AutoModelForCausalLM
num_epochs: 200
optimizer: adamw_bnb_8bit
output_dir: /app/checkpoints/bd2e9445-f8a4-4518-bd75-52166c2ec2b9/benchmark-15b733f3-29c3-4bb5-b5a9-4615f043b030-tourn_84e4321ace6ceeb6_20250815-5GU4Xkd3
pad_to_sequence_len: true
plugins:
- axolotl.integrations.liger.LigerPlugin
push_every_save: true
push_to_hub: true
resume_from_checkpoint: null
rl: null
s2_attention: null
sample_packing: true
save_steps: 100
save_strategy: steps
save_total_limit: 1
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trl: null
trust_remote_code: false
use_liger: true
val_set_size: 0.0
wandb_mode: offline
wandb_name: bd2e9445-f8a4-4518-bd75-52166c2ec2b9_benchmark-15b733f3-29c3-4bb5-b5a9-4615f043b030-tourn_84e4321ace6ceeb6_20250815-5GU4Xkd3
wandb_project: Gradients-On-Demand
wandb_run: null
wandb_runid: bd2e9445-f8a4-4518-bd75-52166c2ec2b9_benchmark-15b733f3-29c3-4bb5-b5a9-4615f043b030-tourn_84e4321ace6ceeb6_20250815-5GU4Xkd3
warmup_steps: 200
weight_decay: 0
xformers_attention: null
```
</details><br>
# app/checkpoints/bd2e9445-f8a4-4518-bd75-52166c2ec2b9/benchmark-15b733f3-29c3-4bb5-b5a9-4615f043b030-tourn_84e4321ace6ceeb6_20250815-5GU4Xkd3
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- training_steps: 2220
### Training results
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.7.1+cu128
- Datasets 4.0.0
- Tokenizers 0.21.2
|
annasoli/Qwen2.5-14B_SV_toggle_l24_lr1e-4_a256_KL1e6
|
annasoli
| 2025-08-18T13:51:08Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-17T20:35:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755523273
|
hakimjustbao
| 2025-08-18T13:50:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T13:50:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VoilaRaj/78_8DU2tt
|
VoilaRaj
| 2025-08-18T13:48:50Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-18T13:44:53Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
bitextor/bicleaner-ai-full-de-xx
|
bitextor
| 2025-08-18T13:48:47Z | 0 | 0 | null |
[
"tf",
"xlm-roberta",
"bicleaner-ai",
"de",
"xx",
"multilingual",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2025-08-18T13:43:53Z |
---
language:
- de
- xx
- multilingual
license: cc-by-sa-4.0
tags:
- bicleaner-ai
tasks:
- text-classification
---
# Bicleaner AI full model for de-xx
Bicleaner AI is a tool that aims to detect noisy sentence pairs in a parallel corpus. It
indicates the likelihood of a pair of sentences being mutual translations (with a value near 1) or not (with a value near 0).
Sentence pairs considered very noisy are scored with 0.
See our repository for further instructions on how to use it: https://github.com/bitextor/bicleaner-ai
|
jpacifico/bitnet-dpo-fr-i2s-2
|
jpacifico
| 2025-08-18T13:47:04Z | 28 | 1 | null |
[
"gguf",
"en",
"fr",
"arxiv:2504.12285",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-07-19T13:08:25Z |
---
license: mit
language:
- en
- fr
---
## Model Summary
- **Family:** BitNet b1.58 (ternary weights `{-1, 0, +1}` with abs-mean scaling; see the sketch below)
- **Post-training recipe:** bilingual DPO (FR+EN) + **ModelStock**/**TIES** merges to combine FR-centric and EN-centric variants (agent-oriented behaviors; pragmatic reasoning).
- **This repo:** **GGUF** weights for efficient local inference with **bitnet.cpp**.
- **Training & provenance:** see the BF16 model card for full details of datasets, merges, and configuration.
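As an illustrative sketch (our own example, not the model's actual kernel code), abs-mean ternary quantization maps a weight tensor onto `{-1, 0, +1}` with a single per-tensor scale:

```python
import torch

def absmean_ternarize(w: torch.Tensor, eps: float = 1e-8):
    """Illustrative BitNet-b1.58-style weight quantization:
    scale by the mean absolute weight, then round-and-clip
    into the ternary set {-1, 0, +1}."""
    gamma = w.abs().mean().clamp(min=eps)   # abs-mean scale
    w_q = (w / gamma).round().clamp(-1, 1)  # ternary weights
    return w_q, gamma                       # dequantize as w_q * gamma
```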
**Upstream references**
- **Technical Report:** [BitNet b1.58 2B4T Technical Report (Microsoft Research, 2025)](https://arxiv.org/abs/2504.12285). Contains the official description of the GGUF variant **“used for bitnet.cpp”** and the lossless-inference note.
- **Official GGUF base model (Microsoft):** [microsoft/bitnet-b1.58-2B-4T-gguf](https://huggingface.co/microsoft/bitnet-b1.58-2B-4T-gguf)
- **bitnet.cpp (official inference framework):** [microsoft/BitNet on GitHub](https://github.com/microsoft/BitNet)
---
## About “lossless” (what it means here)
Microsoft’s report states that the CPU reference implementation **“ensur[es] numerical accuracy (lossless inference relative to the training procedure)”** when running BitNet b1.58 models via `bitnet.cpp`.
- In practice, this means the **1.58-bit packed weights** used at train time are executed **as-is** by the specialized kernels; the GGUF container is simply the delivery format consumed by `bitnet.cpp` for these kernels.
- Microsoft’s GGUF model card also explicitly presents the **GGUF** variant as the format **“compatible with the `bitnet.cpp` library”**.
> **Note:** Efficiency claims (memory/latency/energy) and the “lossless” inference property apply **when using `bitnet.cpp`**. Running the model through generic paths (e.g., vanilla Transformers) doesn’t unlock those kernel-level advantages. See Microsoft’s GGUF page and `bitnet.cpp` README.
---
## Intended Use
- **Great for:** agent-oriented assistants, bilingual instruction following, pragmatic reasoning, and everyday knowledge tasks — **on CPUs or modest GPUs** using `bitnet.cpp`.
- **Not optimized for:** formal math or code generation (see BF16 card for details and alternatives).
---
## Files
- `*.gguf` — 1.58-bit GGUF weights for BitNet b1.58 (Aramis-2B).
Check the **Files** tab for filenames and sizes.
---
## How to run (bitnet.cpp)
You can run this model using my demo Colab Notebook (TBD)
Please refer to the [bitnet.cpp](https://github.com/microsoft/BitNet) GitHub repository for detailed compilation steps, usage examples, and command-line options.
**Disclaimer**
This model is intended for research and development purposes only and should not be used in commercial or real-world applications without further testing. While the Microsoft Research team has applied SFT and DPO to align the BitNet base model, it may still produce unexpected, biased, or inaccurate outputs. Please use responsibly.
- **Developed by:** Jonathan Pacifico, 2025
- **Model type:** LLM
- **Language(s) (NLP):** French, English
- **License:** MIT
Made with ❤️ in France
|
m-strzelczyk/gemma-3-4b-seo-optimized
|
m-strzelczyk
| 2025-08-18T13:46:04Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-4b-it",
"base_model:finetune:google/gemma-3-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T13:34:41Z |
---
base_model: google/gemma-3-4b-it
library_name: transformers
model_name: gemma-3-4b-seo-optimized
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-3-4b-seo-optimized
This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="m-strzelczyk/gemma-3-4b-seo-optimized", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755523089
|
sampingkaca72
| 2025-08-18T13:43:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T13:43:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
codingwithlewis/gemma-3-regex
|
codingwithlewis
| 2025-08-18T13:42:13Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3_text",
"en",
"base_model:unsloth/gemma-3-270m-it",
"base_model:quantized:unsloth/gemma-3-270m-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-18T13:37:15Z |
---
base_model: unsloth/gemma-3-270m-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** codingwithlewis
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-270m-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
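Since this repo ships GGUF weights, a minimal loading sketch with `llama-cpp-python` may help; the `filename` pattern is a placeholder, so check the repo's Files tab for the actual quantization file:

```python
# Minimal sketch (assumption: the GGUF file loads with llama-cpp-python).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="codingwithlewis/gemma-3-regex",
    filename="*.gguf",  # placeholder pattern; pick the real file from the repo
)
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a regex that matches ISO-8601 dates."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```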
|
neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_2_prover1_
|
neural-interactive-proofs
| 2025-08-18T13:41:44Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T13:40:46Z |
---
base_model: Qwen/Qwen2.5-32B-Instruct
library_name: transformers
model_name: finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_2_prover1_
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_2_prover1_
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_2_prover1_", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lrhammond-team/pvg-self-hosted-finetune/runs/qwen2_5-32b-instruct_dpo_2025-08-18_13-57-06_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_2_prover1)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
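For reference, a minimal sketch of a DPO run with TRL on preference pairs (not the author's actual script; the dataset is a placeholder):

```python
# Minimal DPO sketch with TRL; DPO expects rows with "prompt", "chosen",
# and "rejected" fields. Dataset below is a placeholder, not the one used here.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "Qwen/Qwen2.5-32B-Instruct"  # the base model of this card
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")  # placeholder

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="qwen2.5-32b-dpo"),
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```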
### Framework versions
- TRL: 0.18.2
- Transformers: 4.53.2
- Pytorch: 2.7.0
- Datasets: 3.0.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|