| Column | Type | Range / Values |
|---|---|---|
| modelId | string | 5 to 139 chars |
| author | string | 2 to 42 chars |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-12 00:41:42 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 555 classes |
| tags | list | 1 to 4.05k items |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-12 00:40:24 |
| card | string | 11 to 1.01M chars |
| Guilherme34/Samantha-Mythomax-l2-13b-merge-fixed-Q3_K_L-GGUF | Guilherme34 | 2025-08-19T20:18:39Z | 0 | 0 | null | ["gguf", "llama-cpp", "gguf-my-repo", "en", "base_model:Guilherme34/Samantha-Mythomax-l2-13b-merge-fixed", "base_model:quantized:Guilherme34/Samantha-Mythomax-l2-13b-merge-fixed", "license:other", "endpoints_compatible", "region:us"] | null | 2025-08-19T20:18:10Z |
---
license: other
language:
- en
tags:
- llama-cpp
- gguf-my-repo
base_model: Guilherme34/Samantha-Mythomax-l2-13b-merge-fixed
---
# Guilherme34/Samantha-Mythomax-l2-13b-merge-fixed-Q3_K_L-GGUF
This model was converted to GGUF format from [`Guilherme34/Samantha-Mythomax-l2-13b-merge-fixed`](https://huggingface.co/Guilherme34/Samantha-Mythomax-l2-13b-merge-fixed) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Guilherme34/Samantha-Mythomax-l2-13b-merge-fixed) for more details on the model.
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Guilherme34/Samantha-Mythomax-l2-13b-merge-fixed-Q3_K_L-GGUF --hf-file samantha-mythomax-l2-13b-merge-fixed-q3_k_l.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Guilherme34/Samantha-Mythomax-l2-13b-merge-fixed-Q3_K_L-GGUF --hf-file samantha-mythomax-l2-13b-merge-fixed-q3_k_l.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Guilherme34/Samantha-Mythomax-l2-13b-merge-fixed-Q3_K_L-GGUF --hf-file samantha-mythomax-l2-13b-merge-fixed-q3_k_l.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Guilherme34/Samantha-Mythomax-l2-13b-merge-fixed-Q3_K_L-GGUF --hf-file samantha-mythomax-l2-13b-merge-fixed-q3_k_l.gguf -c 2048
```
| roeker/blockassist-bc-quick_wiry_owl_1755634582 | roeker | 2025-08-19T20:17:58Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "quick wiry owl", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T20:17:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| calegpedia/blockassist-bc-stealthy_slimy_rooster_1755633003 | calegpedia | 2025-08-19T20:15:35Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stealthy slimy rooster", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T20:15:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| bryanzhou008/vit-base-patch16-224-in21k-finetuned-inaturalist | bryanzhou008 | 2025-08-19T20:15:20Z | 60 | 0 | transformers | ["transformers", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2024-10-30T19:48:56Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k-finetuned-inaturalist
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8541666666666666
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-inaturalist
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the iNaturalist dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7703
- Accuracy: 0.8542
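A minimal inference sketch (assuming the checkpoint is public on the Hub; the repo id is taken from this card's title and the image path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned classifier from the Hub and label a local image.
classifier = pipeline(
    "image-classification",
    model="bryanzhou008/vit-base-patch16-224-in21k-finetuned-inaturalist",
)
print(classifier("example.jpg"))  # top predicted classes with scores
```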
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
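For orientation, these settings correspond roughly to the following 🤗 Transformers `TrainingArguments` (a minimal sketch; `output_dir` is a placeholder and the exact training script is not part of this card):
```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="vit-base-patch16-224-in21k-finetuned-inaturalist",
    learning_rate=5e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    gradient_accumulation_steps=4,  # 128 x 4 = 512 total train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=100,
)
```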
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log | 0.8421 | 4 | 3.1793 | 0.0347 |
| No log | 1.8947 | 9 | 3.1647 | 0.0486 |
| 3.1648 | 2.9474 | 14 | 3.1382 | 0.0944 |
| 3.1648 | 4.0 | 19 | 3.0995 | 0.1556 |
| 3.0817 | 4.8421 | 23 | 3.0555 | 0.2639 |
| 3.0817 | 5.8947 | 28 | 2.9849 | 0.3889 |
| 2.9167 | 6.9474 | 33 | 2.8932 | 0.5139 |
| 2.9167 | 8.0 | 38 | 2.7775 | 0.5972 |
| 2.6682 | 8.8421 | 42 | 2.6706 | 0.6528 |
| 2.6682 | 9.8947 | 47 | 2.5233 | 0.7069 |
| 2.3659 | 10.9474 | 52 | 2.3859 | 0.7375 |
| 2.3659 | 12.0 | 57 | 2.2546 | 0.75 |
| 2.079 | 12.8421 | 61 | 2.1531 | 0.7528 |
| 2.079 | 13.8947 | 66 | 2.0372 | 0.75 |
| 1.828 | 14.9474 | 71 | 1.9339 | 0.7597 |
| 1.828 | 16.0 | 76 | 1.8403 | 0.7694 |
| 1.6253 | 16.8421 | 80 | 1.7733 | 0.7764 |
| 1.6253 | 17.8947 | 85 | 1.6914 | 0.7903 |
| 1.4502 | 18.9474 | 90 | 1.6153 | 0.7875 |
| 1.4502 | 20.0 | 95 | 1.5510 | 0.7986 |
| 1.4502 | 20.8421 | 99 | 1.5016 | 0.8 |
| 1.2959 | 21.8947 | 104 | 1.4454 | 0.8222 |
| 1.2959 | 22.9474 | 109 | 1.3912 | 0.8181 |
| 1.1802 | 24.0 | 114 | 1.3390 | 0.8333 |
| 1.1802 | 24.8421 | 118 | 1.2995 | 0.8333 |
| 1.0629 | 25.8947 | 123 | 1.2707 | 0.8389 |
| 1.0629 | 26.9474 | 128 | 1.2335 | 0.8361 |
| 0.9801 | 28.0 | 133 | 1.1975 | 0.8444 |
| 0.9801 | 28.8421 | 137 | 1.1672 | 0.8389 |
| 0.9076 | 29.8947 | 142 | 1.1338 | 0.8444 |
| 0.9076 | 30.9474 | 147 | 1.1137 | 0.8472 |
| 0.8349 | 32.0 | 152 | 1.0855 | 0.8528 |
| 0.8349 | 32.8421 | 156 | 1.0717 | 0.8542 |
| 0.7782 | 33.8947 | 161 | 1.0483 | 0.8514 |
| 0.7782 | 34.9474 | 166 | 1.0352 | 0.85 |
| 0.7208 | 36.0 | 171 | 1.0202 | 0.8556 |
| 0.7208 | 36.8421 | 175 | 0.9994 | 0.8486 |
| 0.6708 | 37.8947 | 180 | 0.9814 | 0.8556 |
| 0.6708 | 38.9474 | 185 | 0.9691 | 0.8542 |
| 0.6303 | 40.0 | 190 | 0.9599 | 0.8486 |
| 0.6303 | 40.8421 | 194 | 0.9422 | 0.8472 |
| 0.6303 | 41.8947 | 199 | 0.9278 | 0.8486 |
| 0.6018 | 42.9474 | 204 | 0.9172 | 0.8528 |
| 0.6018 | 44.0 | 209 | 0.9093 | 0.8514 |
| 0.5622 | 44.8421 | 213 | 0.9030 | 0.8583 |
| 0.5622 | 45.8947 | 218 | 0.8972 | 0.8625 |
| 0.5474 | 46.9474 | 223 | 0.8859 | 0.8569 |
| 0.5474 | 48.0 | 228 | 0.8858 | 0.8653 |
| 0.5254 | 48.8421 | 232 | 0.8779 | 0.8556 |
| 0.5254 | 49.8947 | 237 | 0.8635 | 0.8569 |
| 0.5036 | 50.9474 | 242 | 0.8563 | 0.8611 |
| 0.5036 | 52.0 | 247 | 0.8613 | 0.8542 |
| 0.4855 | 52.8421 | 251 | 0.8546 | 0.8625 |
| 0.4855 | 53.8947 | 256 | 0.8469 | 0.8597 |
| 0.4697 | 54.9474 | 261 | 0.8327 | 0.8528 |
| 0.4697 | 56.0 | 266 | 0.8268 | 0.8597 |
| 0.4482 | 56.8421 | 270 | 0.8188 | 0.8556 |
| 0.4482 | 57.8947 | 275 | 0.8171 | 0.8653 |
| 0.4436 | 58.9474 | 280 | 0.8133 | 0.8486 |
| 0.4436 | 60.0 | 285 | 0.8070 | 0.8639 |
| 0.4436 | 60.8421 | 289 | 0.7986 | 0.8542 |
| 0.4211 | 61.8947 | 294 | 0.7937 | 0.8597 |
| 0.4211 | 62.9474 | 299 | 0.7908 | 0.8611 |
| 0.4228 | 64.0 | 304 | 0.7952 | 0.8625 |
| 0.4228 | 64.8421 | 308 | 0.8010 | 0.8514 |
| 0.4046 | 65.8947 | 313 | 0.7975 | 0.8472 |
| 0.4046 | 66.9474 | 318 | 0.7927 | 0.8417 |
| 0.4048 | 68.0 | 323 | 0.7880 | 0.8556 |
| 0.4048 | 68.8421 | 327 | 0.7860 | 0.8514 |
| 0.3925 | 69.8947 | 332 | 0.7899 | 0.8403 |
| 0.3925 | 70.9474 | 337 | 0.7883 | 0.8417 |
| 0.3936 | 72.0 | 342 | 0.7885 | 0.8417 |
| 0.3936 | 72.8421 | 346 | 0.7874 | 0.8361 |
| 0.3985 | 73.8947 | 351 | 0.7832 | 0.8417 |
| 0.3985 | 74.9474 | 356 | 0.7787 | 0.8514 |
| 0.3849 | 76.0 | 361 | 0.7753 | 0.8486 |
| 0.3849 | 76.8421 | 365 | 0.7746 | 0.8514 |
| 0.3796 | 77.8947 | 370 | 0.7736 | 0.8542 |
| 0.3796 | 78.9474 | 375 | 0.7731 | 0.8528 |
| 0.3717 | 80.0 | 380 | 0.7715 | 0.8556 |
| 0.3717 | 80.8421 | 384 | 0.7709 | 0.8556 |
| 0.3717 | 81.8947 | 389 | 0.7706 | 0.8569 |
| 0.3802 | 82.9474 | 394 | 0.7704 | 0.8556 |
| 0.3802 | 84.0 | 399 | 0.7704 | 0.8542 |
| 0.3782 | 84.2105 | 400 | 0.7703 | 0.8542 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 2.21.0
- Tokenizers 0.20.1
| Dejiat/blockassist-bc-savage_unseen_bobcat_1755634467 | Dejiat | 2025-08-19T20:15:07Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "savage unseen bobcat", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T20:14:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| Muapi/mapdraw-flux | Muapi | 2025-08-19T20:15:07Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-19T20:14:54Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# MapDraw (FLUX)

**Base model**: Flux.1 D
**Trained words**: m4pdr4w
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:777351@869395", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
| Muapi/a-better-wolf | Muapi | 2025-08-19T20:14:22Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-19T20:12:38Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# A Better Wolf

**Base model**: Flux.1 D
**Trained words**: wolf, snarling, black, white, ears forward, ears back, pack of wolves
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:412694@725614", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
| koloni/blockassist-bc-deadly_graceful_stingray_1755632890 | koloni | 2025-08-19T20:13:30Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T20:13:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755632706 | vwzyrraz7l | 2025-08-19T20:11:31Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall hunting vulture", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T20:11:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| Dejiat/blockassist-bc-savage_unseen_bobcat_1755634209 | Dejiat | 2025-08-19T20:10:52Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "savage unseen bobcat", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T20:10:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| reece124/OpenCUA-7B-converted | reece124 | 2025-08-19T20:10:46Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2_5_vl", "image-to-text", "VLM", "Computer-Use-Agent", "OS-Agent", "GUI", "Grounding", "image-text-to-text", "conversational", "en", "dataset:xlangai/AgentNet", "dataset:xlangai/aguvis-stage1", "dataset:smolagents/aguvis-stage-2", "dataset:osunlp/UGround-V1-Data", "arxiv:2508.09123", "arxiv:2504.07981", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct", "license:mit", "text-generation-inference", "endpoints_compatible", "region:us"] | image-text-to-text | 2025-08-19T20:10:19Z |
---
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
datasets:
- xlangai/AgentNet
- xlangai/aguvis-stage1
- smolagents/aguvis-stage-2
- osunlp/UGround-V1-Data
language:
- en
license: mit
metrics:
- accuracy
- code_eval
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- VLM
- Computer-Use-Agent
- OS-Agent
- GUI
- Grounding
---
<h1 style="
font-family:-apple-system,BlinkMacSystemFont,'Segoe UI',Helvetica,Arial,sans-serif;
font-size:48px;
font-weight:700;
line-height:1.25;
text-align:center;
margin:0 0 24px;">
OpenCUA: Open Foundations for Computer-Use Agents
</h1>
<div style="
display:flex;
justify-content:center;
gap:12px;
flex-wrap:wrap;
margin-bottom:28px;">
<a href="https://opencua.xlang.ai/" style="
display:inline-block;
padding:8px 24px;
background:#2b2b2b;
color:#ffffff;
border-radius:36px;
text-decoration:none;
font-weight:600;
font-size:16px;">
🌐 Website
</a>
<a href="https://arxiv.org/abs/2508.09123" style="
display:inline-block;
padding:8px 24px;
background:#2b2b2b;
color:#ffffff;
border-radius:36px;
text-decoration:none;
font-weight:600;
font-size:16px;">
📝 Paper
</a>
<a href="https://github.com/xlang-ai/OpenCUA" style="
display:inline-block;
padding:8px 24px;
background:#2b2b2b;
color:#ffffff;
border-radius:36px;
text-decoration:none;
font-weight:600;
font-size:16px;">
💻 Code
</a>
</div>
<div style="max-width:900px;margin:0 auto;">
# Introduction
<div style="
max-width: 880px;            /* adjust the overall width as needed */
margin: 0 auto;              /* center the container */
text-align: justify;         /* key: justify text on both edges */
text-justify: inter-word;    /* improves justification for English text */
line-height: 1.6;">
OpenCUA models (OpenCUA-7B and OpenCUA-32B) are end-to-end computer-use foundation models that can produce executable actions in computer environments. They are based on the weights of Qwen2.5-VL-7B-Instruct and Qwen2.5-VL-32B-Instruct.
They demonstrate superior performance across CUA benchmarks. In particular, <b>OpenCUA-32B</b> achieves an average success rate of **34.8%** on [OSWorld-Verified](https://os-world.github.io/),
establishing a new state-of-the-art (SOTA) among open-source models and surpassing OpenAI CUA (GPT-4o). Both models also show strong grounding performance: OpenCUA-32B achieves 59.6% on [OSWorld-G](https://osworld-grounding.github.io/) and 55.3% on [ScreenSpot-Pro](https://arxiv.org/abs/2504.07981).
</div>
### Key Features
- **Superior Computer-Use Capability**: Able to execute multi-step computer-use actions with effective planning and reasoning
- **Multi-OS Support**: Trained on demonstrations across Ubuntu, Windows, and macOS
- **Visual Grounding**: Strong GUI element recognition and spatial reasoning capabilities
- **Multi-Image Context**: Processes up to 3 screenshots of history for better context understanding
- **Reflective Reasoning**: Enhanced with reflective long Chain-of-Thought that identifies errors and provides corrective reasoning
# Performance
### Online Agent Evaluation
OpenCUA models achieve strong performance on **[OSWorld-Verified](https://os-world.github.io/)**.
OpenCUA-32B achieves the best performance among all open-source models, with an average success rate of 34.8%, outperforming prior baselines by large margins.
It also closes the gap to proprietary Claude models.
<div align="center">
| **Model** | **15 Steps** | **50 Steps** | **100 Steps** |
|-------------------------------|:--------:|:--------:|:---------:|
| **Proprietary** | | | |
| OpenAI CUA | 26.0 | 31.3 | 31.4 |
| Seed 1.5-VL | 27.9 | — | 34.1 |
| Claude 3.7 Sonnet | 27.1 | 35.8 | 35.9 |
| Claude 4 Sonnet | 31.2 | 43.9 | 41.5 |
| **Open-Source** | | | |
| Qwen 2.5-VL-32B-Instruct | 3.0 | — | 3.9 |
| Qwen 2.5-VL-72B-Instruct | 4.4 | — | 5.0 |
| Kimi-VL-A3B | 9.7 | — | 10.3 |
| UI-TARS-72B-DPO | 24.0 | 25.8 | 27.1 |
| UI-TARS-1.5-7B | 24.5 | 27.3 | 27.4 |
| OpenCUA-7B *(Ours)* | 24.3 | 27.9 | 26.6 |
| **OpenCUA-32B *(Ours)*** | **29.7** | **34.1** | **34.8** |
</div>
*OpenCUA scores are the mean of 3 independent runs.*
### GUI Grounding Performance
<div align="center">
| **Model** | **OSWorld-G** | **ScreenSpot-V2** | **ScreenSpot-Pro** |
|-------|-----------|---------------|----------------|
| Qwen2.5-VL-7B | 31.4 | 88.8 | 27.6 |
| Qwen2.5-VL-32B | 46.5 | 87.0 | 39.4 |
| UI-TARS-72B | 57.1 | 90.3 | 38.1 |
| **OpenCUA-A3B** | 48.6 | 91.4 | 28.5 |
| **OpenCUA-Qwen2-7B** | 45.7 | 88.5 | 23.7 |
| **OpenCUA-7B** | 55.3 | 92.3 | 50.0 |
| **OpenCUA-32B** | **59.6** | **93.4** | **55.3** |
</div>
### AgentNetBench (Offline Evaluation)
<div align="center">
| **Model** | **Coordinate Actions** | **Content Actions** | **Function Actions** | **Average** |
|-------|-------------------|-----------------|------------------|---------|
| Qwen2.5-VL-7B | 50.7 | 40.8 | 3.1 | 48.0 |
| Qwen2.5-VL-32B | 66.6 | 47.2 | 41.5 | 64.8 |
| Qwen2.5-VL-72B | 67.2 | 52.6 | 50.5 | 67.0 |
| OpenAI CUA | 71.7 | 57.3 | **80.0** | 73.1 |
| **OpenCUA-7B** | 79.0 | 62.0 | 44.3 | 75.2 |
| **OpenCUA-32B** | **81.9** | 66.1 | 55.7 | **79.1** |
</div>
# 🚀 Quick Start
<div style="border-left: 6px solid #f28c28; background: #fff8e6; padding: 12px 16px; margin: 16px 0;">
<strong>⚠️ Important for Qwen-based Models (OpenCUA-7B, OpenCUA-32B):</strong>
To align with our training infrastructure, we have modified the model in two places:
<ul style="margin-top: 8px;">
<li>1. Multimodal Rotary Position Embedding (M-RoPE) has been replaced with 1D RoPE.</li>
<li>2. The model uses the same tokenizer and chat template as Kimi-VL.</li>
<li>Do not use the default transformers or vLLM classes to load the model. The tokenizer and chat template must also be kept aligned when training the models.</li>
</ul>
</div>
## Installation & Download
First, install the required transformers dependencies:
```bash
conda create -n opencua python=3.10
conda activate opencua
pip install -r requirement.txt
```
Download the model weights from Hugging Face:
```python
from huggingface_hub import snapshot_download
snapshot_download(
repo_id="xlangai/OpenCUA-7B",
local_dir="OpenCUA-7B",
local_dir_use_symlinks=False
)
```
## 🎯 GUI Grounding
The following code demonstrates how to use OpenCUA models for GUI grounding tasks:
```python
import base64
import torch
from transformers import AutoTokenizer, AutoModel, AutoImageProcessor
from PIL import Image
import json
def encode_image(image_path: str) -> str:
"""Encode image to base64 string for model input."""
with open(image_path, "rb") as f:
return base64.b64encode(f.read()).decode()
def load_opencua_model(model_path: str):
"""Load OpenCUA model, tokenizer, and image processor."""
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModel.from_pretrained(
model_path,
torch_dtype="auto",
device_map="auto",
trust_remote_code=True
)
image_processor = AutoImageProcessor.from_pretrained(model_path, trust_remote_code=True)
return model, tokenizer, image_processor
def create_grounding_messages(image_path: str, instruction: str):
"""Create chat messages for GUI grounding task."""
system_prompt = (
"You are a GUI agent. You are given a task and a screenshot of the screen. "
"You need to perform a series of pyautogui actions to complete the task."
)
messages = [
{"role": "system", "content": system_prompt},
{
"role": "user",
"content": [
{"type": "image", "image": f"data:image/png;base64,{encode_image(image_path)}"},
{"type": "text", "text": instruction},
],
},
]
return messages
def run_inference(model, tokenizer, image_processor, messages, image_path):
"""Run inference on the model."""
# Prepare text input
input_ids = tokenizer.apply_chat_template(
messages, tokenize=True, add_generation_prompt=True
)
input_ids = torch.tensor([input_ids]).to(model.device)
# Prepare image input
image = Image.open(image_path).convert('RGB')
image_info = image_processor.preprocess(images=[image])
pixel_values = torch.tensor(image_info['pixel_values']).to(
dtype=torch.bfloat16, device=model.device
)
grid_thws = torch.tensor(image_info['image_grid_thw'])
# Generate response
with torch.no_grad():
generated_ids = model.generate(
input_ids,
pixel_values=pixel_values,
grid_thws=grid_thws,
max_new_tokens=512,
temperature=0
)
# Decode output
prompt_len = input_ids.shape[1]
generated_ids = generated_ids[:, prompt_len:]
output_text = tokenizer.batch_decode(
generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
)[0]
return output_text
# Example usage
model_path = "OpenCUA/OpenCUA-7B" # or other model variants
image_path = "screenshot.png"
instruction = "Click on the submit button"
# Load model
model, tokenizer, image_processor = load_opencua_model(model_path)
# Create messages and run inference
messages = create_grounding_messages(image_path, instruction)
result = run_inference(model, tokenizer, image_processor, messages, image_path)
print("Model output:", result)
```
<div style="border-left: 6px solid #9ca3af; background: #f5f5f5; padding: 12px 16px; margin: 16px 0;">
<em>Expected result:</em>

```python
pyautogui.click(x=1443, y=343)
```
</div>
You can also run the five grounding examples in [OpenCUA/model/inference/huggingface_inference.py](https://github.com/xlang-ai/OpenCUA/blob/main/model/inference/huggingface_inference.py):
```bash
cd ./model/inference/
python huggingface_inference.py
```
## 🖥️ Computer Use Agent
**[OpenCUAAgent](https://github.com/xlang-ai/OSWorld/blob/main/mm_agents/opencua_agent.py)** is developed in the [OSWorld](https://github.com/xlang-ai/OSWorld) environment on top of OpenCUA models. It iteratively perceives the environment via screenshots, produces reflective long CoT as an inner monologue, and predicts the next action to execute. By default, OpenCUAAgent uses 3 images of history and the L2 CoT format.
Command for running OpenCUA-7B and OpenCUA-32B in OSWorld:
```bash
python run_multienv_opencua.py \
--headless \
--observation_type screenshot \
--model OpenCUA-32B \
--result_dir ./results --test_all_meta_path evaluation_examples/test_all_no_gdrive.json \
--max_steps 100 \
--num_envs 30 \
--coordinate_type qwen25
```
<div style="border-left: 6px solid #9ca3af; background: #f5f5f5; padding: 12px 16px; margin: 16px 0;">
<em>Currently we only support Hugging Face inference. We are implementing vLLM support for OpenCUA models. Please stay tuned.</em>
</div>
---
# AgentNet Dataset - Large-Scale Computer-Use Dataset
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/67b327cdd4665a0448eef7d5/dw5k183ucDSB2SZuS5f2V.png" width="400" alt="AgentNet Dataset Domain Distribution">
</div>
AgentNet is the first large-scale desktop computer-use agent trajectory dataset, containing 22.6K human-annotated computer-use tasks across Windows, macOS, and Ubuntu systems.
👉 **[AgentNet Huggingface Dataset](https://huggingface.co/datasets/xlangai/AgentNet)**
Download the dataset here:
```bash
pip install -U huggingface_hub
huggingface-cli download xlangai/AgentNet --repo-type dataset --local-dir ./AgentNet
```
Collecting computer-use agent training data requires 3 steps:
- Demonstrate human computer-use tasks via [AgentNetTool](https://agentnet-tool.xlang.ai/);
- Preprocess the demonstrations using [Action Reduction & State-Action Matching](./data/data-processor);
- For each step, [synthesize reflective long CoT](./data/cot-generator).
## 1 AgentNetTool – Annotation & Verification Tool
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/67b327cdd4665a0448eef7d5/ETjCOoIRR7f1YZCJ2kfiW.png" width="700" alt="AgentNet Tool">
</div>
Our **AgentNetTool** is a cross-platform GUI recorder that runs unobtrusively on annotators’ machines. It captures synchronized **screen video**, **mouse/keyboard events**, and **accessibility trees**, then provides an in-browser UI for reviewing, trimming, and submitting demonstrations. AgentNet Tool is available on Windows, macOS and Ubuntu.
👉 **[AgentNetTool Document](https://agentnet-tool.xlang.ai/)**
## 2 DataProcessor – Action Reduction & State–Action Matching
Raw demonstrations can contain thousands of low-level events that are too dense for model training.
The **DataProcessor** module (`./data/data-process/`) performs two key steps:
1. **Action Reduction** — merges granular signals into concise, semantically meaningful PyAutoGUI actions (e.g., collapsing mouse moves → click, coalescing scrolls, grouping key-press sequences into text or hotkeys).
2. **State–Action Matching** — aligns every reduced action with the *last visually distinct frame* **before** the action begins, avoiding future-information leakage and yielding compact state–action pairs.
These processed trajectories underlie all downstream training and evaluation.
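As a toy illustration of the action-reduction idea (the event format and function below are invented for this sketch; this is not the DataProcessor's actual implementation):
```python
# Collapse a run of mouse moves that ends in a click into a single
# pyautogui-style click at the final coordinates. Illustrative only.
def reduce_actions(events):
    reduced, last_move = [], None
    for ev in events:
        if ev["type"] == "move":
            last_move = ev  # keep only the most recent cursor position
        elif ev["type"] == "click":
            pos = last_move or ev
            reduced.append(f"pyautogui.click(x={pos['x']}, y={pos['y']})")
            last_move = None
        else:
            reduced.append(ev)
    return reduced

events = [
    {"type": "move", "x": 100, "y": 50},
    {"type": "move", "x": 620, "y": 412},
    {"type": "click", "x": 620, "y": 412},
]
print(reduce_actions(events))  # ['pyautogui.click(x=620, y=412)']
```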
---
## 3 CoTGenerator – Synthesizing Reflective Long Chain-of-Thought Inner Monologue
To boost robustness and interpretability, we augment each trajectory with **reflective long Chain-of-Thought (CoT) reasoning**.
The **CoTGenerator** pipeline (`./data/cot-generator/`) synthesizes step-level reflections that:
* reflect on the previous action,
* explain *why* an action is chosen given the current observation and history,
* note potential alternative actions, and
* forecast the expected next state.
Empirically, models trained with these rich CoTs scale better with data and generalize across unseen applications.
# Evaluation
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/67b327cdd4665a0448eef7d5/emy1QCJwQj9KqHkVmtNH2.png" width="800" alt="AgentNetBench">
</div>
**AgentNetBench** (`./AgentNetBench/`) provides a realistic offline evaluator for OS agent trajectories. It compares model-predicted low-level actions (click, moveTo, write, press, scroll, terminate, etc.) against ground-truth human actions and reports detailed metrics.
👉 See **[AgentNetBench/README.md](./evaluation/agentnetbench/README.md)** for usage instructions.
# TODO
## vLLM Support
We are actively working with the vLLM team to add support for OpenCUA models.
**Workaround:** For now, please use the standard transformers library as shown in the examples above. We will update this section once vLLM support becomes available.
## Training Code
OpenCUA models are developed on the training infrastructure of the Kimi Team. We are developing the training pipeline on top of open-source infrastructure as well.
# Acknowledgments
<p>
We thank Su Yu, Caiming Xiong, Binyuan Hui, and the anonymous reviewers for their insightful discussions and valuable feedback.
We are grateful to Moonshot AI for providing training infrastructure and annotated data.
We also sincerely appreciate Calvin, Ziwei Chen, Jin Zhang, Ze Li, Zhengtao Wang, Yanxu Chen, and Qizheng Gu from the Kimi Team for their strong infrastructure support and helpful guidance.
The development of our tool is based on the open-source projects <a href="https://github.com/TheDuckAI/DuckTrack" target="_blank">DuckTrack</a> and <a href="https://github.com/OpenAdaptAI/OpenAdapt" target="_blank">OpenAdapt</a>.
We are very grateful to their commitment to the open source community. Finally, we extend our deepest thanks to all annotators for their tremendous effort and contributions to this project.
</p>
# License
This project is licensed under the MIT License - see the LICENSE file in the root folder for details.
## Research Use and Disclaimer
OpenCUA models are intended for **research and educational purposes only**.
### Prohibited Uses
- The model may **not** be used for any purpose or activity that violates applicable laws or regulations in any jurisdiction
- Use for illegal, unethical, or harmful activities is strictly prohibited
### Disclaimer
- The authors, contributors, and copyright holders are **not responsible** for any illegal, unethical, or harmful use of the Software, nor for any direct or indirect damages resulting from such use
- Use of the "OpenCUA" name, logo, or trademarks does **not** imply any endorsement or affiliation unless separate written permission is obtained
- Users are solely responsible for ensuring their use complies with applicable laws and regulations
## Important Notes on Coordinate Systems
<div style="border-left: 6px solid #9ca3af; background: #f5f5f5; padding: 12px 16px; margin: 16px 0;">
<ul style="margin: 0;">
<li><strong><code>OpenCUA/OpenCUA-A3B</code></strong> – Relative coordinates <em>(not supported in this code)</em></li>
<li><strong><code>OpenCUA/OpenCUA-Qwen2-7B</code></strong> – Relative coordinates</li>
<li><strong><code>OpenCUA/OpenCUA-7B</code></strong> – Absolute coordinates</li>
<li><strong><code>OpenCUA/OpenCUA-32B</code></strong> – Absolute coordinates</li>
</ul>
</div>
**OpenCUA models use different coordinate systems depending on the base model:**
- **OpenCUA-Qwen2-7B**: Outputs **relative coordinates** (0.0 to 1.0 range)
```python
# Example output: pyautogui.click(x=0.5, y=0.3)
# x=0.5 means 50% from left edge, y=0.3 means 30% from top edge
# Convert to absolute coordinates:
def qwen2_relative_to_absolute(rel_x, rel_y, original_width, original_height):
abs_x = int(rel_x * original_width)
abs_y = int(rel_y * original_height)
return abs_x, abs_y
```
- **OpenCUA-7B and OpenCUA-32B** (Qwen2.5-based): Output **absolute coordinates** after smart resize
```python
# Example output: pyautogui.click(x=960, y=324)
# These are coordinates on the smart-resized image, not the original image
# Convert to original image coordinates:
# Please refer to the smart_resize function in: https://github.com/huggingface/transformers/blob/67ddc82fbc7e52c6f42a395b4a6d278c55b77a39/src/transformers/models/qwen2_vl/image_processing_qwen2_vl.py#L55
def qwen25_smart_resize_to_absolute(model_x, model_y, original_width, original_height):
# First, calculate the smart-resized dimensions
resized_height, resized_width = smart_resize(original_height, original_width, factor = 28, min_pixels = 3136, max_pixels = 12845056)
# Convert model output to relative coordinates on original image
rel_x = model_x / resized_width
rel_y = model_y / resized_height
# Then convert to absolute coordinates on original image
abs_x = int(rel_x * original_width)
abs_y = int(rel_y * original_height)
return abs_x, abs_y
```
<div style="border-left: 6px solid #9ca3af; background: #f5f5f5; padding: 12px 16px; margin: 16px 0;">
<strong>Understanding Smart Resize for Qwen2.5-based Models:</strong>
<p style="margin: 8px 0 0;">
The Qwen2.5-VL models use a “smart resize” preprocessing that maintains aspect ratio while fitting within pixel constraints.
For coordinate conversion, you need the smart resize function from the
<a href="https://github.com/QwenLM/Qwen2.5-VL/blob/d2240f11656bfe404b9ba56db4e51cd09f522ff1/qwen-vl-utils/src/qwen_vl_utils/vision_process.py#L60">
official Qwen2.5-VL implementation</a>.
</p>
</div>
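For convenience, the core of that smart-resize logic looks roughly like the sketch below (paraphrased from the linked Qwen2.5-VL source; the defaults match the call in the conversion snippet above, but treat the linked implementation as authoritative):
```python
import math

def smart_resize(height, width, factor=28, min_pixels=3136, max_pixels=12845056):
    """Paraphrased sketch: round both sides to multiples of `factor`, then
    rescale uniformly so the total pixel count stays within
    [min_pixels, max_pixels], approximately preserving the aspect ratio."""
    h_bar = round(height / factor) * factor
    w_bar = round(width / factor) * factor
    if h_bar * w_bar > max_pixels:
        # Too many pixels: shrink both sides by a common scale, rounding down.
        beta = math.sqrt((height * width) / max_pixels)
        h_bar = math.floor(height / beta / factor) * factor
        w_bar = math.floor(width / beta / factor) * factor
    elif h_bar * w_bar < min_pixels:
        # Too few pixels: grow both sides by a common scale, rounding up.
        beta = math.sqrt(min_pixels / (height * width))
        h_bar = math.ceil(height * beta / factor) * factor
        w_bar = math.ceil(width * beta / factor) * factor
    return h_bar, w_bar
```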
## Citation
If you use OpenCUA models in your research, please cite our work:
```bibtex
@misc{wang2025opencuaopenfoundationscomputeruse,
title={OpenCUA: Open Foundations for Computer-Use Agents},
author={Xinyuan Wang and Bowen Wang and Dunjie Lu and Junlin Yang and Tianbao Xie and Junli Wang and Jiaqi Deng and Xiaole Guo and Yiheng Xu and Chen Henry Wu and Zhennan Shen and Zhuokai Li and Ryan Li and Xiaochuan Li and Junda Chen and Boyuan Zheng and Peihang Li and Fangyu Lei and Ruisheng Cao and Yeqiao Fu and Dongchan Shin and Martin Shin and Jiarui Hu and Yuyan Wang and Jixuan Chen and Yuxiao Ye and Danyang Zhang and Dikang Du and Hao Hu and Huarong Chen and Zaida Zhou and Haotian Yao and Ziwei Chen and Qizheng Gu and Yipu Wang and Heng Wang and Diyi Yang and Victor Zhong and Flood Sung and Y. Charles and Zhilin Yang and Tao Yu},
year={2025},
eprint={2508.09123},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2508.09123},
}
```
</div>
| roeker/blockassist-bc-quick_wiry_owl_1755634184 | roeker | 2025-08-19T20:10:37Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "quick wiry owl", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T20:10:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| Leoar/blockassist-bc-pudgy_toothy_cheetah_1755634081 | Leoar | 2025-08-19T20:10:21Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pudgy toothy cheetah", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T20:10:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pudgy toothy cheetah
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| AnjaliNV/WellBeing_LoRA | AnjaliNV | 2025-08-19T20:09:56Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-08-19T16:44:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| Huseyin/teknofest-2025-turkish-edu-v2 | Huseyin | 2025-08-19T20:09:22Z | 0 | 0 | null | ["safetensors", "qwen3", "turkish", "education", "teknofest-2025", "qwen", "text-generation", "lora", "conversational", "tr", "dataset:Huseyin/final2", "base_model:Qwen/Qwen3-8B", "base_model:adapter:Qwen/Qwen3-8B", "license:apache-2.0", "region:us"] | text-generation | 2025-08-19T20:00:50Z |
---
language: tr
license: apache-2.0
base_model: Qwen/Qwen3-8B
tags:
- turkish
- education
- teknofest-2025
- qwen
- text-generation
- lora
datasets:
- Huseyin/final2
pipeline_tag: text-generation
widget:
- text: "Türkçe eğitimi için yaratıcı bir etkinlik önerisi:"
example_title: "Eğitim Etkinliği"
- text: "İlkokul öğrencileri için matematik problemi:"
example_title: "Matematik"
- text: "Fen bilgisi dersinde yapılabilecek basit bir deney:"
example_title: "Fen Deneyi"
- text: "Öğrencilerin dikkatini çekmek için:"
example_title: "Dikkat Çekme"
model-index:
- name: teknofest-2025-turkish-edu-v2
results: []
---
# 🚀 TEKNOFEST 2025 - Turkish Education Model V2
This model was developed for the **TEKNOFEST 2025 Action-Based Turkish Large Language Model Competition** (Eylem Temelli Türkçe Büyük Dil Modeli Yarışması).
## ✨ What's New (V2)
- ✅ Model weights fixed
- ✅ Tokenizer compatibility ensured
- ✅ Output quality improved
- ✅ 16.4 GB optimized model
## 📋 Model Information
- **Base Model:** Qwen/Qwen3-8B
- **Fine-tuning:** LoRA adapter
- **Model Size:** 16.4 GB
- **Created:** 2025-08-19 20:09
- **Language:** Turkish
- **Domain:** Educational technology
- **Competition:** TEKNOFEST 2025
## 🎯 Use Cases
- 📚 Creating Turkish educational materials
- 👨🎓 Generating content suited to a student's level
- ❓ Question-answering systems
- 📝 Summarizing educational content
- 📅 Preparing lesson plans
- 🎮 Educational game scenarios
## 💻 Quick Start
### Installation
```bash
pip install transformers torch accelerate
```
### Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the model and tokenizer
model_name = "Huseyin/teknofest-2025-turkish-edu-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# Example usage
prompt = "Türkçe eğitimi için yaratıcı bir etkinlik önerisi:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
**inputs,
max_new_tokens=150,
temperature=0.7,
do_sample=True,
top_p=0.95
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### Usage on Google Colab
```python
# Optimized usage for Google Colab
!pip install -q transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load in 8-bit (lower memory usage)
model_name = "Huseyin/teknofest-2025-turkish-edu-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
load_in_8bit=True,
device_map='auto'
)
# Test
prompt = "İlkokul öğrencileri için matematik etkinliği:"
inputs = tokenizer(prompt, return_tensors='pt')
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## 📊 Example Outputs
**Prompt:** "Türkçe eğitimi için bir öneri:" ("A suggestion for Turkish education:")
**Answer:** "Ders notlarına dikkat etmeyen ve okumayan öğrencilerin, dersi yoksayarak not almakta, ya da dersle ilgili sorular sormamaktadırlar..."
## 🏆 TEKNOFEST 2025
This model was developed specifically for the TEKNOFEST 2025 **Action-Based Turkish Large Language Model Competition**.
## 📈 Performance
- The model successfully generates Turkish educational content
- After fine-tuning, it is specialized for the education domain
- It produces fluent and meaningful Turkish output
## 📄 License
Apache 2.0
## 🙏 Acknowledgments
We thank everyone who contributed to the development of this model.
---
*TEKNOFEST 2025 - Turkey's Technology Festival* 🇹🇷
| Muapi/super-hip-waist | Muapi | 2025-08-19T20:07:18Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-19T20:07:09Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# super hip-waist

**Base model**: Flux.1 D
**Trained words**:
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:738727@826131", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
| Muapi/cinematic-glamour-photography-style-xl-f1d | Muapi | 2025-08-19T20:07:00Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-19T20:06:36Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Cinematic Glamour Photography style XL + F1D

**Base model**: Flux.1 D
**Trained words**: Diffused , glowing , glamour photography style
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:396060@1338701", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
| gustavomr/llama-finnedtunned-qa | gustavomr | 2025-08-19T20:06:25Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-08-18T20:25:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| phospho-app/zacharyreid-gr00t-Bimanual_4cam_MidAirHandoff-r2eu7 | phospho-app | 2025-08-19T20:06:09Z | 0 | 0 | phosphobot | ["phosphobot", "gr00t", "robotics", "dataset:zacharyreid/Bimanual_4cam_MidAirHandoff", "region:us"] | robotics | 2025-08-19T16:56:58Z |
---
datasets: zacharyreid/Bimanual_4cam_MidAirHandoff
library_name: phosphobot
pipeline_tag: robotics
model_name: gr00t
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Traceback (most recent call last):
File "/opt/conda/lib/python3.11/asyncio/tasks.py", line 500, in wait_for
return fut.result()
^^^^^^^^^^^^
File "/root/phosphobot/am/gr00t.py", line 1146, in read_output
async for line in process.stdout:
File "/opt/conda/lib/python3.11/asyncio/streams.py", line 765, in __anext__
val = await self.readline()
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/streams.py", line 566, in readline
line = await self.readuntil(sep)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/streams.py", line 658, in readuntil
await self._wait_for_data('readuntil')
File "/opt/conda/lib/python3.11/asyncio/streams.py", line 543, in _wait_for_data
await self._waiter
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/root/phosphobot/am/gr00t.py", line 1157, in run_gr00t_training
await asyncio.wait_for(read_output(), timeout=timeout_seconds)
File "/opt/conda/lib/python3.11/asyncio/tasks.py", line 502, in wait_for
raise exceptions.TimeoutError() from exc
TimeoutError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/src/helper.py", line 166, in predict
trainer.train(timeout_seconds=timeout_seconds)
File "/root/phosphobot/am/gr00t.py", line 1325, in train
asyncio.run(
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/root/phosphobot/am/gr00t.py", line 1162, in run_gr00t_training
raise TimeoutError(
TimeoutError: Training process exceeded timeout of 10800 seconds. Please consider lowering the number of epochs and/or batch size.
```
## Training parameters:
- **Dataset**: [zacharyreid/Bimanual_4cam_MidAirHandoff](https://huggingface.co/datasets/zacharyreid/Bimanual_4cam_MidAirHandoff)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 32
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
| kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755632296 | kojeklollipop | 2025-08-19T20:05:55Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "spotted amphibious stork", "arxiv:2504.07091", "region:us"] | null | 2025-08-19T20:05:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| AnonymousCS/xlmr_swedish_immigration4 | AnonymousCS | 2025-08-19T20:04:24Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-08-19T20:01:17Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_swedish_immigration4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_swedish_immigration4
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4123
- Accuracy: 0.8692
- 1-f1: 0.8090
- 1-recall: 0.8372
- 1-precision: 0.7826
- Balanced Acc: 0.8611
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
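Expressed as 🤗 Transformers `TrainingArguments`, these settings would look roughly like the sketch below (`output_dir` is a placeholder; the exact training script is not part of this card):
```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="xlmr_swedish_immigration4",
    learning_rate=1e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    optim="adamw_torch_fused",  # OptimizerNames.ADAMW_TORCH_FUSED
    lr_scheduler_type="linear",
    num_train_epochs=15,
    fp16=True,  # "Native AMP" mixed precision
)
```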
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.3763 | 1.0 | 5 | 0.3371 | 0.8692 | 0.7792 | 0.6977 | 0.8824 | 0.8258 |
| 0.241 | 2.0 | 10 | 0.4029 | 0.8692 | 0.8046 | 0.8140 | 0.7955 | 0.8553 |
| 0.2721 | 3.0 | 15 | 0.4123 | 0.8692 | 0.8090 | 0.8372 | 0.7826 | 0.8611 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
dfg1d2/sbi-gpt
|
dfg1d2
| 2025-08-19T20:04:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gpt_oss",
"trl",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T20:04:08Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** dfg1d2
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
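A minimal loading sketch (assuming this repo hosts merged weights in transformers format; if it only contains a LoRA adapter, attach it with peft instead):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("dfg1d2/sbi-gpt")
model = AutoModelForCausalLM.from_pretrained(
    "dfg1d2/sbi-gpt", torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Summarize what this model was fine-tuned for."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```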
|
Muapi/wizard-s-scrap-yard-supermarionation-puppets
|
Muapi
| 2025-08-19T20:03:55Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T20:03:34Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Wizard's Scrap Yard: Supermarionation Puppets

**Base model**: Flux.1 D
**Trained words**: Thunderbirds Puppet, Puppet
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:694054@817429", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1755633777
|
Vasya777
| 2025-08-19T20:03:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T20:03:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/colorful-detailer-semifluid-pigments-flux-sd-3.5m-sd-3.5l
|
Muapi
| 2025-08-19T20:01:10Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T20:00:56Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# colorful detailer | semifluid pigments (Flux & SD 3.5M & SD 3.5L)

**Base model**: Flux.1 D
**Trained words**:
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:757175@846653", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
unitova/blockassist-bc-zealous_sneaky_raven_1755626317
|
unitova
| 2025-08-19T18:26:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T18:26:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755627918
|
Dejiat
| 2025-08-19T18:26:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T18:25:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755627829
|
Ferdi3425
| 2025-08-19T18:25:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T18:24:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
roeker/blockassist-bc-quick_wiry_owl_1755627810
|
roeker
| 2025-08-19T18:24:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T18:24:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1755627800
|
Vasya777
| 2025-08-19T18:23:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T18:23:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sophie-Rain-Spider-man-Video-Tutori-a-l/Sophie.Rain.Spiderman.Video.Tutorial.Oficial
|
Sophie-Rain-Spider-man-Video-Tutori-a-l
| 2025-08-19T18:23:43Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T18:22:07Z |
|
Reynier/modernbert-dga-detector
|
Reynier
| 2025-08-19T18:23:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"text-classification",
"domain-generation-algorithm",
"cybersecurity",
"domain-classification",
"security",
"malware-detection",
"en",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T18:22:40Z |
---
license: apache-2.0
tags:
- domain-generation-algorithm
- cybersecurity
- domain-classification
- security
- malware-detection
language:
- en
library_name: transformers
pipeline_tag: text-classification
base_model: answerdotai/ModernBERT-base
---
# ModernBERT DGA Detector
This model is designed to classify domains as either legitimate or generated by Domain Generation Algorithms (DGA).
## Model Description
- **Model Type:** BERT-based sequence classification
- **Task:** Binary classification (Legitimate vs DGA domains)
- **Base Model:** ModernBERT-base
- **Training Data:** Domain names dataset
- **Author:** Reynier Leyva La O
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("Reynier/modernbert-dga-detector")
model = AutoModelForSequenceClassification.from_pretrained("Reynier/modernbert-dga-detector")
# Example prediction
def predict_domain(domain):
inputs = tokenizer(domain, return_tensors="pt", max_length=64, truncation=True, padding=True)
with torch.no_grad():
outputs = model(**inputs)
predictions = torch.softmax(outputs.logits, dim=-1)
legit_prob = predictions[0][0].item()
dga_prob = predictions[0][1].item()
return {"prediction": "DGA" if dga_prob > legit_prob else "LEGITIMATE",
"confidence": max(legit_prob, dga_prob)}
# Test examples
domains = ["google.com", "xkvbzpqr.net", "facebook.com", "abcdef123456.com"]
for domain in domains:
result = predict_domain(domain)
print(f"{domain} -> {result['prediction']} (confidence: {result['confidence']:.3f})")
```
## Model Architecture
The model is based on ModernBERT and fine-tuned for domain classification:
- Input: Domain names (text)
- Output: Binary classification (0=Legitimate, 1=DGA)
- Max sequence length: 64 tokens
## Training Details
This model was fine-tuned on a dataset of legitimate and DGA-generated domains using:
- Base model: answerdotai/ModernBERT-base
- Framework: Transformers/PyTorch
- Task: Binary sequence classification
## Performance
Performance metrics have not yet been reported:
- Accuracy: [Add your results]
- Precision: [Add your results]
- Recall: [Add your results]
- F1-Score: [Add your results]
## Use Cases
- **Cybersecurity**: Detect malicious domains generated by malware
- **Network Security**: Filter potentially harmful domains
- **Threat Intelligence**: Analyze domain patterns in security feeds
## Limitations
- This model is trained specifically for domain classification
- Performance may vary on domains from different TLDs or languages
- Regular retraining may be needed as DGA techniques evolve
- Model performance depends on the quality and diversity of training data
## Citation
If you use this model in your research or applications, please cite it appropriately.
## Related Models
Check out the author's other security models:
- [Llama3_8B-DGA-Detector](https://huggingface.co/Reynier/Llama3_8B-DGA-Detector)
|
VIDEOS-18-Dr-Eman-viral-video-Clips/New.full.videos.Dr.Eman.Viral.Video.Official.Tutorial
|
VIDEOS-18-Dr-Eman-viral-video-Clips
| 2025-08-19T18:23:00Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T18:22:46Z |
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755626307
|
lisaozill03
| 2025-08-19T18:22:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T18:22:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755626126
|
mang3dd
| 2025-08-19T18:22:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T18:22:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NESTLAYER/Sombrero-charro
|
NESTLAYER
| 2025-08-19T18:22:37Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-19T18:22:37Z |
---
license: apache-2.0
---
|
Kandru/fine_tune_classification_task
|
Kandru
| 2025-08-19T18:19:10Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T18:17:24Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: fine_tune_classification_task
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tune_classification_task
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3024
- Accuracy: 0.9193
- F1: 0.9193
## Model description
More information needed
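A minimal inference sketch (the dataset and label names are undocumented, so outputs may show generic LABEL_0/LABEL_1 ids):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="Kandru/fine_tune_classification_task")
print(clf("This movie was surprisingly good."))  # e.g. [{'label': 'LABEL_1', 'score': ...}]
```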
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 313 | 0.2866 | 0.9162 | 0.9160 |
| 0.1256 | 2.0 | 626 | 0.3024 | 0.9193 | 0.9193 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.4
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755625475
|
katanyasekolah
| 2025-08-19T18:13:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T18:13:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
TAUR-dev/M-SBON_skills_in_rl_v2__1e6_all_tasks_sft-rl_all_tasks-rl
|
TAUR-dev
| 2025-08-19T18:13:30Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"en",
"license:mit",
"region:us"
] | null | 2025-08-19T18:11:52Z |
---
language: en
license: mit
---
# M-SBON_skills_in_rl_v2__1e6_all_tasks_sft-rl_all_tasks-rl
## Model Details
- **Training Method**: VeRL Reinforcement Learning (RL)
- **Stage Name**: rl
- **Experiment**: SBON_skills_in_rl_v2__1e6_all_tasks_sft-rl_all_tasks
- **RL Framework**: VeRL (Versatile Reinforcement Learning)
## Training Configuration
## Experiment Tracking
🔗 **View complete experiment details**: https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__SBON_skills_in_rl_v2__1e6_all_tasks_sft-rl_all_tasks__v1
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-SBON_skills_in_rl_v2__1e6_all_tasks_sft-rl_all_tasks-rl")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-SBON_skills_in_rl_v2__1e6_all_tasks_sft-rl_all_tasks-rl")
```
|
VIDEOS-19-tasnim-jara-viral-video-link/New.full.videos.tasnim.jara.Viral.Video.Official.Tutorial
|
VIDEOS-19-tasnim-jara-viral-video-link
| 2025-08-19T18:13:14Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T18:12:57Z |
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755625509
|
kojeklollipop
| 2025-08-19T18:12:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T18:12:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Shujashark/llm-sql-t5-small-lora-adapter
|
Shujashark
| 2025-08-19T18:12:24Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:t5-small",
"lora",
"transformers",
"arxiv:1910.09700",
"base_model:google-t5/t5-small",
"base_model:adapter:google-t5/t5-small",
"region:us"
] | null | 2025-08-19T18:08:48Z |
---
base_model: t5-small
library_name: peft
tags:
- base_model:adapter:t5-small
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
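In the absence of documented instructions, here is a minimal loading sketch (assuming the adapter follows the standard PEFT layout; the text-to-SQL prompt format is a guess from the repo name):
```python
from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the t5-small base model, then attach the LoRA adapter from this repo.
base = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
model = PeftModel.from_pretrained(base, "Shujashark/llm-sql-t5-small-lora-adapter")
tokenizer = AutoTokenizer.from_pretrained("t5-small")

# Hypothetical prompt format: the actual training template is not documented here.
inputs = tokenizer("translate to SQL: list all users older than 30", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```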
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
aumoai/aumogpt-Qwen2.5-32B-Instruct-lora
|
aumoai
| 2025-08-19T18:12:14Z | 67 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-05-08T15:53:49Z |
```yaml
model:
model_name: "Qwen/Qwen2.5-32B-Instruct"
model_max_length: 4096
torch_dtype_str: "bfloat16"
attn_implementation: "flash_attention_2" #"sdpa"
load_pretrained_weights: True
trust_remote_code: True
data:
train:
datasets:
# - dataset_name: "text_sft"
# dataset_path: "datasets/aumo_dataset_test.json"
# shuffle: True
# seed: 42
- dataset_name: "text_sft"
dataset_path: "datasets/aumogpt_qwen32b.json"
shuffle: True
seed: 42
# - dataset_name: "text_sft"
# dataset_path: "datasets/xp3_qwen_2000.json"
# shuffle: True
# seed: 42
# - dataset_name: "text_sft"
# dataset_path: "datasets/aumogpt_train.json"
# shuffle: True
# seed: 42
# mixture_strategy: "all_exhausted" # Strategy for mixing datasets
# seed: 123456789426465
validation:
datasets:
- dataset_name: "text_sft"
dataset_path: "datasets/aumo_dataset_test.json"
# split: "validation"
# sample_count: 10
training:
trainer_type: "TRL_SFT"
use_peft: True
save_steps: 200
num_train_epochs: 2
per_device_train_batch_size: 2
per_device_eval_batch_size: 2
gradient_accumulation_steps: 8
max_grad_norm: null
enable_gradient_checkpointing: True
gradient_checkpointing_kwargs:
use_reentrant: False
ddp_find_unused_parameters: False
optimizer: "adamw_torch" # "adamw_torch" #paged_adamw_8bit
learning_rate: 5.0e-4
warmup_steps: 10
weight_decay: 0.01
compile: False
dataloader_num_workers: 8
dataloader_prefetch_factor: 4
logging_steps: 10
log_model_summary: False
empty_device_cache_steps: 50
output_dir: "results/oumi/qwen32b_xp3_aumo.lora"
include_performance_metrics: True
enable_wandb: True
eval_strategy: "steps" # When to evaluate ("no", "steps", "epoch")
eval_steps: 25
peft:
q_lora: False
lora_r: 64
lora_alpha: 32
lora_dropout: 0.2
lora_target_modules:
- "q_proj"
- "k_proj"
- "v_proj"
- "o_proj"
- "gate_proj"
- "down_proj"
- "up_proj"
fsdp:
enable_fsdp: True
sharding_strategy: FULL_SHARD
auto_wrap_policy: TRANSFORMER_BASED_WRAP
# transformer_layer_cls: QwenBlock
forward_prefetch: true
```
|
matboz/ring-model
|
matboz
| 2025-08-19T18:11:54Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:google/gemma-2-27b-it",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:google/gemma-2-27b-it",
"region:us"
] |
text-generation
| 2025-08-19T17:54:47Z |
---
base_model: google/gemma-2-27b-it
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:google/gemma-2-27b-it
- lora
- sft
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
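A minimal loading sketch, assuming access to the gated google/gemma-2-27b-it base and a standard PEFT adapter layout:
```python
import torch
from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the gated base model, then attach the LoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-27b-it", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "matboz/ring-model")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```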
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
siro-kr/gpt-oss-10.8b-specialized-harmful-pruned-moe-only-15-experts-Q4_K_M-GGUF
|
siro-kr
| 2025-08-19T18:11:43Z | 0 | 0 | null |
[
"gguf",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"harmful",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"base_model:AmanPriyanshu/gpt-oss-10.8b-specialized-harmful-pruned-moe-only-15-experts",
"base_model:quantized:AmanPriyanshu/gpt-oss-10.8b-specialized-harmful-pruned-moe-only-15-experts",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T18:11:06Z |
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- harmful
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
- llama-cpp
- gguf-my-repo
base_model: AmanPriyanshu/gpt-oss-10.8b-specialized-harmful-pruned-moe-only-15-experts
---
# siro-kr/gpt-oss-10.8b-specialized-harmful-pruned-moe-only-15-experts-Q4_K_M-GGUF
This model was converted to GGUF format from [`AmanPriyanshu/gpt-oss-10.8b-specialized-harmful-pruned-moe-only-15-experts`](https://huggingface.co/AmanPriyanshu/gpt-oss-10.8b-specialized-harmful-pruned-moe-only-15-experts) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/AmanPriyanshu/gpt-oss-10.8b-specialized-harmful-pruned-moe-only-15-experts) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo siro-kr/gpt-oss-10.8b-specialized-harmful-pruned-moe-only-15-experts-Q4_K_M-GGUF --hf-file gpt-oss-10.8b-specialized-harmful-pruned-moe-only-15-experts-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo siro-kr/gpt-oss-10.8b-specialized-harmful-pruned-moe-only-15-experts-Q4_K_M-GGUF --hf-file gpt-oss-10.8b-specialized-harmful-pruned-moe-only-15-experts-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo siro-kr/gpt-oss-10.8b-specialized-harmful-pruned-moe-only-15-experts-Q4_K_M-GGUF --hf-file gpt-oss-10.8b-specialized-harmful-pruned-moe-only-15-experts-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo siro-kr/gpt-oss-10.8b-specialized-harmful-pruned-moe-only-15-experts-Q4_K_M-GGUF --hf-file gpt-oss-10.8b-specialized-harmful-pruned-moe-only-15-experts-q4_k_m.gguf -c 2048
```
|
mohda/blockassist-bc-regal_fierce_hummingbird_1755627037
|
mohda
| 2025-08-19T18:11:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal fierce hummingbird",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T18:11:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal fierce hummingbird
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RTannous/metallama-finetuned
|
RTannous
| 2025-08-19T18:10:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T18:08:19Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** RTannous
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1755625235
|
coelacanthxyz
| 2025-08-19T18:10:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T18:10:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ver-filtrado-video-abigail-lalama-snayder/VIRAL.VER.filtrado.video.de.abigail.lalama.y.snayder.influencer.se.hace.viral.en.redes.sociales
|
Ver-filtrado-video-abigail-lalama-snayder
| 2025-08-19T18:08:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T18:07:38Z |
Abigail Lalama goes viral: influencer confirms video with Snayder on Telegram
The name Abigail Lalama is trending on Telegram and X after the leak of her video with Snayder. Here is who they are and what happened.
Image: Abigail Lalama goes viral: influencer confirms video with Snayder on Telegram
Abigail confirmed the leak of the content. - Photo: Instagram abigail_lalama
The name Abigail Lalama went viral on social networks such as Telegram and Twitter (now X) after the leak of an intimate video with Snayder was confirmed. This triggered a spike in searches such as "Abigail Lalama leaked video", "Abigail and Snayder leaked video" and "Abigail Lalama Telegram", among others.
The confirmation came directly from the influencer, which set off a wave of comments, reactions and solidarity online. Below, we explain who Abigail Lalama is, who Snayder is, what she said, and why the case went viral so quickly.
SEE ALSO: What did Mariana Botas say about Drake Bell, and how did the singer respond?
Who is Abigail Lalama and why is she known?
Abigail Lalama is a 22-year-old Ecuadorian content creator from Guayaquil who has gained popularity mainly on TikTok, where she shares live streams, challenges, family moments and everyday content together with her twin sister, Génesis.
Her community has grown thanks to her approachability, charisma and family-oriented style. On TikTok, her account @laoficialabigail has more than 400,000 followers, and on Instagram she exceeds 173,000. With her sister she forms the 'Team Lalama', posting content centered on daily life, motherhood and family ties.
Who is Snayder and what video was leaked?
Snayder's exact identity has not been revealed by the main outlets consulted so far, but he is known to be part of Abigail's circle, allegedly her ex-partner.
The leaked video, described as intimate, was uploaded to platforms such as Telegram and TikTok. Users reported that the young woman in the viral recording shared tattoos and features with Abigail Lalama. According to Abigail, the video circulated without her consent, and she attributes the leak to that ex.
What did Abigail Lalama say about the leak?
In a live video, visibly affected and in tears, Abigail Lalama confirmed the leak of the content. She directly accused her ex-partner of having spread the intimate material with the intention of disrupting her new relationship.
Her words were harsh and reflected disappointment and pain: "That is the worst thing a man can do... Yes, I was foolish to let myself be filmed with him... And being my partner, he goes and uploads it... the shameless man. He has no shame."
She also added a reflection on trust and betrayal: "That's when you realize who you sleep next to; that's when they take off the mask." She later shared in her stories videos of fans supporting her, along with calmer clips, showing resilience in the face of the situation.
What does the law say about content leaked without consent?
In Ecuador, sharing intimate content without consent is punishable under the Comprehensive Organic Criminal Code (COIP). Article 178 penalizes the dissemination of intimate images or videos without authorization with one to three years in prison. In addition, Article 230, on digital violence, can raise the penalty to up to four years when there is intent to harm or humiliate.
Abigail can file a complaint with the Attorney General's Office, the Community Police Unit (UPC) or the Council of the Judiciary's portal, or turn to the Ombudsman's Office and organizations that provide psychological support and legal advice. She can also request measures such as removal of the content and full reparation.
|
policia-instagram-influencer-virals-video/Hot.New.full.videos.policia.instagram.influencer.viral.video.Official.Tutorial
|
policia-instagram-influencer-virals-video
| 2025-08-19T18:07:08Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T18:06:53Z |
|
lautan/blockassist-bc-gentle_patterned_goat_1755624839
|
lautan
| 2025-08-19T18:02:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle patterned goat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T18:02:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle patterned goat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MidnightRunner/MDNT_Illus_3D
|
MidnightRunner
| 2025-08-19T17:59:02Z | 0 | 0 |
diffusers
|
[
"diffusers",
"SDXL",
"mdnt-illus",
"3D-hybrid",
"anaglyph",
"photoreal",
"cinematic",
"text-to-image",
"ComfyUI",
"Automatic1111",
"en",
"base_model:MidnightRunner/MDNT_Illus",
"base_model:finetune:MidnightRunner/MDNT_Illus",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-08-19T14:50:48Z |
---
license: creativeml-openrail-m
language:
- en
base_model:
- OnomaAIResearch/Illustrious-XL-v2.0
- MidnightRunner/MDNT_Illus
tags:
- SDXL
- mdnt-illus
- 3D-hybrid
- anaglyph
- photoreal
- cinematic
- text-to-image
- ComfyUI
- Automatic1111
- diffusers
pipeline_tag: text-to-image
library_name: diffusers
metrics:
- FID
- IS
widget:
- text: >-
(masterpiece), extremely aesthetic, newest, very vibrant colors, (ultra-HD),
(cinematic lighting), (photorealistic), high detail, depth of field, best
quality, absurdres,
parameters:
negative_prompt: >-
bad hands, extra digits, (multiple views:1.1), (bad:1.05), fewer, extra,
missing, worst quality, jpeg artifacts, bad quality, watermark,
unfinished, displeasing, sepia, sketch, flat color, signature, artistic
error, username, scan, (blurry, lowres, worst quality, (low quality:1.1),
ugly, (bad anatomy:1.05), artist name, (patreon username:1.2)
output:
url: mdnt_illus_3d_sample.jpeg
---
# MDNT_Illus_3D
Model type: diffusion-based text-to-image
Base model: Illustrious XL v2.0
Merged with: MIDNIGHT Illustrious, MDNT_Illus, Hyphorias, Nova3D-CGXL, BetterDaysIllus
License: CreativeML Open RAIL++-M
## Model description
MDNT_Illus_3D is a precision finetune focused on a 3D-hybrid aesthetic, balancing photoreal fidelity with simulated depth and sculpted form. It emphasizes anaglyphic layering, volumetric lighting, cinematic depth of field, and richly detailed textures to produce imagery that feels tactile, immersive, and dramatically lit.
## Usage recommendations
### Sampling methods
- Euler A (Euler ancestral)
- DPM++ 2M Karras
- DPM++ 2M SDE Karras
- DPM++ 3M SDE Exponential
### Settings
- Steps: 25–45
- CFG scale: 4 (range 3–4)
- Clip skip: 1
### Workflow
Compatible with ComfyUI and Automatic1111. A tailored ComfyUI workflow may be added later to maximize spatial layering and volumetric light behavior.
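A minimal diffusers sketch using the recommended settings (assuming the checkpoint is published in diffusers format under this repo id; if only a single safetensors file is available, use `from_single_file` instead):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "MidnightRunner/MDNT_Illus_3D", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "masterpiece, very aesthetic, photorealistic, volumetric lighting, depth of field",
    negative_prompt="worst quality, lowres, bad anatomy, watermark",
    num_inference_steps=30,  # recommended range: 25-45
    guidance_scale=4.0,      # recommended CFG: 3-4
).images[0]
image.save("mdnt_illus_3d.png")
```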
## Prompt guidance
Positive (example)
```
realistic, photorealistic, very aesthetic, best quality, absurdres, masterpiece,
amazing quality, newest, scenery, depth of field, high-resolution, high definition,
visually intense anaglyphic experience, volumetric lighting, cinematic, sculpted 3D form
```
Negative (example)
```
bad hands, extra digits, (multiple views:1.1), (bad:1.05), fewer, extra, missing,
worst quality, jpeg artifacts, watermark, unfinished, sketch, flat color, signature,
artist name, blurry, lowres, (bad anatomy:1.05), (patreon username:1.2)
```
## Version changes / notes
- v1.0 (initial release)
- Introduces 3D-hybrid realism with anaglyphic depth and volumetric lighting
- Blended with Hyphorias, Nova 3DCG XL, BetterDaysIllus for expanded range
## Acknowledgments
Base: Illustrious XL v2.0
Merges: MIDNIGHT Illustrious, MDNT_Illus, Hyphorias, Nova 3DCG XL, BetterDaysIllus
## Additional Resources
- **Creative Solutions and Services:** [Magnabos.co](https://magnabos.co/)
## License
This model is licensed under the [CreativeML Open RAIL++-M License](https://github.com/CompVis/stable-diffusion/blob/main/LICENSE). By using this model, you agree to the terms and conditions outlined in the license.
|
thanobidex/blockassist-bc-colorful_shiny_hare_1755624690
|
thanobidex
| 2025-08-19T17:58:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful shiny hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:58:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
giacomoarienti/orpheus-3b-ita-male-full
|
giacomoarienti
| 2025-08-19T17:55:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/orpheus-3b-0.1-ft",
"base_model:finetune:unsloth/orpheus-3b-0.1-ft",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T17:54:42Z |
---
base_model: unsloth/orpheus-3b-0.1-ft
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** giacomoarienti
- **License:** apache-2.0
- **Finetuned from model :** unsloth/orpheus-3b-0.1-ft
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755624471
|
quantumxnode
| 2025-08-19T17:55:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:54:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755624371
|
vwzyrraz7l
| 2025-08-19T17:54:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:54:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755626034
|
Dejiat
| 2025-08-19T17:54:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:54:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755624535
|
lisaozill03
| 2025-08-19T17:53:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:53:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VIDEOS-19-afrin-apu-viral-link/New.full.videos.afrin.apu.Viral.Video.Official.Tutorial
|
VIDEOS-19-afrin-apu-viral-link
| 2025-08-19T17:51:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T17:51:41Z |
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755624582
|
Sayemahsjn
| 2025-08-19T17:51:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:51:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755624343
|
mang3dd
| 2025-08-19T17:51:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:51:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755624249
|
helmutsukocok
| 2025-08-19T17:50:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:50:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755625775
|
Dejiat
| 2025-08-19T17:50:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:50:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AppliedLucent/nemo-phase6
|
AppliedLucent
| 2025-08-19T17:49:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:AppliedLucent/nemo-phase5",
"base_model:finetune:AppliedLucent/nemo-phase5",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T17:38:35Z |
---
base_model: AppliedLucent/nemo-phase5
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** AppliedLucent
- **License:** apache-2.0
- **Finetuned from model :** AppliedLucent/nemo-phase5
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dgambettaphd/M_mis_run2_gen2_WXS_doc1000_synt64_lr1e-04_acm_LANG
|
dgambettaphd
| 2025-08-19T17:48:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T17:48:32Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ColaChameleon/lilly
|
ColaChameleon
| 2025-08-19T17:48:04Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-08-19T17:46:20Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/_8eb450b2-aa12-4d00-9906-e7743395311c.jpg
text: '-'
- output:
url: images/df0r49j-37a375fe-e811-4702-9278-d7e062d15f18.png
text: '-'
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# lilly
<Gallery />
## Download model
[Download](/ColaChameleon/lilly/tree/main) them in the Files & versions tab.
|
ngozimagen/ngozi-lora
|
ngozimagen
| 2025-08-19T17:46:53Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-19T16:59:52Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ngozi
---
# Ngozi Lora
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ngozi` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "ngozi",
"lora_weights": "https://huggingface.co/ngozimagen/ngozi-lora/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ngozimagen/ngozi-lora', weight_name='lora.safetensors')
image = pipeline('ngozi').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/ngozimagen/ngozi-lora/discussions) to add images that show off what you’ve made with this LoRA.
|
arka7/Llama-3.2-3B-Instruct-bnb-4bit-rag-finetuned-with-DPO
|
arka7
| 2025-08-19T17:45:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T17:45:17Z |
---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** arka7
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
beyoru/MuoQVAgent-1.0
|
beyoru
| 2025-08-19T17:44:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-4B-Instruct-2507",
"base_model:finetune:unsloth/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T17:42:04Z |
---
base_model: unsloth/Qwen3-4B-Instruct-2507
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** beyoru
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B-Instruct-2507
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Timia123/simpo_inpo_iter2_aug19
|
Timia123
| 2025-08-19T17:43:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"alignment-handbook",
"inpo",
"generated_from_trainer",
"conversational",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T17:40:20Z |
---
library_name: transformers
base_model: google/gemma-2-9b-it
tags:
- alignment-handbook
- inpo
- generated_from_trainer
model-index:
- name: gemma-2-9b-it_inpo_stage_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-2-9b-it_inpo_stage_2
This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) on the data/inpo_iter2/pref dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-07
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.2.2
- Datasets 2.14.6
- Tokenizers 0.19.1
|
phospho-app/Deimos252-gr00t-Light_dataset_deimos-p48ab
|
phospho-app
| 2025-08-19T17:43:37Z | 0 | 0 |
phosphobot
|
[
"phosphobot",
"safetensors",
"gr00t_n1_5",
"gr00t",
"robotics",
"dataset:Deimos252/Light_dataset_deimos",
"region:us"
] |
robotics
| 2025-08-19T17:13:51Z |
---
datasets: Deimos252/Light_dataset_deimos
library_name: phosphobot
pipeline_tag: robotics
model_name: gr00t
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful, try it out on your robot!
## Training parameters:
- **Dataset**: [Deimos252/Light_dataset_deimos](https://huggingface.co/datasets/Deimos252/Light_dataset_deimos)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 49
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
VIDEOS-19-Afrin-Er-Viral-Video-Clip/New.full.videos.Afrin.Er.Viral.Video.Official.Tutorial
|
VIDEOS-19-Afrin-Er-Viral-Video-Clip
| 2025-08-19T17:43:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T17:42:55Z |
|
numind/NuExtract-2.0-8B-GPTQ
|
numind
| 2025-08-19T17:42:43Z | 346 | 4 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"image-text-to-text",
"conversational",
"base_model:numind/NuExtract-2.0-8B",
"base_model:quantized:numind/NuExtract-2.0-8B",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
image-text-to-text
| 2025-06-06T08:38:54Z |
---
library_name: transformers
license: mit
base_model:
- numind/NuExtract-2.0-8B
pipeline_tag: image-text-to-text
---
<p align="center">
<a href="https://nuextract.ai/">
<img src="logo_nuextract.svg" width="200"/>
</a>
</p>
<p align="center">
🖥️ <a href="https://nuextract.ai/">API / Platform</a>   |   📑 <a href="https://numind.ai/blog">Blog</a>   |   🗣️ <a href="https://discord.gg/3tsEtJNCDe">Discord</a>
</p>
# NuExtract 2.0 8B by NuMind 🔥
NuExtract 2.0 is a family of models trained specifically for structured information extraction tasks. It supports both multimodal inputs and is multilingual.
We provide several versions of different sizes, all based on pre-trained models from the QwenVL family.
| Model Size | Model Name | Base Model | License | Huggingface Link |
|------------|------------|------------|---------|------------------|
| 2B | NuExtract-2.0-2B | [Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct) | MIT | 🤗 [NuExtract-2.0-2B](https://huggingface.co/numind/NuExtract-2.0-2B) |
| 4B | NuExtract-2.0-4B | [Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) | Qwen Research License | 🤗 [NuExtract-2.0-4B](https://huggingface.co/numind/NuExtract-2.0-4B) |
| 8B | NuExtract-2.0-8B | [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) | MIT | 🤗 [NuExtract-2.0-8B](https://huggingface.co/numind/NuExtract-2.0-8B) |
❗️Note: `NuExtract-2.0-2B` is based on Qwen2-VL rather than Qwen2.5-VL because the smallest Qwen2.5-VL model (3B) has a more restrictive, non-commercial license. We therefore include `NuExtract-2.0-2B` as a small model option that can be used commercially.
## Benchmark
Performance on a collection of ~1,000 diverse extraction examples containing both text and image inputs.
<a href="https://nuextract.ai/">
<img src="nuextract2_bench.png" width="500"/>
</a>
## Overview
To use the model, provide an input text or image and a JSON template describing the information you need to extract. The template should be a JSON object specifying field names and their expected types.
Supported types include:
* `verbatim-string` - instructs the model to extract text that is present verbatim in the input.
* `string` - a generic string field that can incorporate paraphrasing/abstraction.
* `integer` - a whole number.
* `number` - a whole or decimal number.
* `date-time` - ISO-formatted date.
* Array of any of the above types (e.g. `["string"]`)
* `enum` - a choice from a set of possible answers (represented in the template as an array of options, e.g. `["yes", "no", "maybe"]`).
* `multi-label` - an enum that can have multiple possible answers (represented in the template as a double-wrapped array, e.g. `[["A", "B", "C"]]`).
If the model does not identify relevant information for a field, it will return `null` or `[]` (for arrays and multi-labels).
The following is an example template:
```json
{
"first_name": "verbatim-string",
"last_name": "verbatim-string",
"description": "string",
"age": "integer",
"gpa": "number",
"birth_date": "date-time",
"nationality": ["France", "England", "Japan", "USA", "China"],
"languages_spoken": [["English", "French", "Japanese", "Mandarin", "Spanish"]]
}
```
An example output:
```json
{
"first_name": "Susan",
"last_name": "Smith",
"description": "A student studying computer science.",
"age": 20,
"gpa": 3.7,
"birth_date": "2005-03-01",
"nationality": "England",
"languages_spoken": ["English", "French"]
}
```
⚠️ We recommend using NuExtract with a temperature at or very close to 0. Some inference frameworks, such as Ollama, use a default of 0.7, which is not well suited to many extraction tasks.
## Using NuExtract with 🤗 Transformers
```python
import torch
from transformers import AutoProcessor
from gptqmodel import GPTQModel
model_name = "numind/NuExtract-2.0-8B-GPTQ"
# model_name = "numind/NuExtract-2.0-4B-GPTQ"
model = GPTQModel.load(model_name)
processor = AutoProcessor.from_pretrained(model_name,
trust_remote_code=True,
padding_side='left',
use_fast=True)
# You can set min_pixels and max_pixels according to your needs, such as a token range of 256-1280, to balance performance and cost.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained(model_name, min_pixels=min_pixels, max_pixels=max_pixels)
```
You will need the following function to handle loading of image input data:
```python
def process_all_vision_info(messages, examples=None):
"""
Process vision information from both messages and in-context examples, supporting batch processing.
Args:
messages: List of message dictionaries (single input) OR list of message lists (batch input)
examples: Optional list of example dictionaries (single input) OR list of example lists (batch)
Returns:
A flat list of all images in the correct order:
- For single input: example images followed by message images
- For batch input: interleaved as (item1 examples, item1 input, item2 examples, item2 input, etc.)
- Returns None if no images were found
"""
from qwen_vl_utils import process_vision_info, fetch_image
# Helper function to extract images from examples
def extract_example_images(example_item):
if not example_item:
return []
# Handle both list of examples and single example
examples_to_process = example_item if isinstance(example_item, list) else [example_item]
images = []
for example in examples_to_process:
if isinstance(example.get('input'), dict) and example['input'].get('type') == 'image':
images.append(fetch_image(example['input']))
return images
# Normalize inputs to always be batched format
is_batch = messages and isinstance(messages[0], list)
messages_batch = messages if is_batch else [messages]
is_batch_examples = examples and isinstance(examples, list) and (isinstance(examples[0], list) or examples[0] is None)
examples_batch = examples if is_batch_examples else ([examples] if examples is not None else None)
# Ensure examples batch matches messages batch if provided
if examples and len(examples_batch) != len(messages_batch):
if not is_batch and len(examples_batch) == 1:
# Single example set for a single input is fine
pass
else:
raise ValueError("Examples batch length must match messages batch length")
# Process all inputs, maintaining correct order
all_images = []
for i, message_group in enumerate(messages_batch):
# Get example images for this input
if examples and i < len(examples_batch):
input_example_images = extract_example_images(examples_batch[i])
all_images.extend(input_example_images)
# Get message images for this input
input_message_images = process_vision_info(message_group)[0] or []
all_images.extend(input_message_images)
return all_images if all_images else None
```
E.g. To perform a basic extraction of names from a text document:
```python
template = """{"names": ["string"]}"""
document = "John went to the restaurant with Mary. James went to the cinema."
# prepare the user message content
messages = [{"role": "user", "content": document}]
text = processor.tokenizer.apply_chat_template(
messages,
template=template, # template is specified here
tokenize=False,
add_generation_prompt=True,
)
print(text)
""""<|im_start|>user
# Template:
{"names": ["string"]}
# Context:
John went to the restaurant with Mary. James went to the cinema.<|im_end|>
<|im_start|>assistant"""
image_inputs = process_all_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
padding=True,
return_tensors="pt",
).to("cuda")
# we choose greedy sampling here, which works well for most information extraction tasks
generation_config = {"do_sample": False, "num_beams": 1, "max_new_tokens": 2048}
# Inference: Generation of the output
generated_ids = model.generate(
**inputs,
**generation_config
)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
# ['{"names": ["John", "Mary", "James"]}']
```
<details>
<summary>In-Context Examples</summary>
Sometimes the model might not perform as well as we want because our task is challenging or involves some degree of ambiguity. Alternatively, we may want the model to follow some specific formatting, or just give it a bit more help. In cases like these, it can be valuable to provide "in-context examples" to help NuExtract better understand the task.
To do so, we can provide a list of examples (dictionaries of input/output pairs). In the example below, we show the model that we want the extracted names to be in capital letters with `-` on either side (for the sake of illustration). Providing multiple examples usually leads to better results.
```python
template = """{"names": ["string"]}"""
document = "John went to the restaurant with Mary. James went to the cinema."
examples = [
{
"input": "Stephen is the manager at Susan's store.",
"output": """{"names": ["-STEPHEN-", "-SUSAN-"]}"""
}
]
messages = [{"role": "user", "content": document}]
text = processor.tokenizer.apply_chat_template(
messages,
template=template,
examples=examples, # examples provided here
tokenize=False,
add_generation_prompt=True,
)
image_inputs = process_all_vision_info(messages, examples)
inputs = processor(
text=[text],
images=image_inputs,
padding=True,
return_tensors="pt",
).to("cuda")
# we choose greedy sampling here, which works well for most information extraction tasks
generation_config = {"do_sample": False, "num_beams": 1, "max_new_tokens": 2048}
# Inference: Generation of the output
generated_ids = model.generate(
**inputs,
**generation_config
)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
# ['{"names": ["-JOHN-", "-MARY-", "-JAMES-"]}']
```
</details>
<details>
<summary>Image Inputs</summary>
To give NuExtract image inputs instead of text, simply provide a dictionary specifying the desired image file as the message content, rather than a string (e.g. `{"type": "image", "image": "file://image.jpg"}`).
You can also specify an image URL (e.g. `{"type": "image", "image": "http://path/to/your/image.jpg"}`) or base64 encoding (e.g. `{"type": "image", "image": "data:image;base64,/9j/..."}`).
```python
template = """{"store": "verbatim-string"}"""
document = {"type": "image", "image": "file://1.jpg"}
messages = [{"role": "user", "content": [document]}]
text = processor.tokenizer.apply_chat_template(
messages,
template=template,
tokenize=False,
add_generation_prompt=True,
)
image_inputs = process_all_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
padding=True,
return_tensors="pt",
).to("cuda")
generation_config = {"do_sample": False, "num_beams": 1, "max_new_tokens": 2048}
# Inference: Generation of the output
generated_ids = model.generate(
**inputs,
**generation_config
)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
# ['{"store": "Trader Joe\'s"}']
```
</details>
<details>
<summary>Batch Inference</summary>
```python
inputs = [
# image input with no ICL examples
{
"document": {"type": "image", "image": "file://0.jpg"},
"template": """{"store_name": "verbatim-string"}""",
},
# image input with 1 ICL example
{
"document": {"type": "image", "image": "file://0.jpg"},
"template": """{"store_name": "verbatim-string"}""",
"examples": [
{
"input": {"type": "image", "image": "file://1.jpg"},
"output": """{"store_name": "Trader Joe's"}""",
}
],
},
# text input with no ICL examples
{
"document": {"type": "text", "text": "John went to the restaurant with Mary. James went to the cinema."},
"template": """{"names": ["string"]}""",
},
# text input with ICL example
{
"document": {"type": "text", "text": "John went to the restaurant with Mary. James went to the cinema."},
"template": """{"names": ["string"]}""",
"examples": [
{
"input": "Stephen is the manager at Susan's store.",
"output": """{"names": ["STEPHEN", "SUSAN"]}"""
}
],
},
]
# messages should be a list of lists for batch processing
messages = [
[
{
"role": "user",
"content": [x['document']],
}
]
for x in inputs
]
# apply chat template to each example individually
texts = [
processor.tokenizer.apply_chat_template(
messages[i], # Now this is a list containing one message
template=x['template'],
examples=x.get('examples', None),
tokenize=False,
add_generation_prompt=True)
for i, x in enumerate(inputs)
]
image_inputs = process_all_vision_info(messages, [x.get('examples') for x in inputs])
inputs = processor(
text=texts,
images=image_inputs,
padding=True,
return_tensors="pt",
).to("cuda")
generation_config = {"do_sample": False, "num_beams": 1, "max_new_tokens": 2048}
# Batch Inference
generated_ids = model.generate(**inputs, **generation_config)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_texts = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
for y in output_texts:
print(y)
# {"store_name": "WAL-MART"}
# {"store_name": "Walmart"}
# {"names": ["John", "Mary", "James"]}
# {"names": ["JOHN", "MARY", "JAMES"]}
```
</details>
<details>
<summary>Template Generation</summary>
If you want to convert existing schema files from other formats (e.g. XML, YAML, etc.) or start from an example, NuExtract 2.0 models can automatically generate a template for you.
E.g. convert XML into a NuExtract template:
```python
xml_template = """<SportResult>
<Date></Date>
<Sport></Sport>
<Venue></Venue>
<HomeTeam></HomeTeam>
<AwayTeam></AwayTeam>
<HomeScore></HomeScore>
<AwayScore></AwayScore>
<TopScorer></TopScorer>
</SportResult>"""
messages = [
{
"role": "user",
"content": [{"type": "text", "text": xml_template}],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True,
)
image_inputs = process_all_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
padding=True,
return_tensors="pt",
).to("cuda")
generation_config = {"do_sample": False, "num_beams": 1, "max_new_tokens": 2048}
generated_ids = model.generate(
**inputs,
**generation_config
)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text[0])
# {
# "Date": "date-time",
# "Sport": "verbatim-string",
# "Venue": "verbatim-string",
# "HomeTeam": "verbatim-string",
# "AwayTeam": "verbatim-string",
# "HomeScore": "integer",
# "AwayScore": "integer",
# "TopScorer": "verbatim-string"
# }
```
E.g. generate a template from natural language description:
```python
description = "I would like to extract important details from the contract."
messages = [
{
"role": "user",
"content": [{"type": "text", "text": description}],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True,
)
image_inputs = process_all_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
padding=True,
return_tensors="pt",
).to("cuda")
generation_config = {"do_sample": False, "num_beams": 1, "max_new_tokens": 2048}
generated_ids = model.generate(
**inputs,
**generation_config
)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text[0])
# {
# "Contract": {
# "Title": "verbatim-string",
# "Description": "verbatim-string",
# "Terms": [
# {
# "Term": "verbatim-string",
# "Description": "verbatim-string"
# }
# ],
# "Date": "date-time",
# "Signatory": "verbatim-string"
# }
# }
```
</details>
## Fine-Tuning
You can find a fine-tuning tutorial notebook in the [cookbooks](https://github.com/numindai/nuextract/tree/main/cookbooks) folder of the [GitHub repo](https://github.com/numindai/nuextract/tree/main).
## vLLM Deployment
Run the command below to serve an OpenAI-compatible API:
```bash
vllm serve numind/NuExtract-2.0-8B --trust_remote_code --limit-mm-per-prompt image=6 --chat-template-content-format openai
```
If you encounter memory issues, set `--max-model-len` accordingly.
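For instance (the value 8192 below is an arbitrary illustration; choose a limit that fits your hardware and documents):
```bash
vllm serve numind/NuExtract-2.0-8B --trust_remote_code --limit-mm-per-prompt image=6 \
  --chat-template-content-format openai --max-model-len 8192
```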
Send requests to the model as follows:
```python
import json
from openai import OpenAI
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
chat_response = client.chat.completions.create(
model="numind/NuExtract-2.0-8B",
temperature=0,
messages=[
{
"role": "user",
"content": [{"type": "text", "text": "Yesterday I went shopping at Bunnings"}],
},
],
extra_body={
"chat_template_kwargs": {
"template": json.dumps(json.loads("""{\"store\": \"verbatim-string\"}"""), indent=4)
},
}
)
print("Chat response:", chat_response)
```
For image inputs, structure requests as shown below. Make sure to order the images in `"content"` as they appear in the prompt (i.e. any in-context examples before the main input).
```python
import base64
def encode_image(image_path):
"""
Encode the image file to base64 string
"""
with open(image_path, "rb") as image_file:
return base64.b64encode(image_file.read()).decode('utf-8')
base64_image = encode_image("0.jpg")
base64_image2 = encode_image("1.jpg")
chat_response = client.chat.completions.create(
model="numind/NuExtract-2.0-8B",
temperature=0,
messages=[
{
"role": "user",
"content": [
{"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"}}, # first ICL example image
{"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{base64_image2}"}}, # real input image
],
},
],
extra_body={
"chat_template_kwargs": {
"template": json.dumps(json.loads("""{\"store\": \"verbatim-string\"}"""), indent=4),
"examples": [
{
"input": "<image>",
"output": """{\"store\": \"Walmart\"}"""
}
]
},
}
)
print("Chat response:", chat_response)
```
|
thailevann/track8_subtask1_PoT
|
thailevann
| 2025-08-19T17:42:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-4B-Thinking-2507-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-4B-Thinking-2507-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T17:42:06Z |
---
base_model: unsloth/Qwen3-4B-Thinking-2507-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thailevann
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B-Thinking-2507-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
yookty/blockassist-bc-whistling_exotic_chicken_1755625296
|
yookty
| 2025-08-19T17:41:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whistling exotic chicken",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:41:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling exotic chicken
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dharshaneshwaran/MultimodalDeepfakeDetector
|
Dharshaneshwaran
| 2025-08-19T17:41:22Z | 0 | 0 | null |
[
"arxiv:1604.02878",
"arxiv:2104.00298",
"arxiv:2008.06456",
"arxiv:1901.08971",
"region:us"
] | null | 2025-08-19T17:36:23Z |
# DeepSecure-AI
DeepSecure-AI is a powerful open-source tool designed to detect fake images, videos, and audio. Utilizing state-of-the-art deep learning techniques like EfficientNetV2 and MTCNN, DeepSecure-AI offers frame-by-frame video analysis, enabling high-accuracy deepfake detection. It is developed with a focus on ease of use, making it accessible to researchers, developers, and security analysts.
---
## Features
- Multimedia Detection: Detect deepfakes in images, videos, and audio files using a unified platform.
- High Accuracy: Leverages EfficientNetV2 for enhanced prediction performance and accurate results.
- Real-Time Video Analysis: Frame-by-frame analysis of videos with automatic face detection.
- User-Friendly Interface: Easy-to-use interface built with Gradio for uploading and processing media files.
- Open Source: Completely open source under the MIT license, making it available for developers to extend and improve.
---
## Demo-Data
You can test the deepfake detection capabilities of DeepSecure-AI by uploading your video files. The tool will analyze each frame of the video, detect faces, and determine the likelihood of the video being real or fake.
Examples:
1. [Video1-fake-1-ff.mp4](#)
2. [Video6-real-1-ff.mp4](#)
---
## How It Works
DeepSecure-AI uses the following architecture:
1. Face Detection:
The [MTCNN](https://arxiv.org/abs/1604.02878) model detects faces in each frame of the video. If no face is detected, it will use the previous frame's face to ensure accuracy.
2. Fake vs. Real Classification:
Once the face is detected, it's resized and fed into the [EfficientNetV2](https://arxiv.org/abs/2104.00298) deep learning model, which determines the likelihood of the frame being real or fake.
3. Fake Confidence:
A final prediction is generated as a percentage score, indicating the confidence that the media is fake.
4. Results:
DeepSecure-AI provides an output video highlighting the detected faces and a summary of whether the input is classified as real or fake. A minimal code sketch of this pipeline is shown below.
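The following sketch shows how steps 1–3 fit together. It is illustrative only: the weight file name (`efficientnetv2_deepfake.h5`), the 224×224 input size, and the single-sigmoid output are assumptions, not the project's actual configuration.
```python
import cv2
import numpy as np
import tensorflow as tf
from facenet_pytorch import MTCNN

mtcnn = MTCNN(select_largest=True, post_process=False)                  # step 1: face detector
classifier = tf.keras.models.load_model("efficientnetv2_deepfake.h5")  # step 2: classifier (assumed filename)

cap = cv2.VideoCapture("input_video.mp4")
fake_scores, last_face = [], None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    boxes, _ = mtcnn.detect(rgb)
    if boxes is not None:                        # otherwise reuse the previous frame's face
        x1, y1, x2, y2 = boxes[0].astype(int)
        last_face = rgb[max(y1, 0):y2, max(x1, 0):x2]
    if last_face is None or last_face.size == 0:
        continue
    face = cv2.resize(last_face, (224, 224)).astype(np.float32) / 255.0
    fake_scores.append(float(classifier.predict(face[None], verbose=0)[0][0]))  # per-frame fake score
cap.release()

# step 3: aggregate per-frame scores into a fake-confidence percentage
print(f"Fake confidence: {100 * np.mean(fake_scores):.1f}%")
```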
---
## Project Setup
### Prerequisites
Ensure you have the following installed:
- Python 3.10
- Gradio (pip install gradio)
- TensorFlow (pip install tensorflow)
- OpenCV (pip install opencv-python)
- PyTorch (pip install torch torchvision torchaudio)
- facenet-pytorch (pip install facenet-pytorch)
- MoviePy (pip install moviepy)
### Installation
1. Clone the repository and enter it:
   `cd DeepSecure-AI`
2. Install required dependencies:
   `pip install -r requirements.txt`
3. Download the pre-trained model weights for EfficientNetV2 and place them in the project folder.
### Running the Application
1. Launch the Gradio interface:
   `python app.py`
2. The web interface will be available locally. You can upload a video, and DeepSecure-AI will analyze and display results. A hypothetical minimal `app.py` is sketched below.
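This sketch only shows how a detector could be wired into Gradio; the repository's actual `app.py` may differ, and `analyze` here is a placeholder.
```python
import gradio as gr

def analyze(video_path):
    # Placeholder: plug the MTCNN + EfficientNetV2 pipeline in here and
    # return its fake-confidence score for the uploaded video.
    return f"Received {video_path}; run the detector to get a fake-confidence score."

demo = gr.Interface(
    fn=analyze,
    inputs=gr.Video(),
    outputs="text",
    title="DeepSecure-AI Deepfake Detector",
)
demo.launch()
```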
---
## Example Usage
Upload a video or image to DeepSecure-AI to detect fake media. Here are some sample predictions:
- Video Analysis: The tool will detect faces from each frame and classify whether the video is fake or real.
- Result Output: A GIF or MP4 file with the sequence of detected faces and classification result will be provided.
---
## Technologies Used
- TensorFlow: For building and training deep learning models.
- EfficientNetV2: The core model for image and video classification.
- MTCNN: For face detection in images and videos.
- OpenCV: For video processing and frame manipulation.
- MoviePy: For video editing and result generation.
- Gradio: To create a user-friendly interface for interacting with the deepfake detector.
---
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
---
## Contributions
Contributions are welcome! If you'd like to improve the tool, feel free to submit a pull request or raise an issue.
For more information, check the [Contribution Guidelines](CONTRIBUTING.md).
---
## References
- Li et al. (2020): [Celeb-DF(V2)](https://arxiv.org/abs/2008.06456)
- Rossler et al. (2019): [FaceForensics++](https://arxiv.org/abs/1901.08971)
- Timesler (2020): [Facial Recognition Model in PyTorch](https://www.kaggle.com/timesler/facial-recognition-model-in-pytorch)
---
### Disclaimer
DeepSecure-AI is a research project and is designed for educational purposes. Please use it responsibly and always give proper credit when utilizing the model in your work.
|
Buura/qwen-coder-1.5b-opencodeinstruct-grpo
|
Buura
| 2025-08-19T17:41:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T17:40:41Z |
---
base_model: unsloth/qwen2.5-coder-1.5b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Buura
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-coder-1.5b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Huseyin/teknofest-2025-turkish-edu
|
Huseyin
| 2025-08-19T17:38:35Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T17:32:35Z |
# 🚀 TEKNOFEST 2025 - Turkish Education Model
This model was developed for the **TEKNOFEST 2025 Action-Based Turkish Large Language Model Competition**.
## 📋 Model Information
- **Base Model:** Qwen/Qwen3-8B
- **Fine-tuning:** LoRA Adapter (Huseyin/qwen3-8b-turkish-teknofest2025-private)
- **Created:** 2025-08-19 17:37
- **Domain:** Educational Technology
- **Language:** Turkish
## 🎯 Use Cases
- Creating Turkish educational materials
- Producing content appropriate to student level
- Question-answering systems
- Summarizing educational content
- Preparing lesson plans
## 💻 Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the model and tokenizer
model_name = "Huseyin/teknofest-2025-turkish-edu"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
# Example usage (the prompt means "A creative activity suggestion for Turkish education:")
prompt = "Türkçe eğitimi için yaratıcı bir etkinlik önerisi:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=200, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## 🏆 TEKNOFEST 2025
This model was developed as part of the TEKNOFEST 2025 Turkish Large Language Model Competition.
### Competition Category
**Action-Based Turkish Large Language Model**
### Team
TEKNOFEST 2025 Competition Team
## 📊 Performance Metrics
- **Perplexity:** [To be evaluated]
- **BLEU Score:** [To be evaluated]
- **Human Evaluation:** [To be evaluated]
## 📄 License
Apache 2.0
## 🙏 Acknowledgements
We thank everyone who contributed to the development of this model.
---
*TEKNOFEST 2025 - Türkiye's Technology Festival*
|
catme0w/MolScribe-Long
|
catme0w
| 2025-08-19T17:37:36Z | 0 | 0 | null |
[
"base_model:yujieq/MolScribe",
"base_model:finetune:yujieq/MolScribe",
"license:mit",
"region:us"
] | null | 2025-08-19T04:44:52Z |
---
license: mit
base_model:
- yujieq/MolScribe
---
|
WenFengg/21_14l10__20_8
|
WenFengg
| 2025-08-19T17:37:09Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-19T17:35:18Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755624890
|
Dejiat
| 2025-08-19T17:35:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:35:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aleebaster/blockassist-bc-sly_eager_boar_1755623414
|
aleebaster
| 2025-08-19T17:34:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly eager boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:34:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnonymousCS/xlmr_norwegian_immigration3
|
AnonymousCS
| 2025-08-19T17:31:48Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T17:27:36Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_norwegian_immigration3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_norwegian_immigration3
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3159
- Accuracy: 0.9091
- 1-f1: 0.8571
- 1-recall: 0.8182
- 1-precision: 0.9
- Balanced Acc: 0.8864
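A minimal usage sketch with the 🤗 `pipeline` API (the label names and their mapping to stances are not documented in this card, so treat them as assumptions and inspect `model.config.id2label` for the real mapping):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="AnonymousCS/xlmr_norwegian_immigration3")
print(clf("Innvandring er bra for Norge."))  # example Norwegian input
# e.g. [{'label': 'LABEL_1', 'score': 0.97}] -- label semantics depend on the training config
```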
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.1747 | 1.0 | 2 | 0.3069 | 0.9091 | 0.8571 | 0.8182 | 0.9 | 0.8864 |
| 0.2707 | 2.0 | 4 | 0.2797 | 0.9091 | 0.8571 | 0.8182 | 0.9 | 0.8864 |
| 0.1527 | 3.0 | 6 | 0.2621 | 0.8788 | 0.8 | 0.7273 | 0.8889 | 0.8409 |
| 0.1886 | 4.0 | 8 | 0.2600 | 0.8788 | 0.8 | 0.7273 | 0.8889 | 0.8409 |
| 0.1 | 5.0 | 10 | 0.2818 | 0.9091 | 0.8571 | 0.8182 | 0.9 | 0.8864 |
| 0.0835 | 6.0 | 12 | 0.3159 | 0.9091 | 0.8571 | 0.8182 | 0.9 | 0.8864 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755623074
|
indoempatnol
| 2025-08-19T17:31:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:31:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
g-assismoraes/Qwen3-4B-Base-aki-alpha0.08-var-adown0.05-qQ2Q3-hatebr
|
g-assismoraes
| 2025-08-19T17:29:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T17:22:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Yanmife/gemma-2b-health-qlora
|
Yanmife
| 2025-08-19T17:27:53Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-2b",
"base_model:finetune:google/gemma-2b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T15:30:05Z |
---
base_model: google/gemma-2b
library_name: transformers
model_name: gemma-2b-health-qlora
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gemma-2b-health-qlora
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Yanmife/gemma-2b-health-qlora", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/emmy_wan-personal/Fine-tuning-Gemma-2B-health/runs/bguryohi)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1755624278
|
Vasya777
| 2025-08-19T17:25:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:25:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lilTAT/blockassist-bc-gentle_rugged_hare_1755624226
|
lilTAT
| 2025-08-19T17:24:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:24:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Pradeepgupta112233/runwayml-stable-diffusion-v1-5
|
Pradeepgupta112233
| 2025-08-19T17:24:15Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"region:us"
] |
text-to-image
| 2025-08-19T17:20:14Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/8043.jpg
text: '-'
base_model: ''
instance_prompt: SD 1.5
---
# My Project – Character LoRA
<Gallery />
## Model description
my_project
## Trigger words
You should use `SD 1.5` to trigger the image generation.
## Download model
[Download](/Pradeepgupta112233/runwayml-stable-diffusion-v1-5/tree/main) them in the Files & versions tab.
|
AnonymousCS/xlmr_finnish_immigration3
|
AnonymousCS
| 2025-08-19T17:23:49Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T17:19:59Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_finnish_immigration3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_finnish_immigration3
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0700
- Accuracy: 0.9846
- 1-f1: 0.9767
- 1-recall: 0.9767
- 1-precision: 0.9767
- Balanced Acc: 0.9826
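A minimal usage sketch with the 🤗 `pipeline` API (label names are not documented here; check `model.config.id2label` for their meaning):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="AnonymousCS/xlmr_finnish_immigration3")
print(clf("Maahanmuutto on Suomelle mahdollisuus."))  # example Finnish input
# label semantics depend on the training config
```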
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.2027 | 1.0 | 5 | 0.0699 | 0.9846 | 0.9767 | 0.9767 | 0.9767 | 0.9826 |
| 0.1827 | 2.0 | 10 | 0.0772 | 0.9769 | 0.9655 | 0.9767 | 0.9545 | 0.9769 |
| 0.0918 | 3.0 | 15 | 0.0637 | 0.9846 | 0.9767 | 0.9767 | 0.9767 | 0.9826 |
| 0.067 | 4.0 | 20 | 0.0844 | 0.9692 | 0.9545 | 0.9767 | 0.9333 | 0.9711 |
| 0.0457 | 5.0 | 25 | 0.0700 | 0.9846 | 0.9767 | 0.9767 | 0.9767 | 0.9826 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
AppliedLucent/nemo-phase5
|
AppliedLucent
| 2025-08-19T17:23:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:AppliedLucent/nemo-phase4",
"base_model:finetune:AppliedLucent/nemo-phase4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T17:10:38Z |
---
base_model: AppliedLucent/nemo-phase4
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** AppliedLucent
- **License:** apache-2.0
- **Finetuned from model:** AppliedLucent/nemo-phase4
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755622535
|
quantumxnode
| 2025-08-19T17:23:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:23:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1755622654
|
pempekmangedd
| 2025-08-19T17:23:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"patterned sturdy dolphin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:22:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- patterned sturdy dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
WenFengg/21_14l8__20_8
|
WenFengg
| 2025-08-19T17:22:15Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-19T17:20:29Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755623981
|
Dejiat
| 2025-08-19T17:20:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:20:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
unitova/blockassist-bc-zealous_sneaky_raven_1755622293
|
unitova
| 2025-08-19T17:19:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:19:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755622613
|
Sayemahsjn
| 2025-08-19T17:18:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:18:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lilTAT/blockassist-bc-gentle_rugged_hare_1755623819
|
lilTAT
| 2025-08-19T17:17:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:17:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Jacksss123/net72_uid253
|
Jacksss123
| 2025-08-19T17:17:13Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-08-19T17:13:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
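The template leaves this section blank. Based on this entry's repository tags (`transformers`, `vit`, `image-classification`), a minimal hedged sketch is shown below; the checkpoint's label set and the input image are assumptions, not details from the card:
```py
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import torch

# Repo id taken from this entry's metadata; "example.jpg" is a stand-in input.
repo_id = "Jacksss123/net72_uid253"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("example.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the top logit back to a label if the config provides one.
pred = logits.argmax(-1).item()
print(model.config.id2label.get(pred, pred))
```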
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
debesu/Mati-Bal-Mati-Mist
|
debesu
| 2025-08-19T17:16:40Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-19T16:47:24Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Mati
---
# Mati Bal Mati Mist
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Mati` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

# The LoRA weights are applied on top of the base FLUX dev model at request time.
input = {
    "prompt": "Mati",
    "lora_weights": "https://huggingface.co/debesu/Mati-Bal-Mati-Mist/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

# Each output item is a file-like object; save the generated images to disk.
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base model, then attach this LoRA on top of it.
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('debesu/Mati-Bal-Mati-Mist', weight_name='lora.safetensors')

# The trigger word `Mati` activates the LoRA's learned concept.
image = pipeline('Mati').images[0]
image.save('mati.webp')
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
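As one hedged example of weighting, continuing from the block above, the loaded LoRA can be fused into the base weights at reduced strength (the `0.8` scale here is illustrative, not a recommendation from the trainer):
```py
# Fuse the LoRA into the base model at 80% strength, then generate as usual.
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('Mati').images[0]
```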
## Training details
- Steps: 1400
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/debesu/Mati-Bal-Mati-Mist/discussions) to add images that show off what you’ve made with this LoRA.
|