| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-15 12:33:19) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 557 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-15 12:32:26) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
esi777/blockassist-bc-camouflaged_trotting_eel_1755729414
|
esi777
| 2025-08-20T22:37:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"camouflaged trotting eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T22:37:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- camouflaged trotting eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
arianaazarbal/standard_tpr_0.8-20250820_164950-policy-adapter
|
arianaazarbal
| 2025-08-20T22:36:28Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-08-20T22:35:29Z |
# Policy Model LoRA Adapter (GRPO/DPO)
Experiment: standard_tpr_0.8
Timestamp: 20250820_164950
This model was trained as part of the deception-evasion-honesty experiments.
## Model Details
- **Type**: Policy Model LoRA Adapter (GRPO/DPO)
- **Experiment Name**: standard_tpr_0.8
- **Training Timestamp**: 20250820_164950
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755727867
|
lisaozill03
| 2025-08-20T22:35:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T22:35:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
arianaazarbal/standard_tpr_0.8-20250820_164950-rm-adapter
|
arianaazarbal
| 2025-08-20T22:35:28Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T22:34:56Z |
# Reward Model LoRA Adapter
Experiment: standard_tpr_0.8
Timestamp: 20250820_164950
This model was trained as part of the deception-evasion-honesty experiments.
## Model Details
- **Type**: Reward Model LoRA Adapter
- **Experiment Name**: standard_tpr_0.8
- **Training Timestamp**: 20250820_164950
|
Leoar/blockassist-bc-pudgy_toothy_cheetah_1755729195
|
Leoar
| 2025-08-20T22:35:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pudgy toothy cheetah",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T22:35:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pudgy toothy cheetah
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1755727554
|
coelacanthxyz
| 2025-08-20T22:33:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T22:33:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
FatimahEmadEldin/Isnad-AI-Identifying-Islamic-Citation
|
FatimahEmadEldin
| 2025-08-20T22:32:06Z | 0 | 0 | null |
[
"safetensors",
"bert",
"ar",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"license:apache-2.0",
"region:us"
] | null | 2025-08-20T22:12:50Z |
---
license: apache-2.0
language:
- ar
metrics:
- f1
base_model:
- aubmindlab/bert-base-arabertv02
---
# Isnad AI: AraBERT for Ayah & Hadith Span Detection in LLM Outputs
<p align="center">
<img src="https://placehold.co/800x200/dbeafe/3b82f6?text=Isnad+AI+-+Islamic+Citation+Detection" alt="Isnad AI - Islamic Citation Detection">
</p>
This repository contains the official fine-tuned model for the **Isnad AI** system, our submission to the **[IslamicEval 2025 Shared Task 1A](https://sites.google.com/view/islamiceval-2025)**. The model identifies character-level spans of Quranic verses (Ayahs) and Prophetic sayings (Hadiths) within text generated by Large Language Models (LLMs).
#### By: [Fatimah Emad Eldin](https://scholar.google.com/citations?user=CfX6eA8AAAAJ&hl=ar)
#### *Cairo University*
[Codabench Competition](https://www.codabench.org/competitions/9820/)
[GitHub Repository](https://github.com/astral-fate/IslamicEval)
[Hugging Face Collection](https://huggingface.co/collections/FatimahEmadEldin/)
[License](https://github.com/astral-fate/IslamicEval/blob/main/LICENSE)
---
## 📖 Model Description
This model fine-tunes **AraBERTv0.2** (`aubmindlab/bert-base-arabertv02`) on a specialized token classification task. Its purpose is to label tokens within a given Arabic text according to the BIO scheme:
* `B-Ayah` (Beginning of a Quranic verse)
* `I-Ayah` (Inside a Quranic verse)
* `B-Hadith` (Beginning of a Prophetic saying)
* `I-Hadith` (Inside a Prophetic saying)
* `O` (Outside of any religious citation)
The key innovation behind this model is a **novel rule-based data generation pipeline** that programmatically creates a large-scale, high-quality training corpus from authentic religious texts, completely eliminating the need for manual annotation. This method proved highly effective, enabling the model to learn the contextual patterns of how LLMs cite Islamic sources.
---
## 🚀 How to Use
You can easily use this model with the `transformers` library pipeline for `token-classification` (or `ner`). For best results, use `aggregation_strategy="simple"` to group token pieces into coherent entities.
```python
from transformers import pipeline
# Load the token classification pipeline
model_id = "FatimahEmadEldin/Isnad-AI-Identifying-Islamic-Citation"
islamic_ner = pipeline(
"token-classification",
model=model_id,
aggregation_strategy="simple"
)
# Example text from an LLM response. Translation: "Religion shows us the importance
# of truthfulness. In the noble Hadith we find that the Prophet said: 'Adhere to
# truthfulness.' And God revealed in His Holy Book: 'O you who believe, fear God
# and be with the truthful.'"
text = "يوضح لنا الدين أهمية الصدق، ففي الحديث الشريف نجد أن النبي قال: عليكم بالصدق. كما أنزل الله في كتابه الكريم: يا أيها الذين آمنوا اتقوا الله وكونوا مع الصادقين."
# Get the identified spans
results = islamic_ner(text)
# Print the results
for entity in results:
print(f"Entity: {entity['word']}")
print(f"Label: {entity['entity_group']}")
print(f"Score: {entity['score']:.4f}\n")
# Expected output:
# Entity: عليكم بالصدق  ("Adhere to truthfulness")
# Label: Hadith
# Score: 0.9876
# Entity: يا أيها الذين آمنوا اتقوا الله وكونوا مع الصادقين  ("O you who believe, fear God and be with the truthful")
# Label: Ayah
# Score: 0.9912
```
-----
## ⚙️ Training Procedure
### Data Generation
The model was trained exclusively on a synthetically generated dataset to overcome the lack of manually annotated data for this specific task. The pipeline involved several stages:
1. **Data Sourcing**: Authentic texts were sourced from `quran.json` (containing all Quranic verses) and a JSON file of the Six Major Hadith Collections.
2. **Text Preprocessing**: Long Ayahs were split into smaller segments to prevent sequence truncation, and data was augmented by creating versions with and without Arabic diacritics (Tashkeel).
3. **Template-Based Generation**: Each religious text was embedded into realistic contextual templates using a curated list of common prefixes (e.g., "قال الله تعالى:", "God Almighty said:") and suffixes (e.g., "صدق الله العظيم", "God Almighty has spoken the truth"). Noise was also injected by adding neutral sentences to better simulate LLM outputs; a minimal sketch of this step follows below.
### Fine-Tuning
The `aubmindlab/bert-base-arabertv02` model was fine-tuned with the following key hyperparameters (sketched as `TrainingArguments` below):
* **Learning Rate**: `2e-5`
* **Epochs**: 10 (with early stopping patience of 3)
* **Effective Batch Size**: 16
* **Optimizer**: AdamW
* **Mixed Precision**: fp16 enabled
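A hedged sketch of how these hyperparameters map onto `transformers.TrainingArguments`; the official training script is not published, `eval_strategy` assumes transformers >= 4.41, and the output directory name is a placeholder.
```python
from transformers import TrainingArguments, EarlyStoppingCallback

args = TrainingArguments(
    output_dir="isnad-arabert",          # placeholder name
    learning_rate=2e-5,
    num_train_epochs=10,
    per_device_train_batch_size=16,      # effective batch size 16
    fp16=True,                           # mixed precision
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,         # required for early stopping
    metric_for_best_model="f1",
)
# Early stopping with patience 3, passed to the Trainer's callbacks.
early_stopping = EarlyStoppingCallback(early_stopping_patience=3)
```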
-----
## 📊 Evaluation Results
The model was evaluated using the official character-level Macro F1-Score metric for the IslamicEval 2025 shared task.
### Official Test Set Results
The system achieved a **final F1-score of 66.97%** on the blind test set, demonstrating the effectiveness of the rule-based data generation approach.
| Methodology | Test F1 Score |
| :--- | :---: |
| **Isnad AI (Rule-Based Model)** | **66.97%** |
| Generative Data (Ablation) | 50.50% |
| Database Lookup (Ablation) | 34.80% |
### 🌟 Highlight: Development Set Performance
A detailed evaluation on the manually annotated development set provided by the organizers shows strong, balanced performance.
**Final Macro F1-Score on Dev Set: 65.08%**
#### Per-Class Performance (Character-Level)
| Class | Precision | Recall | F1-Score |
|:---|:---:|:---:|:---:|
| 🟢 **Neither** | 0.8423 | 0.9688 | 0.9011 |
| 🔵 **Ayah** | 0.8326 | 0.5574 | 0.6678 |
| 🟡 **Hadith** | 0.4750 | 0.3333 | 0.3917 |
| **Overall** | **0.7166** | **0.6198** | **0.6535** |
*(These results are from the official `scoring.py` script run on the development set).*
-----
## ⚠️ Limitations and Bias
* **Performance on Hadith**: The model's primary challenge is identifying Hadith texts, which have significantly more linguistic and structural variety than Quranic verses. The F1-score for the `Hadith` class is lower than for `Ayah`, indicating it may miss or misclassify some prophetic sayings.
* **Template Dependency**: The model's knowledge is based on the rule-based templates used for training. It may be less effective at identifying citations that appear in highly novel or unconventional contexts not represented in the training data.
* **Scope**: This model identifies **intended** citations, as per the shared task rules. It does **not** verify the authenticity or correctness of the citation itself. An LLM could generate a completely fabricated verse, and this model would still identify it if it is presented like a real one.
-----
## ✍️ Citation
If you use this model or the methodology in your research, please cite the paper:
```bibtex
Coming soon
```
|
esi777/blockassist-bc-camouflaged_trotting_eel_1755728923
|
esi777
| 2025-08-20T22:29:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"camouflaged trotting eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T22:29:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- camouflaged trotting eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755727278
|
mang3dd
| 2025-08-20T22:27:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T22:27:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
java22dev/llama3-lora-turkish-F16-GGUF
|
java22dev
| 2025-08-20T22:25:42Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"unsloth",
"llama-cpp",
"gguf-my-lora",
"tr",
"base_model:Yudum/llama3-lora-turkish",
"base_model:quantized:Yudum/llama3-lora-turkish",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T22:25:40Z |
---
base_model: Yudum/llama3-lora-turkish
language:
- tr
library_name: transformers
tags:
- unsloth
- llama-cpp
- gguf-my-lora
---
# java22dev/llama3-lora-turkish-F16-GGUF
This LoRA adapter was converted to GGUF format from [`Yudum/llama3-lora-turkish`](https://huggingface.co/Yudum/llama3-lora-turkish) via the ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/Yudum/llama3-lora-turkish) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora llama3-lora-turkish-f16.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora llama3-lora-turkish-f16.gguf (...other args)
```
For more on LoRA usage with the llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
|
Muapi/wet-plate-collodion
|
Muapi
| 2025-08-20T22:23:12Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T22:23:01Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Wet Plate Collodion

**Base model**: Flux.1 D
**Trained words**: wetplatecollodion, border, swirls, washed out, textured, smears, scratches, folds
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:101842@1210076", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
roeker/blockassist-bc-quick_wiry_owl_1755728493
|
roeker
| 2025-08-20T22:22:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T22:22:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/buns-magic-the-gathering-loras-flux-dev-pony-mtg
|
Muapi
| 2025-08-20T22:22:51Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T22:22:10Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Buns' Magic The Gathering LoRAs [Flux Dev] [Pony] [MtG]

**Base model**: Flux.1 D
**Trained words**: m4th3g4
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:598734@854505", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
backt/nasdxlv100
|
backt
| 2025-08-20T22:22:05Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-20T22:12:23Z |
---
license: apache-2.0
---
|
lautan/blockassist-bc-gentle_patterned_goat_1755726959
|
lautan
| 2025-08-20T22:21:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle patterned goat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T22:21:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle patterned goat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnonymousCS/xlmr_immigration_combo27_0
|
AnonymousCS
| 2025-08-20T22:20:39Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T22:16:32Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo27_0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_immigration_combo27_0
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2796
- Accuracy: 0.9036
- 1-f1: 0.8593
- 1-recall: 0.8842
- 1-precision: 0.8358
- Balanced Acc: 0.8987
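Since the auto-generated card omits a usage example, here is a hedged sketch assuming the checkpoint loads as a standard sequence-classification model; the label names depend on what the training script stored, and the example sentence is arbitrary.
```python
from transformers import pipeline

# Load the fine-tuned classifier directly from the Hub.
clf = pipeline("text-classification", model="AnonymousCS/xlmr_immigration_combo27_0")
print(clf("The government announced a new immigration policy today."))
```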
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: adamw_torch_fused with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.6197 | 1.0 | 25 | 0.6021 | 0.6671 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.2403 | 2.0 | 50 | 0.2640 | 0.9113 | 0.8634 | 0.8417 | 0.8862 | 0.8939 |
| 0.2432 | 3.0 | 75 | 0.2184 | 0.9152 | 0.8685 | 0.8417 | 0.8971 | 0.8968 |
| 0.3089 | 4.0 | 100 | 0.2378 | 0.9100 | 0.8638 | 0.8571 | 0.8706 | 0.8968 |
| 0.2386 | 5.0 | 125 | 0.2796 | 0.9036 | 0.8593 | 0.8842 | 0.8358 | 0.8987 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755726656
|
katanyasekolah
| 2025-08-20T22:20:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T22:20:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
busyyy/blockassist-bc-bipedal_deadly_dinosaur_1755726597
|
busyyy
| 2025-08-20T22:19:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bipedal deadly dinosaur",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T22:18:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bipedal deadly dinosaur
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
8septiadi8/blockassist-bc-curious_lightfooted_mouse_1755728313
|
8septiadi8
| 2025-08-20T22:19:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"curious lightfooted mouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T22:19:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- curious lightfooted mouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/rough-water-colors
|
Muapi
| 2025-08-20T22:19:17Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T22:18:59Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Rough Water Colors

**Base model**: Flux.1 D
**Trained words**:
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1457421@1648000", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
rvipitkirubbe/blockassist-bc-mottled_foraging_ape_1755726734
|
rvipitkirubbe
| 2025-08-20T22:17:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mottled foraging ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T22:17:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mottled foraging ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/butterfly-lighting-style-from-above-xl-f1d
|
Muapi
| 2025-08-20T22:16:45Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T22:16:33Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Butterfly Lighting style (from above) XL + F1D

**Base model**: Flux.1 D
**Trained words**: cinematic, overhead, light from above, light, photographic
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:381960@1374100", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/super-panavision-70-cinematic-vintage-film-style-xl-f1d
|
Muapi
| 2025-08-20T22:15:57Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T22:15:35Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Super Panavision 70 Cinematic Vintage Film style XL + F1D

**Base model**: Flux.1 D
**Trained words**: In Super Panavision 70 Technicolor Film style
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:812212@931164", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
sekirr22/blockassist-bc-furry_rugged_camel_1755727776
|
sekirr22
| 2025-08-20T22:15:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"furry rugged camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T22:15:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- furry rugged camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755726582
|
ihsanridzi
| 2025-08-20T22:15:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T22:15:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MajorJalud/blockassist-bc-fast_bristly_sardine_1755727961
|
MajorJalud
| 2025-08-20T22:14:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fast bristly sardine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T22:14:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fast bristly sardine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/synthesia
|
Muapi
| 2025-08-20T22:14:15Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T22:13:56Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Synthesia

**Base model**: Flux.1 D
**Trained words**: synthesia
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1195597@1346178", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
8septiadi8/blockassist-bc-curious_lightfooted_mouse_1755727948
|
8septiadi8
| 2025-08-20T22:13:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"curious lightfooted mouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T22:13:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- curious lightfooted mouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
abcorrea/p3-v2
|
abcorrea
| 2025-08-20T22:13:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:abcorrea/p3-v1",
"base_model:finetune:abcorrea/p3-v1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T22:05:10Z |
---
base_model: abcorrea/p3-v1
library_name: transformers
model_name: p3-v2
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for p3-v2
This model is a fine-tuned version of [abcorrea/p3-v1](https://huggingface.co/abcorrea/p3-v1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="abcorrea/p3-v2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.52.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Muapi/ethereal
|
Muapi
| 2025-08-20T22:12:53Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T22:12:40Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Ethereal

**Base model**: Flux.1 D
**Trained words**:
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1016450@1139626", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/sunstone-style-illustrious-flux
|
Muapi
| 2025-08-20T22:11:04Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T22:10:53Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Sunstone Style [Illustrious/Flux]

**Base model**: Flux.1 D
**Trained words**: sunst0n3
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:948991@1062481", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
mradermacher/teknofest-2025-turkish-edu-v2-GGUF
|
mradermacher
| 2025-08-20T22:08:51Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"turkish",
"education",
"teknofest-2025",
"qwen",
"text-generation",
"lora",
"tr",
"dataset:Huseyin/final2",
"base_model:Huseyin/teknofest-2025-turkish-edu-v2",
"base_model:adapter:Huseyin/teknofest-2025-turkish-edu-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-20T17:36:48Z |
---
base_model: Huseyin/teknofest-2025-turkish-edu-v2
datasets:
- Huseyin/final2
language: tr
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- turkish
- education
- teknofest-2025
- qwen
- text-generation
- lora
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Huseyin/teknofest-2025-turkish-edu-v2
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#teknofest-2025-turkish-edu-v2-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/teknofest-2025-turkish-edu-v2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
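As a concrete starting point, here is a hedged sketch using `huggingface_hub` and the `llama-cpp-python` bindings with the Q4_K_M file from the table below; the prompt and context length are placeholders.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant file from this repo (filename taken from the table below).
path = hf_hub_download(
    repo_id="mradermacher/teknofest-2025-turkish-edu-v2-GGUF",
    filename="teknofest-2025-turkish-edu-v2.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)  # context length is an arbitrary choice
out = llm("Soru: Türkiye'nin başkenti neresidir?\nCevap:", max_tokens=64)
print(out["choices"][0]["text"])
```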
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/teknofest-2025-turkish-edu-v2-GGUF/resolve/main/teknofest-2025-turkish-edu-v2.Q2_K.gguf) | Q2_K | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/teknofest-2025-turkish-edu-v2-GGUF/resolve/main/teknofest-2025-turkish-edu-v2.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/teknofest-2025-turkish-edu-v2-GGUF/resolve/main/teknofest-2025-turkish-edu-v2.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/teknofest-2025-turkish-edu-v2-GGUF/resolve/main/teknofest-2025-turkish-edu-v2.Q3_K_L.gguf) | Q3_K_L | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/teknofest-2025-turkish-edu-v2-GGUF/resolve/main/teknofest-2025-turkish-edu-v2.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/teknofest-2025-turkish-edu-v2-GGUF/resolve/main/teknofest-2025-turkish-edu-v2.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/teknofest-2025-turkish-edu-v2-GGUF/resolve/main/teknofest-2025-turkish-edu-v2.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/teknofest-2025-turkish-edu-v2-GGUF/resolve/main/teknofest-2025-turkish-edu-v2.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/teknofest-2025-turkish-edu-v2-GGUF/resolve/main/teknofest-2025-turkish-edu-v2.Q5_K_M.gguf) | Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/teknofest-2025-turkish-edu-v2-GGUF/resolve/main/teknofest-2025-turkish-edu-v2.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/teknofest-2025-turkish-edu-v2-GGUF/resolve/main/teknofest-2025-turkish-edu-v2.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/teknofest-2025-turkish-edu-v2-GGUF/resolve/main/teknofest-2025-turkish-edu-v2.f16.gguf) | f16 | 16.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
marcovise/TextEmbedding3SmallSentimentHead
|
marcovise
| 2025-08-20T22:08:34Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"sentiment-head",
"feature-extraction",
"sentiment-analysis",
"text-classification",
"openai-embeddings",
"custom_code",
"license:mit",
"region:us"
] |
text-classification
| 2025-08-20T21:50:00Z |
---
license: mit
tags:
- sentiment-analysis
- text-classification
- openai-embeddings
- pytorch
pipeline_tag: text-classification
library_name: transformers
---
# TextEmbedding3SmallSentimentHead
A sentiment-analysis classifier head that sits on top of embeddings from OpenAI's `text-embedding-3-small` model.
## Model Description
- **What this is**: A compact PyTorch classifier head trained on top of `text-embedding-3-small` (1536-dim) to predict sentiment: negative, neutral, positive.
- **Data**: Preprocessed from the [Kaggle Sentiment Analysis Dataset](https://www.kaggle.com/datasets/abhi8923shriv/sentiment-analysis-dataset).
- **Metrics (val)**: **F1 macro ≈ 0.89**, **Accuracy ≈ 0.89** on a held-out validation split.
- **Architecture**: Simple MLP head (256 hidden units, dropout 0.2), trained for 5 epochs with Adam.
## Input/Output
- **Input**: Float32 tensor of shape `[batch, 1536]` (OpenAI text-embedding-3-small embeddings).
- **Output**: Logits over 3 classes. Argmax → {0: negative, 1: neutral, 2: positive}.
## Usage
```python
from transformers import AutoModel
import torch
# Load model
model = AutoModel.from_pretrained(
"marcovise/TextEmbedding3SmallSentimentHead",
trust_remote_code=True
).eval()
# Your 1536-dim OpenAI embeddings
embeddings = torch.randn(4, 1536) # batch of 4 examples
# Predict sentiment
with torch.no_grad():
logits = model(inputs_embeds=embeddings)["logits"] # [batch, 3]
predictions = logits.argmax(dim=1) # [batch]
# 0=negative, 1=neutral, 2=positive
print(predictions) # tensor([1, 0, 2, 1])
```
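To drive the model with real inputs instead of random tensors, embeddings can be fetched with the official OpenAI Python client (v1+). This is a hedged sketch: it assumes `OPENAI_API_KEY` is set in the environment, and the example sentences are arbitrary.
```python
from openai import OpenAI
import torch

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.embeddings.create(
    model="text-embedding-3-small",
    input=["I love this product!", "This was a terrible experience."],
)
# Each item in resp.data carries a 1536-dim embedding.
embeddings = torch.tensor([d.embedding for d in resp.data])  # shape [2, 1536]
```
The resulting tensor plugs directly into the `inputs_embeds` argument shown above.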
## Training Details
- **Training data**: Kaggle Sentiment Analysis Dataset
- **Preprocessing**: Text → OpenAI embeddings → 3-class labels {negative: 0.0, neutral: 0.5, positive: 1.0}
- **Architecture**: 1536 → 256 → ReLU → Dropout(0.2) → 3 classes
- **Optimizer**: Adam (lr=1e-3, weight_decay=1e-4)
- **Loss**: CrossEntropyLoss with label smoothing (0.05)
- **Epochs**: 5
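For reference, the architecture described above corresponds roughly to the following PyTorch module; this is a hedged reconstruction from the card's bullet points, not the exact training code.
```python
import torch
import torch.nn as nn

# 1536 -> 256 -> ReLU -> Dropout(0.2) -> 3 classes, per the card's description.
head = nn.Sequential(
    nn.Linear(1536, 256),
    nn.ReLU(),
    nn.Dropout(0.2),
    nn.Linear(256, 3),
)

logits = head(torch.randn(4, 1536))  # [4, 3]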
## Intended Use
- Quick, lightweight sentiment classification for short text once embeddings are available.
- Works well for general sentiment analysis tasks similar to the training distribution.
## Limitations
- Trained on a specific sentiment dataset; may have domain bias.
- Requires OpenAI text-embedding-3-small embeddings as input.
- Not safety-critical; evaluate before production use.
- May reflect biases present in the training data.
## License
MIT
|
esi777/blockassist-bc-camouflaged_trotting_eel_1755727659
|
esi777
| 2025-08-20T22:08:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"camouflaged trotting eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T22:08:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- camouflaged trotting eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/flux-engrave-lora
|
Muapi
| 2025-08-20T22:08:08Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T22:07:53Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Flux Engrave LoRA

**Base model**: Flux.1 D
**Trained words**: NGRVNG
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1048150@1176040", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
rbelanec/train_svamp_1755694510
|
rbelanec
| 2025-08-20T22:07:39Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"prefix-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-08-20T22:01:59Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prefix-tuning
- generated_from_trainer
model-index:
- name: train_svamp_1755694510
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_svamp_1755694510
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the svamp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1778
- Num Input Tokens Seen: 676320
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 123
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
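As a hedged illustration of what a PEFT prefix-tuning setup over this base model looks like (the card does not report `num_virtual_tokens` or other adapter settings, so the values below are placeholders):
```python
from transformers import AutoModelForCausalLM
from peft import PrefixTuningConfig, TaskType, get_peft_model

# Placeholder adapter settings; the values used for this run are not reported.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
config = PrefixTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=20)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the prefix parameters are trainable
```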
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| 0.5526 | 0.5016 | 158 | 0.7046 | 34176 |
| 0.2434 | 1.0032 | 316 | 0.2998 | 67872 |
| 0.0913 | 1.5048 | 474 | 0.1424 | 101696 |
| 0.0227 | 2.0063 | 632 | 0.1410 | 135776 |
| 0.0576 | 2.5079 | 790 | 0.1447 | 169712 |
| 0.0193 | 3.0095 | 948 | 0.1086 | 203712 |
| 0.1033 | 3.5111 | 1106 | 0.1210 | 237664 |
| 0.0019 | 4.0127 | 1264 | 0.1067 | 271472 |
| 0.079 | 4.5143 | 1422 | 0.1393 | 305088 |
| 0.0025 | 5.0159 | 1580 | 0.1451 | 339264 |
| 0.0008 | 5.5175 | 1738 | 0.1677 | 373488 |
| 0.0053 | 6.0190 | 1896 | 0.1908 | 407264 |
| 0.0004 | 6.5206 | 2054 | 0.1609 | 441200 |
| 0.0001 | 7.0222 | 2212 | 0.1493 | 475008 |
| 0.0001 | 7.5238 | 2370 | 0.1729 | 508832 |
| 0.0001 | 8.0254 | 2528 | 0.1765 | 542720 |
| 0.0 | 8.5270 | 2686 | 0.1798 | 576512 |
| 0.0 | 9.0286 | 2844 | 0.1791 | 610688 |
| 0.0 | 9.5302 | 3002 | 0.1781 | 644848 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
roeker/blockassist-bc-quick_wiry_owl_1755727573
|
roeker
| 2025-08-20T22:07:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T22:06:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/tilt-shift-photography-style-xl-f1d
|
Muapi
| 2025-08-20T22:07:20Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T22:06:47Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Tilt shift photography style XL + F1D

**Base model**: Flux.1 D
**Trained words**: Tilt-shift photography style
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:541692@1105979", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
8septiadi8/blockassist-bc-curious_lightfooted_mouse_1755727562
|
8septiadi8
| 2025-08-20T22:07:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"curious lightfooted mouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T22:06:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- curious lightfooted mouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NoemaResearch/Nous-1-2B
|
NoemaResearch
| 2025-08-20T22:05:25Z | 353 | 2 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"fr",
"pt",
"de",
"ro",
"sv",
"da",
"bg",
"ru",
"cs",
"el",
"uk",
"es",
"nl",
"sk",
"hr",
"pl",
"lt",
"nb",
"nn",
"fa",
"sl",
"gu",
"lv",
"it",
"oc",
"ne",
"mr",
"be",
"sr",
"lb",
"vec",
"as",
"cy",
"szl",
"ast",
"hne",
"awa",
"mai",
"bho",
"sd",
"ga",
"fo",
"hi",
"pa",
"bn",
"or",
"tg",
"yi",
"lmo",
"lij",
"scn",
"fur",
"sc",
"gl",
"ca",
"is",
"sq",
"li",
"prs",
"af",
"mk",
"si",
"ur",
"mag",
"bs",
"hy",
"zh",
"yue",
"my",
"ar",
"he",
"mt",
"id",
"ms",
"tl",
"ceb",
"jv",
"su",
"min",
"ban",
"pag",
"ilo",
"war",
"ta",
"te",
"kn",
"ml",
"tr",
"az",
"uz",
"kk",
"ba",
"tt",
"th",
"lo",
"fi",
"et",
"hu",
"vi",
"km",
"ja",
"ko",
"ka",
"eu",
"ht",
"pap",
"kea",
"tpi",
"sw",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-18T03:04:38Z |
---
base_model:
- Qwen/Qwen3-1.7B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: other
license_name: anvdl-1.0
license_link: https://huggingface.co/apexion-ai/Nous-V1-8B/blob/main/LICENSE.md
language:
- en
- fr
- pt
- de
- ro
- sv
- da
- bg
- ru
- cs
- el
- uk
- es
- nl
- sk
- hr
- pl
- lt
- nb
- nn
- fa
- sl
- gu
- lv
- it
- oc
- ne
- mr
- be
- sr
- lb
- vec
- as
- cy
- szl
- ast
- hne
- awa
- mai
- bho
- sd
- ga
- fo
- hi
- pa
- bn
- or
- tg
- yi
- lmo
- lij
- scn
- fur
- sc
- gl
- ca
- is
- sq
- li
- prs
- af
- mk
- si
- ur
- mag
- bs
- hy
- zh
- yue
- my
- ar
- he
- mt
- id
- ms
- tl
- ceb
- jv
- su
- min
- ban
- pag
- ilo
- war
- ta
- te
- kn
- ml
- tr
- az
- uz
- kk
- ba
- tt
- th
- lo
- fi
- et
- hu
- vi
- km
- ja
- ko
- ka
- eu
- ht
- pap
- kea
- tpi
- sw
---

# Nous-V1 2B
## Overview
**Nous-V1 2B** is a cutting-edge 2 billion parameter language model developed by Apexion AI, based on the architecture of [Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B). Designed for versatility across diverse NLP tasks, Nous-V1 2B delivers strong performance in conversational AI, knowledge reasoning, code generation, and content creation.
**Key Features:**
- **⚡ Efficient 2B Parameter Scale:** Balances model capability with practical deployment on modern hardware
- **🧠 Enhanced Contextual Understanding:** Supports a 128k-token context window, enabling complex multi-turn conversations and document analysis
- **🌍 Multilingual & Multi-domain:** Trained on a diverse dataset for broad language and domain coverage
- **🤖 Instruction-Following & Adaptability:** Fine-tuned to respond accurately and adaptively across tasks
- **🚀 Optimized Inference:** Suitable for GPU environments such as NVIDIA A100, T4, and P100 for low-latency applications
---
## Why Choose Nous-V1 2B?
While larger models can offer more raw power, Nous-V1 2B strikes a practical balance: optimized for deployment efficiency without significant compromise on language understanding or generation quality. It's ideal for applications requiring:
- Real-time conversational agents
- Code completion and programming assistance
- Content generation and summarization
- Multilingual natural language understanding
---
## 🖥️ How to Run Locally
You can easily integrate Nous-V1 2B via the Hugging Face Transformers library or deploy it on popular serving platforms.
### Using Hugging Face Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "apexion-ai/Nous-1-2B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
### Deployment Options
- Compatible with [vLLM](https://github.com/vllm-project/vllm) for efficient serving
- Works with [llama.cpp](https://github.com/ggerganov/llama.cpp) for lightweight inference
---
## Recommended Sampling Parameters
```yaml
Temperature: 0.7
Top-p: 0.9
Top-k: 40
Min-p: 0.0
```
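A hedged sketch of applying these parameters to the `model.generate` call from the snippet above; `min_p` requires a recent `transformers` release, and `max_new_tokens` here is an arbitrary choice.
```python
# Continues from the "How to Run Locally" snippet: model and model_inputs
# are defined there.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024,
    do_sample=True,     # sampling must be enabled for these parameters to apply
    temperature=0.7,
    top_p=0.9,
    top_k=40,
    min_p=0.0,
)
```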
---
## FAQ
- **Q:** Can I fine-tune Nous-V1 2B on my custom data?
**A:** Yes, the model supports fine-tuning workflows via Hugging Face Trainer or custom scripts.
- **Q:** What hardware is recommended?
**A:** NVIDIA GPUs with at least 16GB VRAM (e.g., A100, 3090) are optimal for inference and fine-tuning.
- **Q:** Is the model safe to use for production?
**A:** Nous-V1 2B includes safety mitigations but should be used with human oversight and proper filtering for sensitive content.
---
## 📖 Citation
```bibtex
@misc{apexion2025nousv12b,
title={Nous-V1 2B: Efficient Large Language Model for Versatile NLP Applications},
author={Apexion AI Team},
year={2025},
url={https://huggingface.co/apexion-ai/Nous-V1-2B}
}
```
---
*Nous-V1 2B: Powering practical AI applications with intelligent language understanding.*
|
NoemaResearch/Nous-1-4B
|
NoemaResearch
| 2025-08-20T22:05:02Z | 96 | 3 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"fr",
"pt",
"de",
"ro",
"sv",
"da",
"bg",
"ru",
"cs",
"el",
"uk",
"es",
"nl",
"sk",
"hr",
"pl",
"lt",
"nb",
"nn",
"fa",
"sl",
"gu",
"lv",
"it",
"oc",
"ne",
"mr",
"be",
"sr",
"lb",
"vec",
"as",
"cy",
"szl",
"ast",
"hne",
"awa",
"mai",
"bho",
"sd",
"ga",
"fo",
"hi",
"pa",
"bn",
"or",
"tg",
"yi",
"lmo",
"lij",
"scn",
"fur",
"sc",
"gl",
"ca",
"is",
"sq",
"li",
"prs",
"af",
"mk",
"si",
"ur",
"mag",
"bs",
"hy",
"zh",
"yue",
"my",
"ar",
"he",
"mt",
"id",
"ms",
"tl",
"ceb",
"jv",
"su",
"min",
"ban",
"pag",
"ilo",
"war",
"ta",
"te",
"kn",
"ml",
"tr",
"az",
"uz",
"kk",
"ba",
"tt",
"th",
"lo",
"fi",
"et",
"hu",
"vi",
"km",
"ja",
"ko",
"ka",
"eu",
"ht",
"pap",
"kea",
"tpi",
"sw",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-17T05:12:08Z |
---
base_model:
- Qwen/Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: other
license_name: anvdl-1.0
license_link: https://huggingface.co/apexion-ai/Nous-V1-8B/blob/main/LICENSE.md
language:
- en
- fr
- pt
- de
- ro
- sv
- da
- bg
- ru
- cs
- el
- uk
- es
- nl
- sk
- hr
- pl
- lt
- nb
- nn
- fa
- sl
- gu
- lv
- it
- oc
- ne
- mr
- be
- sr
- lb
- vec
- as
- cy
- szl
- ast
- hne
- awa
- mai
- bho
- sd
- ga
- fo
- hi
- pa
- bn
- or
- tg
- yi
- lmo
- lij
- scn
- fur
- sc
- gl
- ca
- is
- sq
- li
- prs
- af
- mk
- si
- ur
- mag
- bs
- hy
- zh
- yue
- my
- ar
- he
- mt
- id
- ms
- tl
- ceb
- jv
- su
- min
- ban
- pag
- ilo
- war
- ta
- te
- kn
- ml
- tr
- az
- uz
- kk
- ba
- tt
- th
- lo
- fi
- et
- hu
- vi
- km
- ja
- ko
- ka
- eu
- ht
- pap
- kea
- tpi
- sw
---

# Nous-V1 4B
## Overview
**Nous-V1 4B** is a cutting-edge 4 billion parameter language model developed by Apexion AI, based on the architecture of [Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B). Designed for versatility across diverse NLP tasks, Nous-V1 4B delivers strong performance in conversational AI, knowledge reasoning, code generation, and content creation.
**Key Features:**
- **⚡ Efficient 4B Parameter Scale:** Balances model capability with practical deployment on modern hardware
- **🧠 Enhanced Contextual Understanding:** Supports a 128k-token context window, enabling complex multi-turn conversations and document analysis
- **🌍 Multilingual & Multi-domain:** Trained on a diverse dataset for broad language and domain coverage
- **🤖 Instruction-Following & Adaptability:** Fine-tuned to respond accurately and adaptively across tasks
- **🚀 Optimized Inference:** Suitable for GPU environments such as NVIDIA A100, T4, and P100 for low-latency applications
---
## Why Choose Nous-V1 4B?
While larger models can offer more raw power, Nous-V1 4B strikes a practical balance: optimized for deployment efficiency without significant compromise on language understanding or generation quality. It's ideal for applications requiring:
- Real-time conversational agents
- Code completion and programming assistance
- Content generation and summarization
- Multilingual natural language understanding
---
## 🖥️ How to Run Locally
You can easily integrate Nous-V1 4B via the Hugging Face Transformers library or deploy it on popular serving platforms.
### Using Hugging Face Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "apexion-ai/Nous-1-4B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
### Deployment Options
- Compatible with [vLLM](https://github.com/vllm-project/vllm) for efficient serving
- Works with [llama.cpp](https://github.com/ggerganov/llama.cpp) for lightweight inference
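A hedged sketch of offline inference with vLLM, reusing the recommended sampling parameters from the next section; the repo id follows the card's own snippet and may require adjustment.
```python
from vllm import LLM, SamplingParams

llm = LLM(model="apexion-ai/Nous-1-4B")  # repo id as used in the card's snippet
params = SamplingParams(temperature=0.7, top_p=0.9, top_k=40, min_p=0.0)
outputs = llm.generate(["Give me a short introduction to large language models."], params)
print(outputs[0].outputs[0].text)
```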
---
## Recommended Sampling Parameters
```yaml
Temperature: 0.7
Top-p: 0.9
Top-k: 40
Min-p: 0.0
```
---
## FAQ
- **Q:** Can I fine-tune Nous-V1 4B on my custom data?
**A:** Yes, the model supports fine-tuning workflows via Hugging Face Trainer or custom scripts.
- **Q:** What hardware is recommended?
**A:** NVIDIA GPUs with at least 16GB VRAM (e.g., A100, 3090) are optimal for inference and fine-tuning.
- **Q:** Is the model safe to use for production?
**A:** Nous-V1 4B includes safety mitigations but should be used with human oversight and proper filtering for sensitive content.
---
## 📖 Citation
```bibtex
@misc{apexion2025nousv14b,
title={Nous-V1 4B: Efficient Large Language Model for Versatile NLP Applications},
author={Apexion AI Team},
year={2025},
url={https://huggingface.co/apexion-ai/Nous-V1-4B}
}
```
---
*Nous-V1 4B: Powering practical AI applications with intelligent language understanding.*
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755725854
|
helmutsukocok
| 2025-08-20T22:04:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T22:04:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koloni/blockassist-bc-deadly_graceful_stingray_1755725959
|
koloni
| 2025-08-20T22:04:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T22:04:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755725736
|
calegpedia
| 2025-08-20T22:03:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stealthy slimy rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T22:02:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lilTAT/blockassist-bc-gentle_rugged_hare_1755727300
|
lilTAT
| 2025-08-20T22:02:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T22:02:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/randommaxx-gothic-niji
|
Muapi
| 2025-08-20T22:02:13Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T22:01:57Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# RandomMaxx Gothic Niji

**Base model**: Flux.1 D
**Trained words**: niji, gothic, erotic, anime
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1349907@1524754", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
xiaoabcd/Llama-3.1-8B-bnb-4bit-qz
|
xiaoabcd
| 2025-08-20T22:00:51Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-20T21:59:30Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** xiaoabcd
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
manusiaperahu2012/blockassist-bc-roaring_long_tuna_1755725502
|
manusiaperahu2012
| 2025-08-20T22:00:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring long tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T22:00:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring long tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnonymousCS/xlmr_immigration_combo26_1
|
AnonymousCS
| 2025-08-20T22:00:15Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T21:57:43Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo26_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_immigration_combo26_1
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2140
- Accuracy: 0.9344
- 1-f1: 0.9006
- 1-recall: 0.8919
- 1-precision: 0.9094
- Balanced Acc: 0.9238
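
A minimal inference sketch for this checkpoint (repo id from this card; label semantics are an assumption, so check `model.config.id2label` in practice):

```python
# Hypothetical usage sketch; label names come from the model config at load time.
from transformers import pipeline

clf = pipeline("text-classification", model="AnonymousCS/xlmr_immigration_combo26_1")
print(clf("Parliament debated a new immigration bill today."))
```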
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.2591 | 1.0 | 25 | 0.1775 | 0.9409 | 0.9073 | 0.8687 | 0.9494 | 0.9228 |
| 0.2407 | 2.0 | 50 | 0.1862 | 0.9280 | 0.8862 | 0.8417 | 0.9356 | 0.9064 |
| 0.1988 | 3.0 | 75 | 0.2140 | 0.9344 | 0.9006 | 0.8919 | 0.9094 | 0.9238 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755725576
|
hakimjustbao
| 2025-08-20T21:59:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T21:59:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MajorJalud/blockassist-bc-fast_bristly_sardine_1755727015
|
MajorJalud
| 2025-08-20T21:59:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fast bristly sardine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T21:59:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fast bristly sardine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Datarus-R1-14B-preview-GGUF
|
mradermacher
| 2025-08-20T21:58:56Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:DatarusAI/Datarus-R1-14B-preview",
"base_model:quantized:DatarusAI/Datarus-R1-14B-preview",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-20T17:02:21Z |
---
base_model: DatarusAI/Datarus-R1-14B-preview
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/DatarusAI/Datarus-R1-14B-preview
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Datarus-R1-14B-preview-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Datarus-R1-14B-preview-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
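
As an illustration, a local-inference sketch with llama-cpp-python (a third-party binding, not part of this repo; the file name is taken from the quant table below):

```python
# Hypothetical sketch; download the chosen .gguf file first, then point model_path at it.
from llama_cpp import Llama

llm = Llama(model_path="Datarus-R1-14B-preview.Q4_K_M.gguf", n_ctx=4096)
out = llm("Briefly explain what quantization does to a language model.", max_tokens=128)
print(out["choices"][0]["text"])
```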
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Datarus-R1-14B-preview-GGUF/resolve/main/Datarus-R1-14B-preview.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Datarus-R1-14B-preview-GGUF/resolve/main/Datarus-R1-14B-preview.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Datarus-R1-14B-preview-GGUF/resolve/main/Datarus-R1-14B-preview.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Datarus-R1-14B-preview-GGUF/resolve/main/Datarus-R1-14B-preview.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Datarus-R1-14B-preview-GGUF/resolve/main/Datarus-R1-14B-preview.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Datarus-R1-14B-preview-GGUF/resolve/main/Datarus-R1-14B-preview.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Datarus-R1-14B-preview-GGUF/resolve/main/Datarus-R1-14B-preview.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Datarus-R1-14B-preview-GGUF/resolve/main/Datarus-R1-14B-preview.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Datarus-R1-14B-preview-GGUF/resolve/main/Datarus-R1-14B-preview.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Datarus-R1-14B-preview-GGUF/resolve/main/Datarus-R1-14B-preview.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Datarus-R1-14B-preview-GGUF/resolve/main/Datarus-R1-14B-preview.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
GlebaRR/Affine-5Gn2tDkkhTbPAAgzzSt7KhZHMdBEhGeS4tiWDAJ6utfsoFwr
|
GlebaRR
| 2025-08-20T21:58:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T21:56:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lilTAT/blockassist-bc-gentle_rugged_hare_1755727000
|
lilTAT
| 2025-08-20T21:57:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T21:57:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/windtunnel-face
|
Muapi
| 2025-08-20T21:57:07Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T21:56:14Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Windtunnel Face

**Base model**: Flux.1 D
**Trained words**: w1nd8l0wn photo
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1129300@1269460", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
plesniar/zangskari-ipa
|
plesniar
| 2025-08-20T21:55:46Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-18T01:47:13Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- generated_from_trainer
model-index:
- name: zangskari-ipa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zangskari-ipa
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on an unknown dataset.
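
A minimal transcription sketch (repo id from this card; the audio path is a placeholder, and IPA output is an assumption based on the model name):

```python
# Hypothetical usage sketch; supply a 16 kHz mono audio file.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="plesniar/zangskari-ipa")
print(asr("sample.wav")["text"])
```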
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 32
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Muapi/golgo-13-the-professional-1983-anime-film-style-f1d-xl
|
Muapi
| 2025-08-20T21:54:39Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T21:54:32Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Golgo 13: The Professional 1983 Anime Film Style F1D + XL

**Base model**: Flux.1 D
**Trained words**: cartoon
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:913385@1022252", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755725245
|
mang3dd
| 2025-08-20T21:53:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T21:53:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755725164
|
kojeklollipop
| 2025-08-20T21:52:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T21:52:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/retro-comic-style-betty-and-me
|
Muapi
| 2025-08-20T21:51:25Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T21:51:13Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Retro comic style (Betty and me)

**Base model**: Flux.1 D
**Trained words**: oodcomi
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:673335@753754", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/dystopian-vibes-flux
|
Muapi
| 2025-08-20T21:50:52Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T21:50:05Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Dystopian Vibes //Flux

**Base model**: Flux.1 D
**Trained words**: 0y5top1a8e
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:686779@768626", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
CharlieBoyer/gated2
|
CharlieBoyer
| 2025-08-20T21:50:03Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T21:41:40Z |
---
extra_gated_eu_disallowed: true
---
|
lautan/blockassist-bc-gentle_patterned_goat_1755725027
|
lautan
| 2025-08-20T21:49:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle patterned goat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T21:49:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle patterned goat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/photo-factory
|
Muapi
| 2025-08-20T21:49:18Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T21:48:40Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Photo Factory

**Base model**: Flux.1 D
**Trained words**: A cinematic photo.
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1719202@1945573", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
AnonymousCS/xlmr_immigration_combo25_4
|
AnonymousCS
| 2025-08-20T21:48:58Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T21:45:37Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo25_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_immigration_combo25_4
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1375
- Accuracy: 0.9589
- 1-f1: 0.9375
- 1-recall: 0.9266
- 1-precision: 0.9486
- Balanced Acc: 0.9508
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.1014 | 1.0 | 25 | 0.1260 | 0.9614 | 0.9405 | 0.9151 | 0.9673 | 0.9498 |
| 0.0921 | 2.0 | 50 | 0.1511 | 0.9524 | 0.9293 | 0.9382 | 0.9205 | 0.9489 |
| 0.0785 | 3.0 | 75 | 0.1375 | 0.9589 | 0.9375 | 0.9266 | 0.9486 | 0.9508 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
esi777/blockassist-bc-camouflaged_trotting_eel_1755726459
|
esi777
| 2025-08-20T21:48:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"camouflaged trotting eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T21:48:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- camouflaged trotting eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ehristoforu/testgemmaR1
|
ehristoforu
| 2025-08-20T21:48:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3n",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3n-E2B-it",
"base_model:finetune:unsloth/gemma-3n-E2B-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-20T21:11:53Z |
---
base_model: unsloth/gemma-3n-E2B-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3n
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** ehristoforu
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3n-E2B-it
This gemma3n model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Muapi/holographic-clothes
|
Muapi
| 2025-08-20T21:47:44Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T21:47:32Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Holographic Clothes

**Base model**: Flux.1 D
**Trained words**: Holographic Cloth, glowing fractal patterns in the cloth, holographic cloth as if it was drawn by a wire form CAD system showing accurate geometric contours to curved surfaces
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:687006@768881", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
matheoqtb/gemma-3-270m-infonce-only-2824-google-step-2000
|
matheoqtb
| 2025-08-20T21:47:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-08-20T21:47:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Muapi/yumemihoshino-planetarian
|
Muapi
| 2025-08-20T21:47:23Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T21:47:00Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Yumemi Hoshino - Planetarian

**Base model**: Flux.1 D
**Trained words**: Yumemi
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:26134@1201465", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/ob-chinese-style-scroll-painting
|
Muapi
| 2025-08-20T21:46:25Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T21:45:49Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# OB Chinese style scroll painting

**Base model**: Flux.1 D
**Trained words**: OBguofeng
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:735268@1132511", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
OpenVINO/Qwen2.5-Coder-1.5B-Instruct-int8-ov
|
OpenVINO
| 2025-08-20T21:44:35Z | 0 | 0 |
transformers
|
[
"transformers",
"openvino",
"qwen2",
"text-generation",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T21:43:39Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct/blob/main/LICENSE
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-1.5B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
base_model_relation: quantized
---
# Qwen2.5-Coder-1.5B-Instruct-int8-ov
* Model creator: [Qwen](https://huggingface.co/Qwen)
* Original model: [Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct)
## Description
This is the [Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2025/documentation/openvino-ir-format.html) (Intermediate Representation) format, with weights compressed to INT8 by [NNCF](https://github.com/openvinotoolkit/nncf).
## Quantization Parameters
Weight compression was performed using `nncf.compress_weights` with the following parameters:
* mode: **INT8_ASYM**
For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2025/openvino-workflow/model-optimization-guide/weight-compression.html).
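
For reference, the compression step described above corresponds roughly to the following sketch (file paths are placeholders; see the guide linked above for full options):

```python
# Illustrative sketch of INT8_ASYM weight compression with NNCF.
import nncf
import openvino as ov

core = ov.Core()
model = core.read_model("Qwen2.5-Coder-1.5B-Instruct.xml")
compressed = nncf.compress_weights(model, mode=nncf.CompressWeightsMode.INT8_ASYM)
ov.save_model(compressed, "Qwen2.5-Coder-1.5B-Instruct-int8.xml")
```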
## Compatibility
The provided OpenVINO™ IR model is compatible with:
* OpenVINO version 2025.2.0 and higher
* Optimum Intel 1.25.0 and higher
## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)
1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:
```sh
pip install optimum[openvino]
```
2. Run model inference:
```python
from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForCausalLM
model_id = "OpenVINO/Qwen2.5-Coder-1.5B-Instruct-int8-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)
inputs = tokenizer("write a quick sort algorithm.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
For more examples and possible optimizations, refer to the [Inference with Optimum Intel](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-optimum-intel.html) guide.
## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)
1. Install packages required for using OpenVINO GenAI.
```sh
pip install openvino-genai huggingface_hub
```
2. Download model from HuggingFace Hub
```python
import huggingface_hub as hf_hub
model_id = "OpenVINO/Qwen2.5-Coder-1.5B-Instruct-int8-ov"
model_path = "Qwen2.5-Coder-1.5B-Instruct-int8-ov"
hf_hub.snapshot_download(model_id, local_dir=model_path)
```
3. Run model inference:
```python
import openvino_genai as ov_genai
device = "CPU"
pipe = ov_genai.LLMPipeline(model_path, device)
pipe.get_tokenizer().set_chat_template(pipe.get_tokenizer().chat_template)
print(pipe.generate("write a quick sort algorithm.", max_length=200))
```
More GenAI usage examples can be found in the OpenVINO GenAI library [docs](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-genai.html) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples).
You can find more detailed usage examples in the OpenVINO Notebooks:
- [LLM](https://openvinotoolkit.github.io/openvino_notebooks/?search=LLM)
- [RAG text generation](https://openvinotoolkit.github.io/openvino_notebooks/?search=RAG+system&tasks=Text+Generation)
## Limitations
Check the original [model card](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct) for limitations.
## Legal information
The original model is distributed under [Apache License Version 2.0](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct/blob/main/LICENSE) license. More details can be found in [Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct).
## Disclaimer
Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel's Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel's products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755724561
|
katanyasekolah
| 2025-08-20T21:44:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T21:44:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/hdr-high-dynamic-range-style-xl-f1d
|
Muapi
| 2025-08-20T21:44:08Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T21:43:39Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# HDR "High Dynamic Range" style XL + F1D

**Base model**: Flux.1 D
**Trained words**: HDR style
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:610059@953001", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/realistic-photos-detailed-skin-textures-flux-v3
|
Muapi
| 2025-08-20T21:42:47Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T21:36:11Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Realistic Photos: Detailed Skin&Textures Flux V3

**Base model**: Flux.1 D
**Trained words**: dsv4
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1173967@1770362", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
AnonymousCS/xlmr_immigration_combo25_2
|
AnonymousCS
| 2025-08-20T21:41:59Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T21:38:09Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo25_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_immigration_combo25_2
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2419
- Accuracy: 0.9357
- 1-f1: 0.9031
- 1-recall: 0.8996
- 1-precision: 0.9066
- Balanced Acc: 0.9267
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.1452 | 1.0 | 25 | 0.2023 | 0.9319 | 0.8950 | 0.8726 | 0.9187 | 0.9170 |
| 0.1888 | 2.0 | 50 | 0.1938 | 0.9422 | 0.9091 | 0.8687 | 0.9534 | 0.9238 |
| 0.1098 | 3.0 | 75 | 0.2073 | 0.9332 | 0.8976 | 0.8803 | 0.9157 | 0.9199 |
| 0.0768 | 4.0 | 100 | 0.2419 | 0.9357 | 0.9031 | 0.8996 | 0.9066 | 0.9267 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
esi777/blockassist-bc-camouflaged_trotting_eel_1755725886
|
esi777
| 2025-08-20T21:38:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"camouflaged trotting eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T21:38:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- camouflaged trotting eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnonymousCS/xlmr_immigration_combo25_1
|
AnonymousCS
| 2025-08-20T21:38:04Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T21:34:17Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo25_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_immigration_combo25_1
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2641
- Accuracy: 0.9332
- 1-f1: 0.8917
- 1-recall: 0.8263
- 1-precision: 0.9683
- Balanced Acc: 0.9064
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.2655 | 1.0 | 25 | 0.2387 | 0.9319 | 0.8916 | 0.8417 | 0.9478 | 0.9093 |
| 0.1505 | 2.0 | 50 | 0.2264 | 0.9267 | 0.8844 | 0.8417 | 0.9316 | 0.9054 |
| 0.1509 | 3.0 | 75 | 0.2576 | 0.9242 | 0.8778 | 0.8185 | 0.9464 | 0.8977 |
| 0.1272 | 4.0 | 100 | 0.2641 | 0.9332 | 0.8917 | 0.8263 | 0.9683 | 0.9064 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Jaehun/Qwen2.5-VL-7B-lpt2-sft
|
Jaehun
| 2025-08-20T21:35:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-20T19:36:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MajorJalud/blockassist-bc-fast_bristly_sardine_1755725559
|
MajorJalud
| 2025-08-20T21:35:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fast bristly sardine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T21:35:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fast bristly sardine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rbelanec/train_openbookqa_1755694507
|
rbelanec
| 2025-08-20T21:34:50Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"prefix-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-08-20T21:00:56Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prefix-tuning
- generated_from_trainer
model-index:
- name: train_openbookqa_1755694507
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_openbookqa_1755694507
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the openbookqa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7263
- Num Input Tokens Seen: 3935016
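
A minimal loading sketch for this adapter (repo ids from this card; prompt and generation settings are placeholders):

```python
# Hypothetical usage sketch; the base model requires access to the meta-llama weights.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
model = PeftModel.from_pretrained(base, "rbelanec/train_openbookqa_1755694507")
tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

inputs = tok("Question: What gas do plants absorb for photosynthesis?\nAnswer:", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=16)
print(tok.decode(out[0], skip_special_tokens=True))
```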
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 123
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:-----:|:---------------:|:-----------------:|
| 0.7046 | 0.5002 | 1116 | 0.7015 | 196496 |
| 0.7045 | 1.0004 | 2232 | 0.7008 | 393464 |
| 0.6817 | 1.5007 | 3348 | 0.7119 | 589992 |
| 0.6974 | 2.0009 | 4464 | 0.6949 | 787056 |
| 0.6984 | 2.5011 | 5580 | 0.6978 | 984096 |
| 0.6421 | 3.0013 | 6696 | 0.7007 | 1180920 |
| 0.6968 | 3.5016 | 7812 | 0.6950 | 1378312 |
| 0.6728 | 4.0018 | 8928 | 0.6948 | 1574976 |
| 0.6908 | 4.5020 | 10044 | 0.9289 | 1772096 |
| 0.6442 | 5.0022 | 11160 | 0.6616 | 1969288 |
| 0.5868 | 5.5025 | 12276 | 0.6543 | 2165240 |
| 0.6737 | 6.0027 | 13392 | 0.5839 | 2362584 |
| 0.4501 | 6.5029 | 14508 | 0.5840 | 2558168 |
| 0.5469 | 7.0031 | 15624 | 0.5781 | 2756072 |
| 0.5315 | 7.5034 | 16740 | 0.6050 | 2952520 |
| 0.4052 | 8.0036 | 17856 | 0.5918 | 3149560 |
| 0.9231 | 8.5038 | 18972 | 0.6392 | 3347080 |
| 0.1328 | 9.0040 | 20088 | 0.6744 | 3543488 |
| 0.7252 | 9.5043 | 21204 | 0.7036 | 3741120 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
thanobidex/blockassist-bc-colorful_shiny_hare_1755724129
|
thanobidex
| 2025-08-20T21:34:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful shiny hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T21:34:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnonymousCS/xlmr_immigration_combo25_0
|
AnonymousCS
| 2025-08-20T21:34:12Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T21:29:19Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo25_0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_immigration_combo25_0
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2134
- Accuracy: 0.9203
- 1-f1: 0.8794
- 1-recall: 0.8726
- 1-precision: 0.8863
- Balanced Acc: 0.9084
## Model description
More information needed
## Intended uses & limitations
More information needed
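Pending a fuller write-up, a minimal usage sketch (illustrative; the id2label mapping is not documented here, so the returned label names are whatever the checkpoint ships with):
```python
# Minimal usage sketch (illustrative; label names come from the checkpoint's
# own id2label mapping, which is not documented in this card).
from transformers import pipeline

clf = pipeline("text-classification", model="AnonymousCS/xlmr_immigration_combo25_0")
print(clf("New immigration rules were announced today."))
```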
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.6354 | 1.0 | 25 | 0.6178 | 0.6671 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.3893 | 2.0 | 50 | 0.3380 | 0.8933 | 0.8223 | 0.7413 | 0.9231 | 0.8552 |
| 0.226 | 3.0 | 75 | 0.2010 | 0.9332 | 0.8917 | 0.8263 | 0.9683 | 0.9064 |
| 0.2149 | 4.0 | 100 | 0.2239 | 0.9113 | 0.8701 | 0.8919 | 0.8493 | 0.9064 |
| 0.165 | 5.0 | 125 | 0.2134 | 0.9203 | 0.8794 | 0.8726 | 0.8863 | 0.9084 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Muapi/bottlingsunshine-style
|
Muapi
| 2025-08-20T21:32:55Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T21:32:46Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Bottlingsunshine Style

**Base model**: Flux.1 D
**Trained words**:
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:833658@932721", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755724000
|
helmutsukocok
| 2025-08-20T21:32:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T21:32:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/cortana-xl-sd-1.5-f1d
|
Muapi
| 2025-08-20T21:31:51Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T21:31:40Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Cortana XL + SD 1.5 + F1D

**Base model**: Flux.1 D
**Trained words**: Cortana
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:200282@1224172", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
mradermacher/git-commit-message-splitter-Qwen3-8B-i1-GGUF
|
mradermacher
| 2025-08-20T21:31:11Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"en",
"base_model:Tavernari/git-commit-message-splitter-Qwen3-8B",
"base_model:quantized:Tavernari/git-commit-message-splitter-Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-20T20:34:02Z |
---
base_model: Tavernari/git-commit-message-splitter-Qwen3-8B
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/Tavernari/git-commit-message-splitter-Qwen3-8B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#git-commit-message-splitter-Qwen3-8B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
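One possible route is llama-cpp-python; the sketch below is an assumption on our part rather than an officially supported path, and it presumes the i1-Q4_K_M file from the table below has already been downloaded:
```python
# Minimal sketch with llama-cpp-python (pip install llama-cpp-python).
# Assumes the i1-Q4_K_M quant sits in the working directory.
from llama_cpp import Llama

llm = Llama(model_path="git-commit-message-splitter-Qwen3-8B.i1-Q4_K_M.gguf")
out = llm("Split this diff into commit messages:", max_tokens=128)
print(out["choices"][0]["text"])
```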
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-i1-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-i1-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-i1-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-i1-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-i1-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-i1-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-i1-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-i1-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-i1-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-i1-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-i1-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-i1-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-i1-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-i1-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-i1-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-i1-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-i1-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-i1-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-i1-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-i1-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-i1-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-i1-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.i1-Q4_1.gguf) | i1-Q4_1 | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-i1-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-i1-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-i1-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
roeker/blockassist-bc-quick_wiry_owl_1755725422
|
roeker
| 2025-08-20T21:31:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T21:31:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/smoke-flux-sdxl-by-dever
|
Muapi
| 2025-08-20T21:31:06Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T21:30:53Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Smoke [Flux / SDXL] by Dever

**Base model**: Flux.1 D
**Trained words**: dvr-smoke-flux
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:309005@1096448", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755723735
|
hakimjustbao
| 2025-08-20T21:27:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T21:27:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/git-commit-message-splitter-Qwen3-8B-GGUF
|
mradermacher
| 2025-08-20T21:27:14Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"en",
"base_model:Tavernari/git-commit-message-splitter-Qwen3-8B",
"base_model:quantized:Tavernari/git-commit-message-splitter-Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-20T17:05:13Z |
---
base_model: Tavernari/git-commit-message-splitter-Qwen3-8B
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Tavernari/git-commit-message-splitter-Qwen3-8B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#git-commit-message-splitter-Qwen3-8B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
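To fetch a single quant programmatically, a minimal sketch with huggingface_hub (the file name matches the Q4_K_M entry in the table below):
```python
# Minimal download sketch; pass the resulting path to your GGUF runtime.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/git-commit-message-splitter-Qwen3-8B-GGUF",
    filename="git-commit-message-splitter-Qwen3-8B.Q4_K_M.gguf",
)
print(path)
```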
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.Q2_K.gguf) | Q2_K | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.Q3_K_L.gguf) | Q3_K_L | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.Q5_K_M.gguf) | Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/git-commit-message-splitter-Qwen3-8B-GGUF/resolve/main/git-commit-message-splitter-Qwen3-8B.f16.gguf) | f16 | 16.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
VoilaRaj/81_b_zr2R1Z
|
VoilaRaj
| 2025-08-20T21:26:05Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-20T21:22:04Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
OpenVINO/Qwen2.5-Coder-0.5B-Instruct-int4-ov
|
OpenVINO
| 2025-08-20T21:25:42Z | 0 | 0 |
transformers
|
[
"transformers",
"openvino",
"qwen2",
"text-generation",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-Coder-0.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-0.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T21:25:26Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B-Instruct/blob/main/LICENSE
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-0.5B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
base_model_relation: quantized
---
# Qwen2.5-Coder-0.5B-Instruct-int4-ov
* Model creator: [Qwen](https://huggingface.co/Qwen)
* Original model: [Qwen2.5-Coder-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B-Instruct)
## Description
This is the [Qwen2.5-Coder-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B-Instruct) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2025/documentation/openvino-ir-format.html) (Intermediate Representation) format with weights compressed to INT4 by [NNCF](https://github.com/openvinotoolkit/nncf).
## Quantization Parameters
Weight compression was performed using `nncf.compress_weights` with the following parameters:
* mode: **INT4_ASYM**
For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2025/openvino-workflow/model-optimization-guide/weight-compression.html).
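For reference, the compression step corresponds roughly to the sketch below (an approximation, not the exact export script used for this repository; the model path is hypothetical):
```python
# Rough sketch of the weight-compression step (approximate; the actual export
# pipeline for this repo may differ).
import nncf
import openvino as ov

core = ov.Core()
ov_model = core.read_model("openvino_model.xml")  # hypothetical local IR path
compressed = nncf.compress_weights(ov_model, mode=nncf.CompressWeightsMode.INT4_ASYM)
ov.save_model(compressed, "openvino_model_int4.xml")
```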
## Compatibility
The provided OpenVINO™ IR model is compatible with:
* OpenVINO version 2025.2.0 and higher
* Optimum Intel 1.25.0 and higher
## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)
1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:
```
pip install optimum[openvino]
```
2. Run model inference:
```
from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForCausalLM
model_id = "OpenVINO/Qwen2.5-Coder-0.5B-Instruct-int4-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)
inputs = tokenizer("write a quick sort algorithm.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
For more examples and possible optimizations, refer to the [Inference with Optimum Intel](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-optimum-intel.html) guide.
## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)
1. Install packages required for using OpenVINO GenAI.
```
pip install openvino-genai huggingface_hub
```
2. Download the model from the Hugging Face Hub
```
import huggingface_hub as hf_hub
model_id = "OpenVINO/Qwen2.5-Coder-0.5B-Instruct-int4-ov"
model_path = "Qwen2.5-Coder-0.5B-Instruct-int4-ov"
hf_hub.snapshot_download(model_id, local_dir=model_path)
```
3. Run model inference:
```
import openvino_genai as ov_genai
device = "CPU"
pipe = ov_genai.LLMPipeline(model_path, device)
pipe.get_tokenizer().set_chat_template(pipe.get_tokenizer().chat_template)
print(pipe.generate("write a quick sort algorithm.", max_length=200))
```
More GenAI usage examples can be found in the OpenVINO GenAI library [docs](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-genai.html) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples).
You can find more detailed usage examples in the OpenVINO Notebooks:
- [LLM](https://openvinotoolkit.github.io/openvino_notebooks/?search=LLM)
- [RAG text generation](https://openvinotoolkit.github.io/openvino_notebooks/?search=RAG+system&tasks=Text+Generation)
## Limitations
Check the original [model card](https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B-Instruct) for limitations.
## Legal information
The original model is distributed under [Apache License Version 2.0](https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B-Instruct/blob/main/LICENSE) license. More details can be found in [Qwen2.5-Coder-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B-Instruct).
## Disclaimer
Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel's Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel's products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
|
Muapi/randommaxx-anime-cyberpunk
|
Muapi
| 2025-08-20T21:25:32Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T21:25:06Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# RandomMaxx Anime Cyberpunk

**Base model**: Flux.1 D
**Trained words**: anime, cyberpunk
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1414561@1598792", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
esi777/blockassist-bc-camouflaged_trotting_eel_1755725031
|
esi777
| 2025-08-20T21:24:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"camouflaged trotting eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T21:24:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- camouflaged trotting eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/afrofuturism-style-by-dever-flux-sdxl
|
Muapi
| 2025-08-20T21:20:32Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T21:20:19Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# AfroFuturism Style by Dever [Flux / SDXL]

**Base model**: Flux.1 D
**Trained words**: afrofuturism
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:312620@843855", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
esi777/blockassist-bc-camouflaged_trotting_eel_1755724769
|
esi777
| 2025-08-20T21:20:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"camouflaged trotting eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T21:20:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- camouflaged trotting eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/underlighting-light-from-below-style-xl-f1d
|
Muapi
| 2025-08-20T21:19:08Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T21:18:51Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Underlighting (light from below) style XL + F1D

**Base model**: Flux.1 D
**Trained words**: light from below style, light from below, Underlighting
## 🔧 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:542366@1381951", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Leoar/blockassist-bc-pudgy_toothy_cheetah_1755724538
|
Leoar
| 2025-08-20T21:17:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pudgy toothy cheetah",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T21:17:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pudgy toothy cheetah
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VoilaRaj/81_b_wTXq1S
|
VoilaRaj
| 2025-08-20T21:17:19Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-20T21:13:27Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|