| modelId (string, len 5–139) | author (string, len 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-12 18:33:19) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 555 classes) | tags (list, len 1–4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-12 18:33:14) | card (string, len 11–1.01M) |
|---|---|---|---|---|---|---|---|---|---|
Ver-full-videos-shirley-arica-Clips/Ver.Viral.video.shirley.arica.polemica.viral.en.twitter.y.telegram
|
Ver-full-videos-shirley-arica-Clips
| 2025-08-19T17:06:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T17:06:14Z |
[](https://tinyurl.com/bdk3zxvb)
|
AnonymousCS/xlmr_all_immigration3
|
AnonymousCS
| 2025-08-19T17:05:54Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T16:59:34Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_all_immigration3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_all_immigration3
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2604
- Accuracy: 0.9200
- 1-f1: 0.8792
- 1-recall: 0.8728
- 1-precision: 0.8856
- Balanced Acc: 0.9082
## Model description
More information needed
## Intended uses & limitations
More information needed
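For reference, a minimal inference sketch (assuming the checkpoint loads with the standard 🤗 `text-classification` pipeline; the label names depend on the training setup and are not documented here):
```python
from transformers import pipeline

# Hedged sketch: assumes the standard sequence-classification head saved with
# this checkpoint; label ids/names are not documented in this card.
classifier = pipeline("text-classification", model="AnonymousCS/xlmr_all_immigration3")
print(classifier("Immigration policy should be reformed."))
```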
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: ADAMW_TORCH_FUSED with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.617 | 1.0 | 33 | 0.6041 | 0.6663 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.4335 | 2.0 | 66 | 0.2853 | 0.9026 | 0.8477 | 0.8121 | 0.8864 | 0.8800 |
| 0.3011 | 3.0 | 99 | 0.2753 | 0.9007 | 0.8314 | 0.7341 | 0.9585 | 0.8591 |
| 0.2724 | 4.0 | 132 | 0.2583 | 0.9065 | 0.8428 | 0.7514 | 0.9594 | 0.8678 |
| 0.1475 | 5.0 | 165 | 0.2445 | 0.9238 | 0.8805 | 0.8410 | 0.9238 | 0.9032 |
| 0.104 | 6.0 | 198 | 0.2567 | 0.9161 | 0.8672 | 0.8208 | 0.9191 | 0.8923 |
| 0.1543 | 7.0 | 231 | 0.2604 | 0.9200 | 0.8792 | 0.8728 | 0.8856 | 0.9082 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
EZCon/Qwen2.5-VL-3B-Instruct-4bit-mlx
|
EZCon
| 2025-08-19T17:03:56Z | 57 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"multimodal",
"unsloth",
"mlx",
"image-text-to-text",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-VL-3B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] |
image-text-to-text
| 2025-04-18T03:43:44Z |
---
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- unsloth
- mlx
library_name: transformers
---
# EZCon/Qwen2.5-VL-3B-Instruct-4bit-mlx
This model was converted to MLX format from [`unsloth/Qwen2.5-VL-3B-Instruct`](https://huggingface.co/unsloth/Qwen2.5-VL-3B-Instruct) using mlx-vlm version **0.3.2**.
Refer to the [original model card](https://huggingface.co/unsloth/Qwen2.5-VL-3B-Instruct) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model EZCon/Qwen2.5-VL-3B-Instruct-4bit-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
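The mlx-vlm Python API can be used as well. A rough sketch following the mlx-vlm README (exact signatures may differ between mlx-vlm versions; the image path is a placeholder):
```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Sketch based on the mlx-vlm README; exact signatures may vary by version.
model_path = "EZCon/Qwen2.5-VL-3B-Instruct-4bit-mlx"
model, processor = load(model_path)
config = load_config(model_path)

# Format the prompt with the model's chat template (one image expected).
prompt = apply_chat_template(processor, config, "Describe this image.", num_images=1)
output = generate(model, processor, prompt, ["path/to/image.jpg"], max_tokens=100)
print(output)
```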
|
Mostefa-Terbeche/diabetic-retinopathy-eyepacs-resnet50-original-20250621-170251
|
Mostefa-Terbeche
| 2025-08-19T17:03:49Z | 0 | 0 | null |
[
"diabetic-retinopathy",
"medical-imaging",
"pytorch",
"computer-vision",
"retinal-imaging",
"dataset:eyepacs",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2025-08-19T16:13:35Z |
---
license: apache-2.0
tags:
- diabetic-retinopathy
- medical-imaging
- pytorch
- computer-vision
- retinal-imaging
datasets:
- eyepacs
metrics:
- accuracy
- quadratic-kappa
- auc
model-index:
- name: eyepacs_resnet50_original
results:
- task:
type: image-classification
name: Diabetic Retinopathy Classification
dataset:
type: eyepacs
name: EYEPACS
metrics:
- type: accuracy
value: 0.1739254198690578
- type: quadratic-kappa
value: 0.42562284681974993
---
# Diabetic Retinopathy Classification Model
## Model Description
This model is trained for diabetic retinopathy classification using the resnet50 architecture on the eyepacs dataset with original preprocessing.
## Model Details
- **Architecture**: resnet50
- **Dataset**: eyepacs
- **Preprocessing**: original
- **Training Date**: 20250621-170251
- **Task**: 5-class diabetic retinopathy grading (0-4)
- **Directory**: eyepacs_resnet50_20250621-170251_new
## Performance
- **Test Accuracy**: 0.1739254198690578
- **Test Quadratic Kappa**: 0.42562284681974993
- **Validation Kappa**: 0.42562284681974993
## Usage
```python
import torch
from huggingface_hub import hf_hub_download
# Download model
model_path = hf_hub_download(
    repo_id="Mostefa-Terbeche/diabetic-retinopathy-eyepacs-resnet50-original-20250621-170251",
    filename="model_best.pt"
)
# Load model
model = torch.load(model_path, map_location='cpu')
```
## Classes
- 0: No DR (No diabetic retinopathy)
- 1: Mild DR (Mild non-proliferative diabetic retinopathy)
- 2: Moderate DR (Moderate non-proliferative diabetic retinopathy)
- 3: Severe DR (Severe non-proliferative diabetic retinopathy)
- 4: Proliferative DR (Proliferative diabetic retinopathy)
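For illustration, a minimal end-to-end sketch, assuming the checkpoint stores a full `torch.nn.Module` and that inputs are 224×224 RGB fundus images with ImageNet normalization (the actual training transforms are not documented in this card):
```python
import torch
from PIL import Image
from torchvision import transforms

# Assumed preprocessing; the card does not document the training transforms.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# weights_only=False is needed on PyTorch >= 2.6 for full-module checkpoints.
model = torch.load("model_best.pt", map_location="cpu", weights_only=False)
model.eval()

x = preprocess(Image.open("fundus.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    grade = model(x).argmax(dim=1).item()  # 0-4 DR grade as listed above
print(f"Predicted DR grade: {grade}")
```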
## Citation
If you use this model, please cite the associated research paper or thesis.
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1755622837
|
Vasya777
| 2025-08-19T17:01:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:01:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
EZCon/Qwen2-VL-2B-Instruct-abliterated-8bit-mlx
|
EZCon
| 2025-08-19T16:58:14Z | 47 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_vl",
"image-to-text",
"chat",
"abliterated",
"uncensored",
"mlx",
"image-text-to-text",
"conversational",
"en",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:quantized:Qwen/Qwen2-VL-2B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] |
image-text-to-text
| 2025-08-06T03:44:27Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated/blob/main/LICENSE
language:
- en
pipeline_tag: image-text-to-text
base_model: Qwen/Qwen2-VL-2B-Instruct
tags:
- chat
- abliterated
- uncensored
- mlx
---
# EZCon/Qwen2-VL-2B-Instruct-abliterated-8bit-mlx
This model was converted to MLX format from [`huihui-ai/Qwen2-VL-2B-Instruct-abliterated`](https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated) using mlx-vlm version **0.3.2**.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model EZCon/Qwen2-VL-2B-Instruct-abliterated-8bit-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
aleebaster/blockassist-bc-sly_eager_boar_1755621210
|
aleebaster
| 2025-08-19T16:57:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly eager boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T16:57:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
EZCon/Qwen2-VL-2B-Instruct-abliterated-4bit-mlx
|
EZCon
| 2025-08-19T16:57:56Z | 27 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_vl",
"image-to-text",
"chat",
"abliterated",
"uncensored",
"mlx",
"image-text-to-text",
"conversational",
"en",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:quantized:Qwen/Qwen2-VL-2B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] |
image-text-to-text
| 2025-08-06T03:35:24Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated/blob/main/LICENSE
language:
- en
pipeline_tag: image-text-to-text
base_model: Qwen/Qwen2-VL-2B-Instruct
tags:
- chat
- abliterated
- uncensored
- mlx
---
# EZCon/Qwen2-VL-2B-Instruct-abliterated-4bit-mlx
This model was converted to MLX format from [`huihui-ai/Qwen2-VL-2B-Instruct-abliterated`](https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated) using mlx-vlm version **0.3.2**.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model EZCon/Qwen2-VL-2B-Instruct-abliterated-4bit-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755621027
|
indoempatnol
| 2025-08-19T16:57:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T16:57:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VIDEOS-19-Dr-Eman-viral-video-Clip/New.full.videos.Dr.Eman.Viral.Video.Official.Tutorial
|
VIDEOS-19-Dr-Eman-viral-video-Clip
| 2025-08-19T16:56:45Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T16:56:35Z |
[](https://tinyurl.com/bdk3zxvb)
|
EZCon/LFM2-VL-1.6B-8bit-mlx
|
EZCon
| 2025-08-19T16:56:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"lfm2-vl",
"image-text-to-text",
"liquid",
"lfm2",
"edge",
"mlx",
"conversational",
"custom_code",
"en",
"license:other",
"8-bit",
"region:us"
] |
image-text-to-text
| 2025-08-17T16:15:12Z |
---
library_name: transformers
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
pipeline_tag: image-text-to-text
tags:
- liquid
- lfm2
- lfm2-vl
- edge
- mlx
---
# EZCon/LFM2-VL-1.6B-8bit-mlx
This model was converted to MLX format from [`LiquidAI/LFM2-VL-1.6B`](https://huggingface.co/LiquidAI/LFM2-VL-1.6B) using mlx-vlm version **0.3.2**.
Refer to the [original model card](https://huggingface.co/LiquidAI/LFM2-VL-1.6B) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model EZCon/LFM2-VL-1.6B-8bit-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755620903
|
hakimjustbao
| 2025-08-19T16:55:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T16:54:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
espnet/lid_voxlingua107_mms_ecapa
|
espnet
| 2025-08-19T16:55:00Z | 6 | 0 |
espnet
|
[
"espnet",
"tensorboard",
"audio",
"language-identification",
"abk",
"afr",
"amh",
"ara",
"asm",
"aze",
"bak",
"bel",
"ben",
"bod",
"bos",
"bre",
"bul",
"cat",
"ceb",
"ces",
"cmn",
"cym",
"dan",
"deu",
"ell",
"eng",
"epo",
"est",
"eus",
"fao",
"fas",
"fin",
"fra",
"glg",
"glv",
"grn",
"guj",
"hat",
"hau",
"haw",
"heb",
"hin",
"hrv",
"hun",
"hye",
"ina",
"ind",
"isl",
"ita",
"jav",
"jpn",
"kan",
"kat",
"kaz",
"khm",
"kor",
"lao",
"lat",
"lav",
"lin",
"lit",
"ltz",
"mal",
"mar",
"mkd",
"mlg",
"mlt",
"mon",
"mri",
"msa",
"mya",
"nep",
"nld",
"nno",
"nor",
"oci",
"pan",
"pol",
"por",
"pus",
"ron",
"rus",
"san",
"sco",
"sin",
"slk",
"slv",
"sna",
"snd",
"som",
"spa",
"sqi",
"srp",
"sun",
"swa",
"swe",
"tam",
"tat",
"tel",
"tgk",
"tgl",
"tha",
"tuk",
"tur",
"ukr",
"urd",
"uzb",
"vie",
"war",
"yid",
"yor",
"dataset:VoxLingua107",
"arxiv:2005.07143",
"license:cc-by-4.0",
"region:us"
] | null | 2025-06-26T04:13:36Z |
---
tags:
- espnet
- audio
- language-identification
language:
- abk
- afr
- amh
- ara
- asm
- aze
- bak
- bel
- ben
- bod
- bos
- bre
- bul
- cat
- ceb
- ces
- cmn
- cym
- dan
- deu
- ell
- eng
- epo
- est
- eus
- fao
- fas
- fin
- fra
- glg
- glv
- grn
- guj
- hat
- hau
- haw
- heb
- hin
- hrv
- hun
- hye
- ina
- ind
- isl
- ita
- jav
- jpn
- kan
- kat
- kaz
- khm
- kor
- lao
- lat
- lav
- lin
- lit
- ltz
- mal
- mar
- mkd
- mlg
- mlt
- mon
- mri
- msa
- mya
- nep
- nld
- nno
- nor
- oci
- pan
- pol
- por
- pus
- ron
- rus
- san
- sco
- sin
- slk
- slv
- sna
- snd
- som
- spa
- sqi
- srp
- sun
- swa
- swe
- tam
- tat
- tel
- tgk
- tgl
- tha
- tuk
- tur
- ukr
- urd
- uzb
- vie
- war
- yid
- yor
datasets:
- VoxLingua107
license: cc-by-4.0
---
## ESPnet2 Spoken Language Identification (LID) model
### `espnet/lid_voxlingua107_mms_ecapa`
This language identification model was trained using a recipe from the [ESPnet](https://github.com/espnet/espnet/) toolkit. It leverages the pretrained [MMS-1B](https://huggingface.co/facebook/mms-1b) as the encoder and [ECAPA-TDNN](https://arxiv.org/pdf/2005.07143) as the embedding extractor for robust spoken language identification.
The model is trained on the [VoxLingua107](https://cs.taltech.ee/staff/tanel.alumae/data/voxlingua107/) dataset, which comprises over 6,600 hours of speech spanning 107 languages. Speech segments are sourced from YouTube videos and annotated using metadata.
This repository provides comprehensive training logs, detailed inference results, and model checkpoints for reproducibility and further research.
### Usage Guide: How to use in ESPnet2
#### Prerequisites
First, ensure you have ESPnet installed. If not, follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html).
#### Quick Start
Run the following commands to set up and use the pre-trained model:
```bash
cd espnet
pip install -e .
cd egs2/voxlingua107/lid1
# Download the exp_voxlingua107_raw to egs2/voxlingua107/lid1
hf download espnet/lid_voxlingua107_mms_ecapa --local-dir . --exclude "README.md" "meta.yaml" ".gitattributes"
./run.sh --skip_data_prep false --skip_train true
```
This will download the pre-trained model and run inference using the VoxLingua107 test data.
### Train and Evaluation Datasets
The model is evaluated on multiple language identification benchmarks with diverse characteristics:
| Dataset | Domain | #Langs. Train/Test | Dialect | Training Setup (VL107-only) |
| ------------- | ----------- | ------------------ | ------- | --------------------------- |
| [VoxLingua107](https://cs.taltech.ee/staff/tanel.alumae/data/voxlingua107/) | YouTube | 107/33 | No | Seen |
| [Babel](https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=31a13cefb42647e924e0d2778d341decc44c40e9) | Telephone | 25/25 | No | Unseen |
| [FLEURS](https://huggingface.co/datasets/google/xtreme_s) | Read speech | 102/102 | No | Unseen |
| [ML-SUPERB 2.0](https://huggingface.co/datasets/espnet/ml_superb_hf) | Mixed | 137/(137, 8) | Yes | Unseen |
| [VoxPopuli](https://huggingface.co/datasets/facebook/voxpopuli) | Parliament | 16/16 | No | Unseen |
### Results
**Accuracy (%) on In-domain and Out-of-domain Test Sets**
<style>
.hf-model-cell {
max-width: 120px;
overflow-x: auto;
white-space: nowrap;
scrollbar-width: thin;
scrollbar-color: #888 #f1f1f1;
}
.config-cell {
max-width: 100px;
overflow-x: auto;
white-space: nowrap;
scrollbar-width: thin;
scrollbar-color: #888 #f1f1f1;
}
.hf-model-cell::-webkit-scrollbar,
.config-cell::-webkit-scrollbar {
height: 6px;
}
.hf-model-cell::-webkit-scrollbar-track,
.config-cell::-webkit-scrollbar-track {
background: #f1f1f1;
border-radius: 3px;
}
.hf-model-cell::-webkit-scrollbar-thumb,
.config-cell::-webkit-scrollbar-thumb {
background: #888;
border-radius: 3px;
}
.hf-model-cell::-webkit-scrollbar-thumb:hover,
.config-cell::-webkit-scrollbar-thumb:hover {
background: #555;
}
</style>
<div style="overflow-x: auto;">
| ESPnet Recipe | Config | VoxLingua107 | Babel | FLEURS | ML-SUPERB2.0 Dev | ML-SUPERB2.0 Dialect | VoxPopuli | Macro Avg. |
| ------------------------- | ----------- | ------------ | ----- | ------ | ---------------- | -------------------- | --------- | ---------- |
| <div class="hf-model-cell">[egs2/voxlingua107/lid1](https://github.com/espnet/espnet/tree/master/egs2/voxlingua107/lid1)</div> | <div class="config-cell">`conf/mms_ecapa_baseline`</div> | 94.2 | 86.7 | 95.8 | 89.0 | 73.4 | 85.6 | 87.5 |
</div>
For more detailed inference results, please refer to the `exp_voxlingua107_raw/lid_mms_ecapa_baseline_raw/inference` directory in this repository.
> **Note (2025-08-18):**
> The corresponding GitHub repository has not yet been merged into the ESPnet master branch.
> See [PR #6174](https://github.com/espnet/espnet/pull/6174) for the latest updates.
## LID config
<details><summary>expand</summary>
```yaml
config: /work/nvme/bbjs/qwang20/espnet/egs2/lid_delta/lid1/conf/mms_1b_ecapa/mms_ecapa_bs3min_baseline.yaml
print_config: false
log_level: INFO
drop_last_iter: false
dry_run: false
iterator_type: category
valid_iterator_type: category
output_dir: exp_voxlingua107_raw/lid_mms_ecapa_bs3min_baseline_delta_raw
ngpu: 1
seed: 3702
num_workers: 8
num_att_plot: 0
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: true
sharded_ddp: false
use_deepspeed: false
deepspeed_config: null
gradient_as_bucket_view: true
ddp_comm_hook: null
cudnn_enabled: true
cudnn_benchmark: true
cudnn_deterministic: false
use_tf32: false
collect_stats: false
write_collected_feats: false
max_epoch: 30
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- accuracy
- max
keep_nbest_models: 2
nbest_averaging_interval: 0
grad_clip: 9999
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: 100
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: true
wandb_project: lid
wandb_id: null
wandb_entity: qingzhew-carnegie-mellon-university
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
use_adapter: false
adapter: lora
save_strategy: all
adapter_conf: {}
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 1000
batch_size: 20
valid_batch_size: null
batch_bins: 2880000
valid_batch_bins: null
category_sample_size: 10
train_shape_file:
- exp_voxlingua107_raw/lid_stats_16k/train/speech_shape
valid_shape_file:
- exp_voxlingua107_raw/lid_stats_16k/valid/speech_shape
batch_type: catpow
upsampling_factor: 0.5
language_upsampling_factor: 0.5
dataset_upsampling_factor: 0.5
dataset_scaling_factor: 1.2
max_batch_size: 16
valid_batch_type: null
fold_length:
- 120000
sort_in_batch: descending
shuffle_within_batch: false
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes: []
chunk_default_fs: null
chunk_max_abs_length: null
chunk_discard_short_samples: true
train_data_path_and_name_and_type:
- - dump/raw/train_voxlingua107/wav.scp
- speech
- sound
- - dump/raw/train_voxlingua107/utt2lang
- lid_labels
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev_voxlingua107/wav.scp
- speech
- sound
- - dump/raw/dev_voxlingua107/utt2lang
- lid_labels
- text
multi_task_dataset: false
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
allow_multi_rates: false
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 5.0e-06
betas:
- 0.9
- 0.98
scheduler: tristagelr
scheduler_conf:
max_steps: 30000
warmup_ratio: 0.3
hold_ratio: 0.2
decay_ratio: 0.5
init_lr_scale: 0.6
final_lr_scale: 0.1
init: null
use_preprocessor: true
input_size: null
target_duration: 3.0
lang2utt: dump/raw/train_voxlingua107/lang2utt
lang_num: 107
sample_rate: 16000
num_eval: 10
rir_scp: ''
model: espnet
model_conf:
extract_feats_in_collect_stats: false
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: hf_wav2vec2_custom
path_or_url: facebook/mms-1b
download_dir: ./hub
multilayer_feature: true
specaug: null
specaug_conf: {}
normalize: utterance_mvn
normalize_conf:
norm_vars: false
encoder: ecapa_tdnn
encoder_conf:
model_scale: 8
ndim: 512
output_size: 1536
pooling: chn_attn_stat
pooling_conf: {}
projector: rawnet3
projector_conf:
output_size: 192
encoder_condition: rawnet3
encoder_condition_conf: {}
pooling_condition: chn_attn_stat
pooling_condition_conf: {}
projector_condition: rawnet3
projector_condition_conf: {}
preprocessor: lid
preprocessor_conf:
fix_duration: false
sample_rate: 16000
noise_apply_prob: 0.0
noise_info:
- - 1.0
- dump/raw/musan_speech.scp
- - 4
- 7
- - 13
- 20
- - 1.0
- dump/raw/musan_noise.scp
- - 1
- 1
- - 0
- 15
- - 1.0
- dump/raw/musan_music.scp
- - 1
- 1
- - 5
- 15
rir_apply_prob: 0.0
rir_scp: dump/raw/rirs.scp
loss: aamsoftmax_sc_topk
loss_conf:
margin: 0.5
scale: 30
K: 3
mp: 0.06
k_top: 5
required:
- output_dir
version: '202412'
distributed: false
```
</details>
### Citation
```BibTex
@inproceedings{wang2025geolid,
author={Qingzheng Wang and Hye-jin Shim and Jiancheng Sun and Shinji Watanabe},
title={Geolocation-Aware Robust Spoken Language Identification},
year={2025},
booktitle={Proceedings of ASRU},
}
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
|
EZCon/SmolVLM2-500M-Video-Instruct-mlx
|
EZCon
| 2025-08-19T16:54:59Z | 71 | 0 |
transformers
|
[
"transformers",
"safetensors",
"smolvlm",
"image-text-to-text",
"mlx",
"conversational",
"en",
"dataset:HuggingFaceM4/the_cauldron",
"dataset:HuggingFaceM4/Docmatix",
"dataset:lmms-lab/LLaVA-OneVision-Data",
"dataset:lmms-lab/M4-Instruct-Data",
"dataset:HuggingFaceFV/finevideo",
"dataset:MAmmoTH-VL/MAmmoTH-VL-Instruct-12M",
"dataset:lmms-lab/LLaVA-Video-178K",
"dataset:orrzohar/Video-STaR",
"dataset:Mutonix/Vript",
"dataset:TIGER-Lab/VISTA-400K",
"dataset:Enxin/MovieChat-1K_train",
"dataset:ShareGPT4Video/ShareGPT4Video",
"base_model:HuggingFaceTB/SmolVLM-500M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolVLM-500M-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-01T17:50:26Z |
---
library_name: transformers
license: apache-2.0
datasets:
- HuggingFaceM4/the_cauldron
- HuggingFaceM4/Docmatix
- lmms-lab/LLaVA-OneVision-Data
- lmms-lab/M4-Instruct-Data
- HuggingFaceFV/finevideo
- MAmmoTH-VL/MAmmoTH-VL-Instruct-12M
- lmms-lab/LLaVA-Video-178K
- orrzohar/Video-STaR
- Mutonix/Vript
- TIGER-Lab/VISTA-400K
- Enxin/MovieChat-1K_train
- ShareGPT4Video/ShareGPT4Video
pipeline_tag: image-text-to-text
language:
- en
base_model:
- HuggingFaceTB/SmolVLM-500M-Instruct
tags:
- mlx
---
# EZCon/SmolVLM2-500M-Video-Instruct-mlx
This model was converted to MLX format from [`HuggingFaceTB/SmolVLM2-500M-Video-Instruct`](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) using mlx-vlm version **0.3.2**.
Refer to the [original model card](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model EZCon/SmolVLM2-500M-Video-Instruct-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
EZCon/SmolVLM2-2.2B-Instruct-4bit-mlx
|
EZCon
| 2025-08-19T16:54:17Z | 21 | 0 |
transformers
|
[
"transformers",
"safetensors",
"smolvlm",
"image-text-to-text",
"video-text-to-text",
"mlx",
"conversational",
"en",
"dataset:HuggingFaceM4/the_cauldron",
"dataset:HuggingFaceM4/Docmatix",
"dataset:lmms-lab/LLaVA-OneVision-Data",
"dataset:lmms-lab/M4-Instruct-Data",
"dataset:HuggingFaceFV/finevideo",
"dataset:MAmmoTH-VL/MAmmoTH-VL-Instruct-12M",
"dataset:lmms-lab/LLaVA-Video-178K",
"dataset:orrzohar/Video-STaR",
"dataset:Mutonix/Vript",
"dataset:TIGER-Lab/VISTA-400K",
"dataset:Enxin/MovieChat-1K_train",
"dataset:ShareGPT4Video/ShareGPT4Video",
"base_model:HuggingFaceTB/SmolVLM-Instruct",
"base_model:quantized:HuggingFaceTB/SmolVLM-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"4-bit",
"region:us"
] |
image-text-to-text
| 2025-08-01T02:41:44Z |
---
library_name: transformers
license: apache-2.0
datasets:
- HuggingFaceM4/the_cauldron
- HuggingFaceM4/Docmatix
- lmms-lab/LLaVA-OneVision-Data
- lmms-lab/M4-Instruct-Data
- HuggingFaceFV/finevideo
- MAmmoTH-VL/MAmmoTH-VL-Instruct-12M
- lmms-lab/LLaVA-Video-178K
- orrzohar/Video-STaR
- Mutonix/Vript
- TIGER-Lab/VISTA-400K
- Enxin/MovieChat-1K_train
- ShareGPT4Video/ShareGPT4Video
pipeline_tag: image-text-to-text
tags:
- video-text-to-text
- mlx
language:
- en
base_model:
- HuggingFaceTB/SmolVLM-Instruct
---
# EZCon/SmolVLM2-2.2B-Instruct-4bit-mlx
This model was converted to MLX format from [`HuggingFaceTB/SmolVLM2-2.2B-Instruct`](https://huggingface.co/HuggingFaceTB/SmolVLM2-2.2B-Instruct) using mlx-vlm version **0.3.2**.
Refer to the [original model card](https://huggingface.co/HuggingFaceTB/SmolVLM2-2.2B-Instruct) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model EZCon/SmolVLM2-2.2B-Instruct-4bit-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
nabilwalidrafi/medgemma-skinlesion-rafi-4-4-augdynamic1
|
nabilwalidrafi
| 2025-08-19T16:53:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T12:27:04Z |
---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-skinlesion-rafi-4-4-augdynamic1
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for medgemma-skinlesion-rafi-4-4-augdynamic1
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nabilwalidrafi/medgemma-skinlesion-rafi-4-4-augdynamic1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
kudozz/t5-citation-agent
|
kudozz
| 2025-08-19T16:53:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-08-16T07:52:08Z |
---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-citation-agent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-citation-agent
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2487
- Rouge1: 37.54
- Rouge2: 32.5
- Rougel: 37.23
- Rougelsum: 37.19
## Model description
More information needed
## Intended uses & limitations
More information needed
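As a minimal sketch (assuming the fine-tune loads with the standard `text2text-generation` pipeline; the expected input format, e.g. any task prefix, is not documented here):
```python
from transformers import pipeline

# Hedged sketch: the expected input formatting (task prefix, raw citation
# text, etc.) is not documented in this card.
agent = pipeline("text2text-generation", model="kudozz/t5-citation-agent")
print(agent("Smith, J. (2020). Deep Learning for Citations. ML Journal.",
            max_new_tokens=64))
```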
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: ADAMW_TORCH_FUSED with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 0.4929 | 2.0 | 500 | 0.3395 | 30.42 | 24.44 | 29.83 | 29.85 |
| 0.3738 | 4.0 | 1000 | 0.2487 | 37.54 | 32.5 | 37.23 | 37.19 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
kokoblueao/blockassist-bc-trotting_bipedal_cobra_1755622193
|
kokoblueao
| 2025-08-19T16:51:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"trotting bipedal cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T16:51:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- trotting bipedal cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Prathyusha101/tldr-ppco-g0p95-l1p0
|
Prathyusha101
| 2025-08-19T16:44:46Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"dataset:trl-internal-testing/tldr-preference-sft-trl-style",
"arxiv:1909.08593",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T11:17:59Z |
---
datasets: trl-internal-testing/tldr-preference-sft-trl-style
library_name: transformers
model_name: tldr-ppco-g0p95-l1p0
tags:
- generated_from_trainer
licence: license
---
# Model Card for tldr-ppco-g0p95-l1p0
This model is a fine-tuned version of an unspecified base model, trained on the [trl-internal-testing/tldr-preference-sft-trl-style](https://huggingface.co/datasets/trl-internal-testing/tldr-preference-sft-trl-style) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Prathyusha101/tldr-ppco-g0p95-l1p0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prathyusha1-the-university-of-texas-at-austin/huggingface/runs/poeo9cdz)
This model was trained with PPO, a method introduced in [Fine-Tuning Language Models from Human Preferences](https://huggingface.co/papers/1909.08593).
### Framework versions
- TRL: 0.15.0.dev0
- Transformers: 4.53.1
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citations
Cite PPO as:
```bibtex
@article{mziegler2019fine-tuning,
title = {{Fine-Tuning Language Models from Human Preferences}},
author = {Daniel M. Ziegler and Nisan Stiennon and Jeffrey Wu and Tom B. Brown and Alec Radford and Dario Amodei and Paul F. Christiano and Geoffrey Irving},
year = 2019,
eprint = {arXiv:1909.08593}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755620608
|
Sayemahsjn
| 2025-08-19T16:43:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T16:43:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kokoblueao/blockassist-bc-trotting_bipedal_cobra_1755621669
|
kokoblueao
| 2025-08-19T16:42:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"trotting bipedal cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T16:42:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- trotting bipedal cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VER-milica-y-angel-david-debut-video/video.filtrado.milica.y.angel.david.debut.clip.viral.completo.en.twitter.y.telegram
|
VER-milica-y-angel-david-debut-video
| 2025-08-19T16:40:14Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T16:39:51Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/3ckkv2u7?viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
phospho-app/Deimos252-ACT_BBOX-Light_dataset_deimos-6r50d
|
phospho-app
| 2025-08-19T16:40:13Z | 0 | 0 |
phosphobot
|
[
"phosphobot",
"safetensors",
"act",
"robotics",
"dataset:phospho-app/Light_dataset_deimos_bboxes",
"region:us"
] |
robotics
| 2025-08-19T16:15:06Z |
---
datasets: phospho-app/Light_dataset_deimos_bboxes
library_name: phosphobot
pipeline_tag: robotics
model_name: act
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful, try it out on your robot!
## Training parameters:
- **Dataset**: [phospho-app/Light_dataset_deimos_bboxes](https://huggingface.co/datasets/phospho-app/Light_dataset_deimos_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
joackimagno/MASID-v1
|
joackimagno
| 2025-08-19T16:39:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"en",
"base_model:joackimagno/Qwen-2.5-General-Recipe-Generation",
"base_model:finetune:joackimagno/Qwen-2.5-General-Recipe-Generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T16:27:29Z |
---
base_model: joackimagno/Qwen-2.5-General-Recipe-Generation
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** joackimagno
- **License:** apache-2.0
- **Finetuned from model:** joackimagno/Qwen-2.5-General-Recipe-Generation
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
fengpeisheng1/mergekit-slerp-ariyvyf-IQ4_NL-GGUF
|
fengpeisheng1
| 2025-08-19T16:38:28Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:fengpeisheng1/mergekit-slerp-ariyvyf",
"base_model:quantized:fengpeisheng1/mergekit-slerp-ariyvyf",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-19T16:30:50Z |
---
base_model: fengpeisheng1/mergekit-slerp-ariyvyf
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# fengpeisheng1/mergekit-slerp-ariyvyf-IQ4_NL-GGUF
This model was converted to GGUF format from [`fengpeisheng1/mergekit-slerp-ariyvyf`](https://huggingface.co/fengpeisheng1/mergekit-slerp-ariyvyf) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/fengpeisheng1/mergekit-slerp-ariyvyf) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo fengpeisheng1/mergekit-slerp-ariyvyf-IQ4_NL-GGUF --hf-file mergekit-slerp-ariyvyf-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo fengpeisheng1/mergekit-slerp-ariyvyf-IQ4_NL-GGUF --hf-file mergekit-slerp-ariyvyf-iq4_nl-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo fengpeisheng1/mergekit-slerp-ariyvyf-IQ4_NL-GGUF --hf-file mergekit-slerp-ariyvyf-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo fengpeisheng1/mergekit-slerp-ariyvyf-IQ4_NL-GGUF --hf-file mergekit-slerp-ariyvyf-iq4_nl-imat.gguf -c 2048
```
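Alternatively, a sketch using the llama-cpp-python bindings (assumes `llama-cpp-python` and `huggingface-hub` are installed):
```python
from llama_cpp import Llama

# Sketch: pulls the GGUF file from the Hub via llama-cpp-python's helper.
llm = Llama.from_pretrained(
    repo_id="fengpeisheng1/mergekit-slerp-ariyvyf-IQ4_NL-GGUF",
    filename="mergekit-slerp-ariyvyf-iq4_nl-imat.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```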
|
mohan1201/gemma-code-explainer
|
mohan1201
| 2025-08-19T16:38:05Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:google/gemma-2b-it",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:google/gemma-2b-it",
"license:gemma",
"region:us"
] |
text-generation
| 2025-08-19T16:38:01Z |
---
library_name: peft
license: gemma
base_model: google/gemma-2b-it
tags:
- base_model:adapter:google/gemma-2b-it
- lora
- transformers
pipeline_tag: text-generation
model-index:
- name: gemma-code-explainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-code-explainer
This model is a fine-tuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
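Since this is a LoRA adapter (see the `peft` metadata above), here is a minimal loading sketch, assuming access to the gated `google/gemma-2b-it` base model; the training prompt format is not documented here:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: loads the LoRA adapter on top of the gemma-2b-it base model.
base = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")
model = PeftModel.from_pretrained(base, "mohan1201/gemma-code-explainer")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")

inputs = tokenizer("Explain this code:\n\nprint(sum(range(10)))", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```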
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: PAGED_ADAMW_8BIT with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 150
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.17.0
- Transformers 4.55.2
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.2
|
exala/db_auto_6.1.2e
|
exala
| 2025-08-19T16:37:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T16:37:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
concept-unlearning/Meta-Llama-3-8B_ft_lora_all_novels_v4_ft_npo_gdr_lora_positive_dataset_v4
|
concept-unlearning
| 2025-08-19T16:37:02Z | 1 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-08T12:21:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
intimo-video-de-lalama-y-snayder-abigail/filtrado.video.de.abigail.lalama.y.snayder.influencer.viral
|
intimo-video-de-lalama-y-snayder-abigail
| 2025-08-19T16:36:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T16:36:04Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/3ckkv2u7?viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
OpenBuddy/SimpleChat-4B-V1
|
OpenBuddy
| 2025-08-19T16:36:08Z | 0 | 0 | null |
[
"safetensors",
"qwen3",
"text-generation",
"conversational",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"fi",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"region:us"
] |
text-generation
| 2025-08-19T16:23:21Z |
---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- fi
tags:
- qwen3
pipeline_tag: text-generation
base_model: Qwen/Qwen3-4B
---
### ✨ About the SimpleChat Model Series
The SimpleChat series represents our new exploration into Non-Chain-of-Thought (Non-CoT) models. Its main features are:
* **Distinct Chat Style:**
* Designed to be concise, rational, and empathetic.
* Specifically built for casual, everyday conversations.
* **Enhanced Creativity:**
* Boosts the creativity of its generated content and its capacity for emotional understanding.
* This is achieved by distilling knowledge from advanced models, including K2.
* **Efficient Reasoning within a Non-CoT Framework:**
* Delivers the faster response times of a Non-CoT model while preserving strong reasoning skills.
* It retains this ability because it was trained on CoT models before being transitioned to a Non-CoT framework, allowing it to think through complex problems.
* **Known Trade-off:**
* Compared to models that specialize in Chain-of-Thought, it may not perform as strongly on mathematical tasks.
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Model Info
Context Length: **40K** Tokens
License: Apache 2.0
Optimizer: **Muon + AdamW**
# Prompt Format
This model supports a **Qwen3-like** prompt format, with the following system prompt recommended:
```
You(assistant) are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human(user).
```
Raw prompt template:
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{history_input}<|im_end|>
<|im_start|>assistant
{history_output}<|im_end|>
<|im_start|>user
{current_input}<|im_end|>
<|im_start|>assistant
```
(There should be a `\n` at the end of the prompt.)
You may want to use `vllm` to deploy an OpenAI-like API service. For more information, please refer to the [vllm documentation](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html).
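For example, a minimal offline-inference sketch with vLLM's Python API (hardware permitting; `vllm serve OpenBuddy/SimpleChat-4B-V1` is the usual entry point for the OpenAI-compatible server):
```python
from vllm import LLM, SamplingParams

# Sketch: offline chat inference; vLLM applies the model's chat template.
llm = LLM(model="OpenBuddy/SimpleChat-4B-V1")
messages = [
    {"role": "system", "content": "You(assistant) are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human(user)."},
    {"role": "user", "content": "Hi Buddy, how are you today?"},
]
outputs = llm.chat(messages, SamplingParams(temperature=0.7, max_tokens=256))
print(outputs[0].outputs[0].text)
```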
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## 免责声明
所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。
OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。
使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。
|
dgambettaphd/M_mis_run2_gen1_WXS_doc1000_synt64_lr1e-04_acm_LANG
|
dgambettaphd
| 2025-08-19T16:34:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T16:34:35Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/gemma3-4b-skin-cancer-classifier-GGUF
|
mradermacher
| 2025-08-19T16:33:58Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:doriankim/gemma3-4b-skin-cancer-classifier",
"base_model:quantized:doriankim/gemma3-4b-skin-cancer-classifier",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-19T16:17:31Z |
---
base_model: doriankim/gemma3-4b-skin-cancer-classifier
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/doriankim/gemma3-4b-skin-cancer-classifier
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#gemma3-4b-skin-cancer-classifier-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
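For instance, a minimal sketch using `llama-cpp-python` (an assumption: any GGUF-capable runtime such as llama.cpp works equally well; the quant file name comes from the table below and the prompt is illustrative):
```python
from llama_cpp import Llama

llm = Llama(
    model_path="gemma3-4b-skin-cancer-classifier.Q4_K_M.gguf",
    n_ctx=4096,  # context window; adjust to your hardware
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the ABCDE rule for melanoma."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```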
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.mmproj-Q8_0.gguf) | mmproj-Q8_0 | 0.7 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.mmproj-f16.gguf) | mmproj-f16 | 1.0 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.Q6_K.gguf) | Q6_K | 3.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.f16.gguf) | f16 | 7.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
AnonymousCS/xlmr_norwegian_immigration2
|
AnonymousCS
| 2025-08-19T16:32:46Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T16:23:06Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_norwegian_immigration2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_norwegian_immigration2
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2207
- Accuracy: 0.9231
- 1-f1: 0.8810
- 1-recall: 0.8605
- 1-precision: 0.9024
- Balanced Acc: 0.9072
## Model description
More information needed
## Intended uses & limitations
More information needed
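A minimal inference sketch (assuming the standard 🤗 `pipeline` API; the example sentence is illustrative and label names depend on the training setup):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="AnonymousCS/xlmr_norwegian_immigration2",
)
print(classifier("Innvandring er et viktig tema i norsk politikk."))
# e.g. [{'label': 'LABEL_1', 'score': 0.97}]; labels come from the checkpoint config
```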
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.6746 | 1.0 | 5 | 0.6397 | 0.6692 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.5485 | 2.0 | 10 | 0.6313 | 0.6692 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.6165 | 3.0 | 15 | 0.6220 | 0.6692 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.7306 | 4.0 | 20 | 0.6108 | 0.6692 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.604 | 5.0 | 25 | 0.5968 | 0.6692 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.5031 | 6.0 | 30 | 0.5714 | 0.6692 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.5496 | 7.0 | 35 | 0.5302 | 0.6692 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.5351 | 8.0 | 40 | 0.4655 | 0.7769 | 0.4912 | 0.3256 | 1.0 | 0.6628 |
| 0.4308 | 9.0 | 45 | 0.3942 | 0.8538 | 0.7246 | 0.5814 | 0.9615 | 0.7850 |
| 0.3575 | 10.0 | 50 | 0.3077 | 0.9231 | 0.8780 | 0.8372 | 0.9231 | 0.9014 |
| 0.2808 | 11.0 | 55 | 0.2337 | 0.9308 | 0.8861 | 0.8140 | 0.9722 | 0.9012 |
| 0.2272 | 12.0 | 60 | 0.2053 | 0.9308 | 0.8889 | 0.8372 | 0.9474 | 0.9071 |
| 0.2462 | 13.0 | 65 | 0.2418 | 0.9 | 0.8539 | 0.8837 | 0.8261 | 0.8959 |
| 0.1188 | 14.0 | 70 | 0.2207 | 0.9231 | 0.8810 | 0.8605 | 0.9024 | 0.9072 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
ChenWu98/statement_deepseek_v1.5_sft_cluster_split_0
|
ChenWu98
| 2025-08-19T16:30:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:deepseek-ai/DeepSeek-Prover-V1.5-SFT",
"base_model:finetune:deepseek-ai/DeepSeek-Prover-V1.5-SFT",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T16:20:56Z |
---
base_model: deepseek-ai/DeepSeek-Prover-V1.5-SFT
library_name: transformers
model_name: statement_deepseek_v1.5_sft_cluster_split_0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for statement_deepseek_v1.5_sft_cluster_split_0
This model is a fine-tuned version of [deepseek-ai/DeepSeek-Prover-V1.5-SFT](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V1.5-SFT).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/statement_deepseek_v1.5_sft_cluster_split_0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/goggpbak)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
kyoukarawattsu/blockassist-bc-tenacious_arctic_manatee_1755620807
|
kyoukarawattsu
| 2025-08-19T16:28:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tenacious arctic manatee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T16:28:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tenacious arctic manatee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755619195
|
ihsanridzi
| 2025-08-19T16:26:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T16:26:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
grgazziz/mosquito
|
grgazziz
| 2025-08-19T16:22:41Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-08-19T16:21:02Z |
---
license: other
license_name: other
license_link: LICENSE
---
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755618948
|
lisaozill03
| 2025-08-19T16:22:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T16:22:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Elizavr/blockassist-bc-reclusive_shaggy_bee_1755620494
|
Elizavr
| 2025-08-19T16:22:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"reclusive shaggy bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T16:21:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- reclusive shaggy bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
arshal13/echomimic-models
|
arshal13
| 2025-08-19T16:21:24Z | 0 | 0 | null |
[
"dataset:fka/awesome-chatgpt-prompts",
"base_model:openai/gpt-oss-120b",
"base_model:finetune:openai/gpt-oss-120b",
"license:apache-2.0",
"region:us"
] | null | 2025-08-19T16:15:45Z |
---
license: apache-2.0
datasets:
- fka/awesome-chatgpt-prompts
base_model:
- openai/gpt-oss-120b
---
|
oceanfish/intent_classify_slot
|
oceanfish
| 2025-08-19T16:20:03Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"region:us"
] | null | 2025-08-19T16:15:20Z |
---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
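A minimal loading sketch, assuming this repo is a standard PEFT (LoRA) adapter on the base model listed above (the prompt and generation settings are illustrative):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
model = PeftModel.from_pretrained(base, "oceanfish/intent_classify_slot")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

prompt = "Book a flight from Shanghai to Beijing tomorrow morning."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```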
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
thanobidex/blockassist-bc-colorful_shiny_hare_1755618622
|
thanobidex
| 2025-08-19T16:17:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful shiny hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T16:17:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
haji80mr-uoft/semi-wotype-Llama-tuned-Lora-only-V0
|
haji80mr-uoft
| 2025-08-19T16:16:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T16:16:08Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** haji80mr-uoft
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
chansung/Gemma2-2B-CCRL-CUR-EDGE-ONLY-1E
|
chansung
| 2025-08-19T16:14:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:chansung/verifiable-coding-problems-python-v2",
"arxiv:2402.03300",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T06:59:03Z |
---
base_model: google/gemma-2-2b-it
datasets: chansung/verifiable-coding-problems-python-v2
library_name: transformers
model_name: Gemma2-2B-CCRL-CUR-EDGE-ONLY-1E
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Gemma2-2B-CCRL-CUR-EDGE-ONLY-1E
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it) on the [chansung/verifiable-coding-problems-python-v2](https://huggingface.co/datasets/chansung/verifiable-coding-problems-python-v2) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chansung/Gemma2-2B-CCRL-CUR-EDGE-ONLY-1E", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chansung18/huggingface/runs/6a4vn02u)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Mostefa-Terbeche/diabetic-retinopathy-aptos-resnet50-advanced-20250618-162329
|
Mostefa-Terbeche
| 2025-08-19T16:13:34Z | 0 | 0 | null |
[
"diabetic-retinopathy",
"medical-imaging",
"pytorch",
"computer-vision",
"retinal-imaging",
"dataset:aptos",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2025-08-19T15:23:50Z |
---
license: apache-2.0
tags:
- diabetic-retinopathy
- medical-imaging
- pytorch
- computer-vision
- retinal-imaging
datasets:
- aptos
metrics:
- accuracy
- quadratic-kappa
- auc
model-index:
- name: aptos_resnet50_advanced
results:
- task:
type: image-classification
name: Diabetic Retinopathy Classification
dataset:
type: aptos
name: APTOS
metrics:
- type: accuracy
value: 0.7759562841530054
- type: quadratic-kappa
value: 0.8835158192633705
---
# Diabetic Retinopathy Classification Model
## Model Description
This model is trained for diabetic retinopathy classification using the resnet50 architecture on the aptos dataset with advanced preprocessing.
## Model Details
- **Architecture**: resnet50
- **Dataset**: aptos
- **Preprocessing**: advanced
- **Training Date**: 20250618-162329
- **Task**: 5-class diabetic retinopathy grading (0-4)
- **Directory**: aptos_resnet50_20250618-162329_new
## Performance
- **Test Accuracy**: 0.7759562841530054
- **Test Quadratic Kappa**: 0.8835158192633705
- **Validation Kappa**: 0.8835158192633705
## Usage
```python
import torch
from huggingface_hub import hf_hub_download
# Download model
model_path = hf_hub_download(
repo_id="your-username/diabetic-retinopathy-aptos-resnet50-advanced",
filename="model_best.pt"
)
# Load model
model = torch.load(model_path, map_location='cpu')
```
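A hedged inference sketch (the preprocessing is an assumption, ImageNet-style normalization at 224x224, so verify it against the actual training pipeline; `torch.load` is assumed to return the full model object as in the snippet above):
```python
import torch
from PIL import Image
from torchvision import transforms

# Assumed preprocessing: ImageNet normalization at 224x224 (verify!)
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model.eval()  # `model` as loaded above
image = preprocess(Image.open("fundus.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    grade = model(image).argmax(dim=1).item()
print(f"Predicted DR grade: {grade}")  # 0-4, see the class list below
```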
## Classes
- 0: No DR (No diabetic retinopathy)
- 1: Mild DR (Mild non-proliferative diabetic retinopathy)
- 2: Moderate DR (Moderate non-proliferative diabetic retinopathy)
- 3: Severe DR (Severe non-proliferative diabetic retinopathy)
- 4: Proliferative DR (Proliferative diabetic retinopathy)
## Citation
If you use this model, please cite your research paper/thesis.
|
schirrmacher/malwi
|
schirrmacher
| 2025-08-19T16:10:50Z | 2,023 | 0 | null |
[
"safetensors",
"distilbert",
"arxiv:2404.04991",
"arxiv:2504.14886",
"license:mit",
"region:us"
] | null | 2025-05-09T12:54:09Z |
---
license: mit
---
# malwi - AI Python Malware Scanner
<img src="malwi-logo.png" alt="Logo">
## malwi specializes in finding malware
### Key Features
- 🛡️ **AI-Powered Python Malware Detection**: Leverages advanced AI to identify malicious code in Python projects with high accuracy.
- ⚡ **Lightning-Fast Codebase Scanning**: Scans entire repositories in seconds, so you can focus on development—not security worries.
- 🔒 **100% Offline & Private**: Your code never leaves your machine. Full control, zero data exposure.
- 💰 **Free & Open-Source**: No hidden costs. Built on transparent research and openly available data.
- 🇪🇺 **Developed in the EU**: Committed to open-source principles and European data standards.
### 1) Install
```
pip install --user malwi
```
### 2) Run
```bash
malwi scan examples/malicious
```
### 3) Evaluate: a [recent zero-day](https://socket.dev/blog/malicious-pypi-package-targets-discord-developers-with-RAT) detected with high confidence
```
__ __
.--------.---.-| .--.--.--|__|
| | _ | | | | | |
|__|__|__|___._|__|________|__|
AI Python Malware Scanner
- target: examples
- seconds: 1.87
- files: 14
├── scanned: 4 (.py)
├── skipped: 10 (.cfg, .md, .toml, .txt)
└── suspicious:
├── examples/malicious/discordpydebug-0.0.4/setup.py
│ └── <module>
│ ├── archive compression
│ └── package installation execution
└── examples/malicious/discordpydebug-0.0.4/src/discordpydebug/__init__.py
├── <module>
│ ├── process management
│ ├── deserialization
│ ├── system interaction
│ └── user io
├── run
│ └── fs linking
├── debug
│ ├── fs linking
│ └── archive compression
└── runcommand
└── process management
=> 👹 malicious 0.98
```
## PyPI Package Scanning
malwi can directly scan PyPI packages without executing the malicious logic that is typically placed in `setup.py` or `__init__.py` files:
```bash
malwi pypi requests
```
```
__ __
.--------.---.-| .--.--.--|__|
| | _ | | | | | |
|__|__|__|___._|__|________|__|
AI Python Malware Scanner
- target: downloads/requests-2.32.4.tar
- seconds: 3.10
- files: 84
├── scanned: 34
└── skipped: 50
=> 🟢 good
```
## Python API
malwi provides a comprehensive Python API for integrating malware detection into your applications.
### Quick Start
```python
import malwi
report = malwi.MalwiReport.create(input_path="suspicious_file.py")
for obj in report.malicious_objects:
print(f"File: {obj.file_path}")
```
### `MalwiReport`
```python
MalwiReport.create(
input_path, # str or Path - file/directory to scan
accepted_extensions=None, # List[str] - file extensions to scan (e.g., ['py', 'js'])
silent=False, # bool - suppress progress messages
malicious_threshold=0.7, # float - threshold for malicious classification (0.0-1.0)
on_finding=None # callable - callback when malicious objects found
) -> MalwiReport # Returns: MalwiReport instance with scan results
```
```python
import malwi
report = malwi.MalwiReport.create("suspicious_directory/")
# Properties
report.malicious # bool: True if malicious objects detected
report.confidence # float: Overall confidence score (0.0-1.0)
report.duration # float: Scan duration in seconds
report.all_objects # List[MalwiObject]: All analyzed code objects
report.malicious_objects # List[MalwiObject]: Objects exceeding threshold
report.threshold # float: Maliciousness threshold used (0.0-1.0)
report.all_files # List[Path]: All files found in input path
report.skipped_files # List[Path]: Files skipped (wrong extension)
report.processed_files # int: Number of files successfully processed
report.activities # List[str]: Suspicious activities detected
report.input_path # str: Original input path scanned
report.start_time # str: ISO 8601 timestamp when scan started
report.all_file_types # List[str]: All file extensions found
report.version # str: Malwi version with model hash
# Methods
report.to_demo_text() # str: Human-readable tree summary
report.to_json() # str: JSON formatted report
report.to_yaml() # str: YAML formatted report
report.to_markdown() # str: Markdown formatted report
# Pre-load models to avoid delay on first prediction
malwi.MalwiReport.load_models_into_memory()
```
### `MalwiObject`
```python
obj = report.all_objects[0]
# Core properties
obj.name # str: Function/class/module name
obj.file_path # str: Path to source file
obj.language # str: Programming language ('python'/'javascript')
obj.maliciousness # float|None: ML confidence score (0.0-1.0)
obj.warnings # List[str]: Compilation warnings/errors
# Source code and AST compilation
obj.file_source_code # str: Complete content of source file
obj.source_code # str|None: Extracted source for this specific object
obj.byte_code # List[Instruction]|None: Compiled AST bytecode
obj.location # Tuple[int,int]|None: Start and end line numbers
obj.embedding_count # int: Number of DistilBERT tokens (cached)
# Analysis methods
obj.predict() # dict: Run ML prediction and update maliciousness
obj.to_tokens() # List[str]: Extract tokens for analysis
obj.to_token_string() # str: Space-separated token string
obj.to_string() # str: Bytecode as readable string
obj.to_hash() # str: SHA256 hash of bytecode
obj.to_dict() # dict: Serializable representation
obj.to_yaml() # str: YAML formatted output
obj.to_json() # str: JSON formatted output
# Class methods
MalwiObject.all_tokens(language="python") # List[str]: All possible tokens
```
## Why malwi?
Malicious actors are increasingly [targeting open-source projects](https://arxiv.org/pdf/2404.04991), introducing packages designed to compromise security.
Common malicious behaviors include:
- **Data exfiltration**: Theft of sensitive information such as credentials, API keys, or user data.
- **Backdoors**: Unauthorized remote access to systems, enabling attackers to exploit vulnerabilities.
- **Destructive actions**: Deliberate sabotage, including file deletion, database corruption, or application disruption.
## How does it work?
malwi is based on the design of [_Zero Day Malware Detection with Alpha: Fast DBI with Transformer Models for Real World Application_ (2025)](https://arxiv.org/pdf/2504.14886v1).
Imagine there is a function like:
```python
def runcommand(value):
output = subprocess.run(value, shell=True, capture_output=True)
return [output.stdout, output.stderr]
```
### 1. Files are parsed into an Abstract Syntax Tree with [Tree-sitter](https://tree-sitter.github.io/tree-sitter/index.html)
```
module [0, 0] - [3, 0]
function_definition [0, 0] - [2, 41]
name: identifier [0, 4] - [0, 14]
parameters: parameters [0, 14] - [0, 21]
identifier [0, 15] - [0, 20]
...
```
### 2. The AST is transpiled to dummy bytecode
The bytecode is enhanced with security-related instructions.
```
TARGETED_FILE PUSH_NULL LOAD_GLOBAL PROCESS_MANAGEMENT LOAD_ATTR run LOAD_PARAM value LOAD_CONST BOOLEAN LOAD_CONST BOOLEAN KW_NAMES shell capture_output CALL STRING_VERSION STORE_GLOBAL output LOAD_GLOBAL output LOAD_ATTR stdout LOAD_GLOBAL output LOAD_ATTR stderr BUILD_LIST STRING_VERSION RETURN_VALUE
```
### 3. The bytecode is fed into a pre-trained [DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)
A DistilBERT model trained on [malware-samples](https://github.com/schirrmacher/malwi-samples) is used to identify suspicious code patterns.
```
=> Maliciousness: 0.98
```
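Tying the three steps together with the documented Python API from above (the path and scores are illustrative):
```python
import malwi

report = malwi.MalwiReport.create("examples/malicious")
obj = report.malicious_objects[0]
print(obj.to_token_string())  # the dummy bytecode tokens fed to DistilBERT
obj.predict()                 # runs the DistilBERT classifier
print(obj.maliciousness)      # e.g. 0.98
```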
## Benchmarks?
```
training_loss: 0.0110
epochs_completed: 3.0000
original_train_samples: 598540.0000
windowed_train_features: 831865.0000
original_validation_samples: 149636.0000
windowed_validation_features: 204781.0000
benign_samples_used: 734930.0000
malicious_samples_used: 13246.0000
benign_to_malicious_ratio: 60.0000
vocab_size: 30522.0000
max_length: 512.0000
window_stride: 128.0000
batch_size: 16.0000
eval_loss: 0.0107
eval_accuracy: 0.9980
eval_f1: 0.9521
eval_precision: 0.9832
eval_recall: 0.9229
eval_runtime: 115.5982
eval_samples_per_second: 1771.4900
eval_steps_per_second: 110.7200
epoch: 3.0000
```
## Contributing & Support
- Found a bug or have a feature request? [Open an issue](https://github.com/schirrmacher/malwi/issues).
- Do you have access to malicious packages in Rust, Go, or other languages? [Contact via GitHub profile](https://github.com/schirrmacher).
- Struggling with false-positive findings? [Create a Pull-Request](https://github.com/schirrmacher/malwi-samples/pulls).
## Research
### Prerequisites
1. **Package Manager**: Install [uv](https://docs.astral.sh/uv/) for fast Python dependency management
2. **Training Data**: The research CLI will automatically clone [malwi-samples](https://github.com/schirrmacher/malwi-samples) when needed
### Quick Start
```bash
# Install dependencies
uv sync
# Run tests
uv run pytest tests
# Train a model from scratch (full pipeline with automatic data download)
./research download preprocess train
```
#### Individual Pipeline Steps
```bash
# 1. Download training data (clones malwi-samples + downloads repositories)
./research download
# 2. Data preprocessing only (parallel processing, ~4 min on 32 cores)
./research preprocess --language python
# 3. Model training only (tokenizer + DistilBERT, ~40 minutes on NVIDIA RTX 4090)
./research train
```
## Limitations
The malicious dataset includes some boilerplate functions, such as setup functions, which can also appear in benign code. These cause false positives during scans. The goal is to triage and reduce such false positives to improve malwi's accuracy.
## What's next?
The first iteration focuses on **maliciousness of Python source code**.
Future iterations will cover malware scanning for more languages (JavaScript, Rust, Go) and more formats (binaries, logs).
|
lqpl/blockassist-bc-hairy_insectivorous_antelope_1755619661
|
lqpl
| 2025-08-19T16:09:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy insectivorous antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T16:09:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755618074
|
helmutsukocok
| 2025-08-19T16:08:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T16:08:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/git-commit-message-splitter-Qwen3-4B-i1-GGUF
|
mradermacher
| 2025-08-19T16:08:07Z | 0 | 0 | null |
[
"gguf",
"region:us"
] | null | 2025-08-19T16:08:01Z |
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/Tavernari/git-commit-message-splitter-Qwen3-4B
|
mehdirafiei/bert_resume_category_prediction
|
mehdirafiei
| 2025-08-19T16:07:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T16:07:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
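A minimal inference sketch (assuming standard 🤗 sequence-classification loading; the example text is illustrative and label names come from the checkpoint config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "mehdirafiei/bert_resume_category_prediction"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

text = "Experienced data engineer skilled in Spark, Airflow, and AWS."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```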
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AnonymousCS/xlmr_finnish_immigration2
|
AnonymousCS
| 2025-08-19T16:04:23Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T16:00:05Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_finnish_immigration2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_finnish_immigration2
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1698
- Accuracy: 0.9538
- 1-f1: 0.9318
- 1-recall: 0.9535
- 1-precision: 0.9111
- Balanced Acc: 0.9538
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
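For reference, a sketch of how these settings would look as 🤗 `TrainingArguments` (illustrative, not the exact training script; `output_dir` is an assumption):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="xlmr_finnish_immigration2",  # assumption
    learning_rate=1e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed precision
)
```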
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.5778 | 1.0 | 5 | 0.2275 | 0.9154 | 0.8571 | 0.7674 | 0.9706 | 0.8780 |
| 0.1219 | 2.0 | 10 | 0.3406 | 0.9385 | 0.9130 | 0.9767 | 0.8571 | 0.9481 |
| 0.2571 | 3.0 | 15 | 0.2051 | 0.9462 | 0.9213 | 0.9535 | 0.8913 | 0.9480 |
| 0.1514 | 4.0 | 20 | 0.1689 | 0.9538 | 0.9318 | 0.9535 | 0.9111 | 0.9538 |
| 0.1368 | 5.0 | 25 | 0.1816 | 0.9462 | 0.9231 | 0.9767 | 0.875 | 0.9539 |
| 0.1073 | 6.0 | 30 | 0.1698 | 0.9538 | 0.9318 | 0.9535 | 0.9111 | 0.9538 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
rambetiko/blockassist-bc-soft_lanky_marmot_1755618848
|
rambetiko
| 2025-08-19T16:00:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"soft lanky marmot",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:59:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- soft lanky marmot
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
annasoli/Qwen2.5-14B_SVt_l24_lr2e-4_a256_2E_technical-engineering2_KLBPA_5e6
|
annasoli
| 2025-08-19T15:59:44Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T14:51:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755617165
|
ihsanridzi
| 2025-08-19T15:53:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:53:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755616921
|
lisaozill03
| 2025-08-19T15:49:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:48:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jacoboss/MyGemmaNPC
|
jacoboss
| 2025-08-19T15:48:33Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T21:28:50Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jacoboss/MyGemmaNPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
concept-unlearning/Meta-Llama-3-8B_ft_lora_all_novels_v4_ft_npo_gdr_lora_positive_dataset_v2
|
concept-unlearning
| 2025-08-19T15:48:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T15:46:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Elizavr/blockassist-bc-reclusive_shaggy_bee_1755618244
|
Elizavr
| 2025-08-19T15:44:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"reclusive shaggy bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:44:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- reclusive shaggy bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
unitova/blockassist-bc-zealous_sneaky_raven_1755616194
|
unitova
| 2025-08-19T15:37:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:37:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Christopher-Lim/Butter
|
Christopher-Lim
| 2025-08-19T15:37:35Z | 0 | 0 | null |
[
"object-detection",
"dataset:rafaelpadilla/coco2017",
"dataset:nateraw/kitti",
"dataset:Chris1/cityscapes",
"dataset:dgural/bdd100k",
"arxiv:2507.13373",
"license:agpl-3.0",
"region:us"
] |
object-detection
| 2025-08-19T15:09:15Z |
---
license: agpl-3.0
datasets:
- rafaelpadilla/coco2017
- nateraw/kitti
- Chris1/cityscapes
- dgural/bdd100k
metrics:
- precision
- f1
- recall
pipeline_tag: object-detection
---
# Model Card for Butter
<!-- Provide a quick summary of what the model is/does. -->
Butter is a novel 2D object detection framework designed to enhance hierarchical feature representations for improved detection robustness.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Xiaojian Lin et al.
- **Funded by:** National Natural Science Foundation of China
- **Model type:** Object Detection
- **License:** AGPL-3.0
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/Aveiro-Lin/Butter
- **Paper:** https://www.arxiv.org/pdf/2507.13373
## Uses
Training and inference details, along with the environment configuration, are documented comprehensively in our GitHub repository. The model's performance metrics and training setup are described in detail in the accompanying paper.
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755616149
|
vwzyrraz7l
| 2025-08-19T15:36:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:36:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755616023
|
helmutsukocok
| 2025-08-19T15:33:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:33:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
chainway9/blockassist-bc-untamed_quick_eel_1755615849
|
chainway9
| 2025-08-19T15:33:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:33:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
phospho-app/z1c0-gr00t-pick_and_place-mrulf
|
phospho-app
| 2025-08-19T15:32:40Z | 0 | 0 |
phosphobot
|
[
"phosphobot",
"safetensors",
"gr00t_n1_5",
"gr00t",
"robotics",
"dataset:z1c0/pick_and_place",
"region:us"
] |
robotics
| 2025-08-19T10:16:01Z |
---
datasets: z1c0/pick_and_place
library_name: phosphobot
pipeline_tag: robotics
model_name: gr00t
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful, try it out on your robot!
## Training parameters:
- **Dataset**: [z1c0/pick_and_place](https://huggingface.co/datasets/z1c0/pick_and_place)
- **Wandb run URL**: None
- **Epochs**: 5
- **Batch size**: 8
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
Ba2han/qwen3-a3b-merged-coder-experiment
|
Ba2han
| 2025-08-19T15:27:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"base_model:Qwen/Qwen3-30B-A3B-Thinking-2507",
"base_model:merge:Qwen/Qwen3-30B-A3B-Thinking-2507",
"base_model:unsloth/Qwen3-Coder-30B-A3B-Instruct",
"base_model:merge:unsloth/Qwen3-Coder-30B-A3B-Instruct",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T15:13:02Z |
---
base_model:
- unsloth/Qwen3-Coder-30B-A3B-Instruct
- Qwen/Qwen3-30B-A3B-Thinking-2507
library_name: transformers
tags:
- mergekit
- merge
---
# output_new_merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using merged_model as a base.
### Models Merged
The following models were included in the merge:
* [unsloth/Qwen3-Coder-30B-A3B-Instruct](https://huggingface.co/unsloth/Qwen3-Coder-30B-A3B-Instruct)
* [Qwen/Qwen3-30B-A3B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: "merged_model"
- model: Qwen/Qwen3-30B-A3B-Thinking-2507
parameters:
density: 0.35
weight: 0.35
- model: unsloth/Qwen3-Coder-30B-A3B-Instruct
parameters:
density: 0.25
weight: 0.25
merge_method: dare_ties
base_model: "merged_model"
parameters:
int8_mask: true
dtype: bfloat16
```
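To reproduce a merge like this, the configuration is typically passed to mergekit's CLI. A minimal sketch (the config filename and output path are assumptions; `merged_model` must exist locally under that path):

```bash
# Install mergekit, then run the merge from the YAML config above
pip install mergekit
mergekit-yaml config.yaml ./output_new_merged --cuda
```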
|
Noredine67/mon-redacteur-evaluation-externe-Q8_0-GGUF
|
Noredine67
| 2025-08-19T15:24:17Z | 0 | 0 |
peft
|
[
"peft",
"gguf",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"llama-cpp",
"gguf-my-lora",
"text-generation",
"base_model:Noredine67/mon-redacteur-evaluation-externe",
"base_model:adapter:Noredine67/mon-redacteur-evaluation-externe",
"region:us"
] |
text-generation
| 2025-08-19T15:24:15Z |
---
base_model: Noredine67/mon-redacteur-evaluation-externe
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/mistral-7b-instruct-v0.3-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
- llama-cpp
- gguf-my-lora
---
# Noredine67/mon-redacteur-evaluation-externe-Q8_0-GGUF
This LoRA adapter was converted to GGUF format from [`Noredine67/mon-redacteur-evaluation-externe`](https://huggingface.co/Noredine67/mon-redacteur-evaluation-externe) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/Noredine67/mon-redacteur-evaluation-externe) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora mon-redacteur-evaluation-externe-q8_0.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora mon-redacteur-evaluation-externe-q8_0.gguf (...other args)
```
To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
|
vohuutridung/bartpho-word-vietnews-summarization
|
vohuutridung
| 2025-08-19T15:24:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T15:23:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sekirr22/blockassist-bc-furry_rugged_camel_1755616873
|
sekirr22
| 2025-08-19T15:22:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"furry rugged camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:22:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- furry rugged camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lqpl/blockassist-bc-hairy_insectivorous_antelope_1755616839
|
lqpl
| 2025-08-19T15:22:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy insectivorous antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:21:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
concept-unlearning/Meta-Llama-3-8B_ft_lora_all_novels_v4_ft_npo_gdr_lora_positive_dataset_v1
|
concept-unlearning
| 2025-08-19T15:21:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T15:18:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Muapi/vintage-drawing-ce
|
Muapi
| 2025-08-19T15:18:13Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T15:18:02Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Vintage Drawing - CE

**Base model**: Flux.1 D
**Trained words**: vntgdrwngCE style
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:660535@811004", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
zenqqq/blockassist-bc-restless_reptilian_caterpillar_1755616585
|
zenqqq
| 2025-08-19T15:17:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"restless reptilian caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:17:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- restless reptilian caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755615004
|
lisaozill03
| 2025-08-19T15:15:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:15:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Soughing/gla_xl
|
Soughing
| 2025-08-19T15:15:31Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-01T17:49:23Z |
---
license: apache-2.0
---
|
kodetr/stunting-7B-Qwen
|
kodetr
| 2025-08-19T15:15:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"stunting",
"kesehatan",
"anak",
"conversational",
"id",
"dataset:kodetr/penelitian-fundamental-stunting-qa",
"base_model:Qwen/Qwen1.5-7B-Chat",
"base_model:finetune:Qwen/Qwen1.5-7B-Chat",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T14:59:41Z |
---
library_name: transformers
tags:
- stunting
- kesehatan
- anak
license: apache-2.0
datasets:
- kodetr/penelitian-fundamental-stunting-qa
language:
- id
metrics:
- rouge
- bleu
pipeline_tag: text-generation
base_model:
- Qwen/Qwen1.5-7B-Chat
---
### Model Description
<!-- Provide a longer summary of what this model is. -->
Consultation (Q&A) on stunting in children
- **Developed by:** Tanwir
- **Language:** Indonesian
### Training

### Use with transformers
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import torch
from transformers import pipeline
model_id = "kodetr/stunting-7B-Qwen"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
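# Note: the example system/user prompts below are in Indonesian
# ("Explain the definition of the first 1000 days of life." /
#  "What are the first 1000 days of life?").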
messages = [
{"role": "system", "content": "Jelaskan definisi 1000 hari pertama kehidupan."},
{"role": "user", "content": "Apa itu 1000 hari pertama kehidupan?"},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
|
Muapi/flux-christmas-living-room
|
Muapi
| 2025-08-19T15:14:26Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T15:14:12Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# FLUX Christmas living room

**Base model**: Flux.1 D
**Trained words**: christmas living room
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1011849@1134274", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
mradermacher/cogito-v2-preview-llama-405B-GGUF
|
mradermacher
| 2025-08-19T15:14:16Z | 0 | 0 |
transformers
|
[
"transformers",
"en",
"base_model:deepcogito/cogito-v2-preview-llama-405B",
"base_model:finetune:deepcogito/cogito-v2-preview-llama-405B",
"license:llama3.1",
"endpoints_compatible",
"region:us"
] | null | 2025-08-02T00:32:16Z |
---
base_model: deepcogito/cogito-v2-preview-llama-405B
language:
- en
library_name: transformers
license: llama3.1
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/deepcogito/cogito-v2-preview-llama-405B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#cogito-v2-preview-llama-405B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
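A minimal sketch of recombining a split quant into a single file (shown for the four-part Q2_K; the same pattern applies to the other quants):

```bash
# Join the downloaded parts in order into one GGUF file
cat cogito-v2-preview-llama-405B.Q2_K.gguf.part1of4 \
    cogito-v2-preview-llama-405B.Q2_K.gguf.part2of4 \
    cogito-v2-preview-llama-405B.Q2_K.gguf.part3of4 \
    cogito-v2-preview-llama-405B.Q2_K.gguf.part4of4 \
    > cogito-v2-preview-llama-405B.Q2_K.gguf
```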
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [PART 1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q2_K.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q2_K.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q2_K.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q2_K.gguf.part4of4) | Q2_K | 149.4 | |
| [PART 1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_S.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_S.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_S.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_S.gguf.part4of4) | Q3_K_S | 175.3 | |
| [PART 1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_M.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_M.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_M.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_M.gguf.part4of4) | Q3_K_M | 195.5 | lower quality |
| [P1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_L.gguf.part1of5) [P2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_L.gguf.part2of5) [P3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_L.gguf.part3of5) [P4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_L.gguf.part4of5) [P5](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q3_K_L.gguf.part5of5) | Q3_K_L | 212.9 | |
| [P1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.IQ4_XS.gguf.part1of5) [P2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.IQ4_XS.gguf.part2of5) [P3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.IQ4_XS.gguf.part3of5) [P4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.IQ4_XS.gguf.part4of5) [P5](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.IQ4_XS.gguf.part5of5) | IQ4_XS | 218.7 | |
| [P1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_S.gguf.part1of5) [P2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_S.gguf.part2of5) [P3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_S.gguf.part3of5) [P4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_S.gguf.part4of5) [P5](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_S.gguf.part5of5) | Q4_K_S | 230.6 | fast, recommended |
| [P1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_M.gguf.part1of5) [P2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_M.gguf.part2of5) [P3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_M.gguf.part3of5) [P4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_M.gguf.part4of5) [P5](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q4_K_M.gguf.part5of5) | Q4_K_M | 243.2 | fast, recommended |
| [P1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_S.gguf.part1of6) [P2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_S.gguf.part2of6) [P3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_S.gguf.part3of6) [P4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_S.gguf.part4of6) [P5](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_S.gguf.part5of6) [P6](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_S.gguf.part6of6) | Q5_K_S | 279.4 | |
| [P1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_M.gguf.part1of6) [P2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_M.gguf.part2of6) [P3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_M.gguf.part3of6) [P4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_M.gguf.part4of6) [P5](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_M.gguf.part5of6) [P6](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q5_K_M.gguf.part6of6) | Q5_K_M | 286.7 | |
| [P1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q6_K.gguf.part1of7) [P2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q6_K.gguf.part2of7) [P3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q6_K.gguf.part3of7) [P4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q6_K.gguf.part4of7) [P5](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q6_K.gguf.part5of7) [P6](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q6_K.gguf.part6of7) [P7](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q6_K.gguf.part7of7) | Q6_K | 333.0 | very good quality |
| [P1](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q8_0.gguf.part1of9) [P2](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q8_0.gguf.part2of9) [P3](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q8_0.gguf.part3of9) [P4](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q8_0.gguf.part4of9) [P5](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q8_0.gguf.part5of9) [P6](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q8_0.gguf.part6of9) [P7](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q8_0.gguf.part7of9) [P8](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q8_0.gguf.part8of9) [P9](https://huggingface.co/mradermacher/cogito-v2-preview-llama-405B-GGUF/resolve/main/cogito-v2-preview-llama-405B.Q8_0.gguf.part9of9) | Q8_0 | 431.3 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Muapi/ps1-style-flux
|
Muapi
| 2025-08-19T15:11:21Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T15:11:09Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# PS1 Style Flux

**Base model**: Flux.1 D
**Trained words**: ps1
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:648058@725031", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
2hpsatt/blockassist-bc-huge_deft_eagle_1755616186
|
2hpsatt
| 2025-08-19T15:10:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"huge deft eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:10:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge deft eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/3d_flux-style
|
Muapi
| 2025-08-19T15:07:43Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T15:07:35Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# 3D_Flux Style

**Base model**: Flux.1 D
**Trained words**: 3D01S, kawaii, anime
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:689478@771650", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Kurosawama/gemma-3-1b-it-Inference-align
|
Kurosawama
| 2025-08-19T15:04:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"trl",
"dpo",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T15:04:35Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rbelanec/train_svamp_1755615499
|
rbelanec
| 2025-08-19T15:03:29Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"prefix-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-08-19T14:58:45Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prefix-tuning
- generated_from_trainer
model-index:
- name: train_svamp_1755615499
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_svamp_1755615499
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the svamp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1893
- Num Input Tokens Seen: 705184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 0.7697 | 0.5 | 79 | 0.6681 | 35776 |
| 0.5968 | 1.0 | 158 | 0.5173 | 70672 |
| 0.1124 | 1.5 | 237 | 0.1794 | 105904 |
| 0.132 | 2.0 | 316 | 0.1370 | 141328 |
| 0.1259 | 2.5 | 395 | 0.1006 | 176752 |
| 0.0482 | 3.0 | 474 | 0.0846 | 211808 |
| 0.0378 | 3.5 | 553 | 0.1207 | 247104 |
| 0.0761 | 4.0 | 632 | 0.0935 | 282048 |
| 0.0108 | 4.5 | 711 | 0.1449 | 317248 |
| 0.0208 | 5.0 | 790 | 0.1160 | 352592 |
| 0.0152 | 5.5 | 869 | 0.1450 | 388176 |
| 0.0132 | 6.0 | 948 | 0.1488 | 423184 |
| 0.0151 | 6.5 | 1027 | 0.1474 | 458640 |
| 0.0004 | 7.0 | 1106 | 0.1693 | 493440 |
| 0.0006 | 7.5 | 1185 | 0.1817 | 528768 |
| 0.0001 | 8.0 | 1264 | 0.1838 | 563872 |
| 0.0 | 8.5 | 1343 | 0.1869 | 599232 |
| 0.0002 | 9.0 | 1422 | 0.1876 | 634544 |
| 0.0004 | 9.5 | 1501 | 0.1893 | 670064 |
| 0.0001 | 10.0 | 1580 | 0.1893 | 705184 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
unitova/blockassist-bc-zealous_sneaky_raven_1755614105
|
unitova
| 2025-08-19T15:03:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:03:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/gigachad-flux1.d-sdxl
|
Muapi
| 2025-08-19T15:03:05Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T15:02:54Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Gigachad - Flux1.D & SDXL

**Base model**: Flux.1 D
**Trained words**: Gigachad is a muscular man
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:237712@786259", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
DurstewitzLab/dynamix-3d-v1.0
|
DurstewitzLab
| 2025-08-19T15:02:25Z | 0 | 1 | null |
[
"dynamix",
"time-series-forecasting",
"dataset:williamgilpin/dysts",
"arxiv:2505.13192",
"license:mit",
"region:us"
] |
time-series-forecasting
| 2025-08-19T13:37:35Z |
---
license: mit
pipeline_tag: time-series-forecasting
datasets:
- williamgilpin/dysts
---
# DynaMix-3D v1.0
DynaMix is a foundation model for zero-shot inference of dynamical systems that preserves long-term statistics. Unlike traditional approaches that require retraining for each new system, DynaMix generalizes across dynamical systems by learning universal representations that capture the underlying patterns governing temporal evolution.
- **Accurate Zero-Shot DSR**: DynaMix generalizes across diverse dynamical systems without fine-tuning, accurately capturing attractor geometry and long-term statistics.
- **Context-Flexible Dynamics Modeling**: The multivariate architecture captures dependencies across system dimensions and adapts flexibly to different dimensionalities and context lengths.
- **Efficient and Lightweight**: Designed to be efficient, DynaMix can run on CPU for inference, enabling orders-of-magnitude faster inference than traditional foundation models.
- **Interpretable Dynamics**: Provides insights into the structure of reconstructed systems, revealing similarities across different dynamical systems.
- **General Time Series Forecasting**: Extends beyond DSR to general time series forecasting using adaptable embedding techniques.
The paper can be found here:
[](https://arxiv.org/abs/2505.13192)
## Model Description
DynaMix is based on a sparse mixture of experts (MoE) architecture operating in latent space:
1. **Expert Networks**: Each expert is a specialized dynamical model, given through Almost-Linear Recurrent Neural Networks
2. **Gating Network**: Selects experts based on the provided context and current latent representation of the dynamics
The next state is predicted by aggregating each expert's prediction $z_t^i$ according to the gating network's weights. The model is lightweight (~10K parameters), making it orders of magnitude faster than traditional approaches while maintaining high accuracy in reconstructing complex dynamics.
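Schematically (our notation, not taken verbatim from the paper), the mixture-of-experts update can be written as

$$z_{t+1} = \sum_{i=1}^{K} w_t^i \, z_t^i, \qquad \sum_{i=1}^{K} w_t^i = 1,$$

where $w_t^i$ are the gating weights over the $K$ experts.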
## Usage
To produce predictions, the model takes a **context tensor** as a numpy array of shape `(T_C, S, N)` (where `T_C` is the context length, `S` the number of sequences to process, and `N` the data dimensionality). The output is a **reconstruction tensor** of shape `(T, S, N)` (where `T` is the prediction length).
To load the model in python use:
```python
from src.utilities.utilities import load_model

# Load the pretrained DynaMix checkpoint (safetensors format).
# torch.load cannot read .safetensors files directly, so the
# repository's load_model helper is used instead.
model = load_model("dynamix-3d-v1.0.safetensors")
```
Inference in Python is done via the prediction pipeline:
```python
import torch
from src.model.model_utilities import DynaMix_forecasting_pipeline
# Make prediction
with torch.no_grad(): # No gradient tracking needed for inference
reconstruction = DynaMix_forecasting_pipeline(
model=model,
context=context_tensor,
T=prediction_length,
preprocessing_method="delay_embedding",
standardize=True,
)
```
The forecasting pipeline requires the following inputs:
- *model*: DynaMix foundation model. Model can be loaded using the `load_model` function from `src.utilities.utilities`.
- *context*: Context data in the form of a tensor with shape ($T_C$, $S$, $N$)
- *T*: Forecast horizon, i.e. an integer specifying how many future steps to forecast
Optional arguments:
- *preprocessing_method*: for time series forecasting, choose between `pos_embedding`, `delay_embedding`, `delay_embedding_random`, and `zero_embedding` (default: `zero_embedding`)
- *standardize*: whether to standardize the data, `True`/`False` (default: `False`)
- *initial_x*: optional initial condition for the model as a tensor of shape ($S$, $N$); otherwise the last context value is used (default: `None`)
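Putting the pieces together, a minimal end-to-end sketch is shown below. The random context data is a stand-in for real observations, and the conversion to a `torch` tensor plus the `load_model` path argument are assumptions based on the descriptions above.
```python
import numpy as np
import torch
from src.utilities.utilities import load_model
from src.model.model_utilities import DynaMix_forecasting_pipeline

# Dummy context: 512 observed steps, 3 sequences, 3-dimensional system
context = np.random.randn(512, 3, 3).astype(np.float32)
context_tensor = torch.from_numpy(context)  # shape (T_C, S, N)

model = load_model("dynamix-3d-v1.0.safetensors")

with torch.no_grad():
    reconstruction = DynaMix_forecasting_pipeline(
        model=model,
        context=context_tensor,
        T=1000,                                  # forecast horizon
        preprocessing_method="delay_embedding",
        standardize=True,
    )

print(reconstruction.shape)  # expected: (1000, 3, 3)
```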
## Citation
If you use DynaMix in your research, please cite our paper:
```
@misc{hemmer2025truezeroshotinferencedynamical,
title={True Zero-Shot Inference of Dynamical Systems Preserving Long-Term Statistics},
author={Christoph Jürgen Hemmer and Daniel Durstewitz},
year={2025},
eprint={2505.13192},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2505.13192},
}
```
For complete documentation and code, visit the [GitHub repository](https://github.com/yourusername/zero-shot-DSR).
|
2hpsatt/blockassist-bc-huge_deft_eagle_1755615679
|
2hpsatt
| 2025-08-19T15:02:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"huge deft eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:01:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge deft eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755614041
|
helmutsukocok
| 2025-08-19T15:01:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T15:01:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
climb-mao/spanish-babylm-urop-shivan
|
climb-mao
| 2025-08-19T15:01:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T11:07:13Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: spanish-babylm-urop-shivan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanish-babylm-urop-shivan
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6895
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch_fused with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.4505 | 1.0 | 2267 | 4.0353 |
| 3.8921 | 2.0 | 4534 | 3.7753 |
| 3.7193 | 3.0 | 6801 | 3.6895 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0.dev20250610+cu118
- Datasets 4.0.0
- Tokenizers 0.21.4
|
kiethuynhanh/gemma-3-1b-it-unsloth-bnb-4bit-legal-vn
|
kiethuynhanh
| 2025-08-19T15:01:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T14:57:37Z |
---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** kiethuynhanh
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-1b-it-unsloth-bnb-4bit
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
WenFengg/21_14l1_19_8_
|
WenFengg
| 2025-08-19T14:59:23Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-19T14:42:18Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Bczerw/katex
|
Bczerw
| 2025-08-19T14:58:29Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-11T14:53:55Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Katex
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/Bczerw/katex/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Bczerw/katex', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Bczerw/katex/discussions) to add images that show off what you’ve made with this LoRA.
|
yaelahnal/blockassist-bc-mute_clawed_crab_1755615403
|
yaelahnal
| 2025-08-19T14:57:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T14:57:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
michaelcpage345/blockassist-bc-miniature_deadly_anteater_1755613952
|
michaelcpage345
| 2025-08-19T14:57:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"miniature deadly anteater",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T14:57:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- miniature deadly anteater
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/imax-70mm-cinematic-film-style-f1d-xl-sd1.5
|
Muapi
| 2025-08-19T14:57:36Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T14:57:27Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# IMAX 70mm cinematic film style F1D + XL + SD1.5

**Base model**: Flux.1 D
**Trained words**: cinematic film style, IMAX70mm, filmstrip border
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1249970@1409079", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
fengpeisheng1/mergekit-slerp-zhlbqbl
|
fengpeisheng1
| 2025-08-19T14:57:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:fengpeisheng1/mergekit-slerp-ariyvyf",
"base_model:merge:fengpeisheng1/mergekit-slerp-ariyvyf",
"base_model:maywell/Qwen2-7B-Multilingual-RP",
"base_model:merge:maywell/Qwen2-7B-Multilingual-RP",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T14:51:11Z |
---
base_model:
- maywell/Qwen2-7B-Multilingual-RP
- fengpeisheng1/mergekit-slerp-ariyvyf
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
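As a rough illustration of what SLERP does at the tensor level, here is a minimal sketch of spherical linear interpolation between two flattened weight tensors; this is a conceptual sketch, not mergekit's internal implementation.
```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two parameter tensors."""
    v0f, v1f = v0.flatten(), v1.flatten()
    v0n = v0f / (v0f.norm() + eps)
    v1n = v1f / (v1f.norm() + eps)
    # Angle between the two parameter directions
    omega = torch.acos((v0n @ v1n).clamp(-1 + 1e-7, 1 - 1e-7))
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel: fall back to linear interpolation
        return ((1 - t) * v0f + t * v1f).view_as(v0)
    out = (torch.sin((1 - t) * omega) / so) * v0f + (torch.sin(t * omega) / so) * v1f
    return out.view_as(v0)
```
Roughly speaking, the `t` values in the configuration below vary this interpolation factor across layers and module types (self-attention vs. MLP).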
### Models Merged
The following models were included in the merge:
* [maywell/Qwen2-7B-Multilingual-RP](https://huggingface.co/maywell/Qwen2-7B-Multilingual-RP)
* [fengpeisheng1/mergekit-slerp-ariyvyf](https://huggingface.co/fengpeisheng1/mergekit-slerp-ariyvyf)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: maywell/Qwen2-7B-Multilingual-RP
layer_range: [0,28]
- model: fengpeisheng1/mergekit-slerp-ariyvyf
layer_range: [0,28]
merge_method: slerp
base_model: maywell/Qwen2-7B-Multilingual-RP
parameters:
t:
- filter: self_attn
value: [0, 0.3, 0.5, 0.7, 1]
- filter: mlp
value: [1, 0.7, 0.5, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
tensorblock/mlabonne_gemma-3-12b-it-qat-abliterated-GGUF
|
tensorblock
| 2025-08-19T14:57:30Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"image-text-to-text",
"base_model:mlabonne/gemma-3-12b-it-qat-abliterated",
"base_model:quantized:mlabonne/gemma-3-12b-it-qat-abliterated",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] |
image-text-to-text
| 2025-08-19T12:47:25Z |
---
license: gemma
library_name: transformers
pipeline_tag: image-text-to-text
base_model: mlabonne/gemma-3-12b-it-qat-abliterated
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## mlabonne/gemma-3-12b-it-qat-abliterated - GGUF
<div style="text-align: left; margin: 20px 0;">
<a href="https://discord.com/invite/Ej5NmeHFf2" style="display: inline-block; padding: 10px 20px; background-color: #5865F2; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Join our Discord to learn more about what we're building ↗
</a>
</div>
This repo contains GGUF format model files for [mlabonne/gemma-3-12b-it-qat-abliterated](https://huggingface.co/mlabonne/gemma-3-12b-it-qat-abliterated).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5753](https://github.com/ggml-org/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th colspan="2" style="font-size: 25px;">Forge</th>
</tr>
<tr>
<th colspan="2">
<img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
</th>
</tr>
<tr>
<th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
</tr>
<tr>
<th colspan="2">
<a href="https://github.com/TensorBlock/forge" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">🚀 Try it now! 🚀</a>
</th>
</tr>
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<bos><start_of_turn>user
{system_prompt}
{prompt}<end_of_turn>
<start_of_turn>model
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [gemma-3-12b-it-qat-abliterated-Q2_K.gguf](https://huggingface.co/tensorblock/mlabonne_gemma-3-12b-it-qat-abliterated-GGUF/blob/main/gemma-3-12b-it-qat-abliterated-Q2_K.gguf) | Q2_K | 4.768 GB | smallest, significant quality loss - not recommended for most purposes |
| [gemma-3-12b-it-qat-abliterated-Q3_K_S.gguf](https://huggingface.co/tensorblock/mlabonne_gemma-3-12b-it-qat-abliterated-GGUF/blob/main/gemma-3-12b-it-qat-abliterated-Q3_K_S.gguf) | Q3_K_S | 5.458 GB | very small, high quality loss |
| [gemma-3-12b-it-qat-abliterated-Q3_K_M.gguf](https://huggingface.co/tensorblock/mlabonne_gemma-3-12b-it-qat-abliterated-GGUF/blob/main/gemma-3-12b-it-qat-abliterated-Q3_K_M.gguf) | Q3_K_M | 6.009 GB | very small, high quality loss |
| [gemma-3-12b-it-qat-abliterated-Q3_K_L.gguf](https://huggingface.co/tensorblock/mlabonne_gemma-3-12b-it-qat-abliterated-GGUF/blob/main/gemma-3-12b-it-qat-abliterated-Q3_K_L.gguf) | Q3_K_L | 6.480 GB | small, substantial quality loss |
| [gemma-3-12b-it-qat-abliterated-Q4_0.gguf](https://huggingface.co/tensorblock/mlabonne_gemma-3-12b-it-qat-abliterated-GGUF/blob/main/gemma-3-12b-it-qat-abliterated-Q4_0.gguf) | Q4_0 | 6.887 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [gemma-3-12b-it-qat-abliterated-Q4_K_S.gguf](https://huggingface.co/tensorblock/mlabonne_gemma-3-12b-it-qat-abliterated-GGUF/blob/main/gemma-3-12b-it-qat-abliterated-Q4_K_S.gguf) | Q4_K_S | 6.935 GB | small, greater quality loss |
| [gemma-3-12b-it-qat-abliterated-Q4_K_M.gguf](https://huggingface.co/tensorblock/mlabonne_gemma-3-12b-it-qat-abliterated-GGUF/blob/main/gemma-3-12b-it-qat-abliterated-Q4_K_M.gguf) | Q4_K_M | 7.301 GB | medium, balanced quality - recommended |
| [gemma-3-12b-it-qat-abliterated-Q5_0.gguf](https://huggingface.co/tensorblock/mlabonne_gemma-3-12b-it-qat-abliterated-GGUF/blob/main/gemma-3-12b-it-qat-abliterated-Q5_0.gguf) | Q5_0 | 8.232 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [gemma-3-12b-it-qat-abliterated-Q5_K_S.gguf](https://huggingface.co/tensorblock/mlabonne_gemma-3-12b-it-qat-abliterated-GGUF/blob/main/gemma-3-12b-it-qat-abliterated-Q5_K_S.gguf) | Q5_K_S | 8.232 GB | large, low quality loss - recommended |
| [gemma-3-12b-it-qat-abliterated-Q5_K_M.gguf](https://huggingface.co/tensorblock/mlabonne_gemma-3-12b-it-qat-abliterated-GGUF/blob/main/gemma-3-12b-it-qat-abliterated-Q5_K_M.gguf) | Q5_K_M | 8.445 GB | large, very low quality loss - recommended |
| [gemma-3-12b-it-qat-abliterated-Q6_K.gguf](https://huggingface.co/tensorblock/mlabonne_gemma-3-12b-it-qat-abliterated-GGUF/blob/main/gemma-3-12b-it-qat-abliterated-Q6_K.gguf) | Q6_K | 9.661 GB | very large, extremely low quality loss |
| [gemma-3-12b-it-qat-abliterated-Q8_0.gguf](https://huggingface.co/tensorblock/mlabonne_gemma-3-12b-it-qat-abliterated-GGUF/blob/main/gemma-3-12b-it-qat-abliterated-Q8_0.gguf) | Q8_0 | 12.510 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/mlabonne_gemma-3-12b-it-qat-abliterated-GGUF --include "gemma-3-12b-it-qat-abliterated-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/mlabonne_gemma-3-12b-it-qat-abliterated-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
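Once a file is downloaded, a quick way to try it locally is via `llama-cpp-python`, which wraps llama.cpp. This is a minimal sketch; the chosen quant file and sampling parameters are illustrative.
```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="MY_LOCAL_DIR/gemma-3-12b-it-qat-abliterated-Q4_K_M.gguf")

# Fill in the Gemma prompt template shown above
prompt = (
    "<bos><start_of_turn>user\n"
    "You are a helpful assistant.\n"
    "Explain GGUF quantization in one sentence.<end_of_turn>\n"
    "<start_of_turn>model\n"
)

out = llm(prompt, max_tokens=128, stop=["<end_of_turn>"])
print(out["choices"][0]["text"])
```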
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1755615379
|
Vasya777
| 2025-08-19T14:57:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T14:56:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/tifa-lockhart-ffviir
|
Muapi
| 2025-08-19T14:56:12Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T14:55:53Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Tifa Lockhart (FFVIIR)

**Base model**: Flux.1 D
**Trained words**: TifaLockhart, croptop, skirt, suspenders, fingerless gloves
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:661363@740105", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
pasithbas159/Typhoon2_HII_satellite_v2
|
pasithbas159
| 2025-08-19T14:55:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_vl",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T17:59:19Z |
---
base_model: pasithbas/typhoon2-qwen2vl-7b-vision-instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** pasithbas159
- **License:** apache-2.0
- **Finetuned from model:** pasithbas/typhoon2-qwen2vl-7b-vision-instruct
This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
matheoqtb/EuroBertV2180M_pairs
|
matheoqtb
| 2025-08-19T14:55:16Z | 0 | 0 | null |
[
"safetensors",
"eurobert",
"custom_code",
"region:us"
] | null | 2025-08-19T14:55:03Z |
# Exported checkpoint: 180M_pairs
This repository contains a checkpoint extracted from `matheoqtb/euroBertV2_test2` (subfolder `180M_pairs`) together with the required code files taken from `EuroBERT/EuroBERT-610m`.
Loading:
```python
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained('<THIS_REPO>', trust_remote_code=True)
mdl = AutoModel.from_pretrained('<THIS_REPO>', trust_remote_code=True)
```
Task: feature-extraction (embeddings)
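For the stated feature-extraction task, a common way to turn token states into a single sentence embedding is mean pooling over the attention mask. A minimal sketch, reusing `tok` and `mdl` from above (the pooling choice is an assumption, not something this card specifies):
```python
import torch

texts = ["Un exemple de phrase.", "Another example sentence."]
batch = tok(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = mdl(**batch).last_hidden_state           # (batch, seq_len, dim)

mask = batch["attention_mask"].unsqueeze(-1)          # (batch, seq_len, 1)
embeddings = (hidden * mask).sum(1) / mask.sum(1)     # mean over real tokens
print(embeddings.shape)
```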
|