Dataset schema:

| Column | Type | Observed range / values |
|:-------|:-----|:------------------------|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-14 18:29:43 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 557 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-14 18:28:25 |
| card | string | length 11 to 1.01M |

modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
mradermacher/MathTutor-7B-MDPO_v0.1-GGUF
|
mradermacher
| 2025-08-20T14:53:22Z | 68 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Sandesh2027/MathTutor-7B-MDPO_v0.1",
"base_model:quantized:Sandesh2027/MathTutor-7B-MDPO_v0.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-07-10T01:32:24Z |
---
base_model: Sandesh2027/MathTutor-7B-MDPO_v0.1
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Sandesh2027/MathTutor-7B-MDPO_v0.1
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#MathTutor-7B-MDPO_v0.1-GGUF).***
weighted/imatrix quants do not seem to be available (from me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
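As a quick way to run these files from Python, here is a minimal sketch using llama-cpp-python; the runtime choice and file name are assumptions, and any llama.cpp-compatible tool works just as well:

```python
# Minimal sketch, assuming llama-cpp-python; pick any quant file from the table below.
from llama_cpp import Llama

llm = Llama(
    model_path="MathTutor-7B-MDPO_v0.1.Q4_K_M.gguf",  # downloaded from this repo
    n_ctx=4096,  # context length; adjust for your hardware
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Solve 2x + 3 = 11 step by step."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```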
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MathTutor-7B-MDPO_v0.1-GGUF/resolve/main/MathTutor-7B-MDPO_v0.1.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/MathTutor-7B-MDPO_v0.1-GGUF/resolve/main/MathTutor-7B-MDPO_v0.1.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/MathTutor-7B-MDPO_v0.1-GGUF/resolve/main/MathTutor-7B-MDPO_v0.1.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MathTutor-7B-MDPO_v0.1-GGUF/resolve/main/MathTutor-7B-MDPO_v0.1.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/MathTutor-7B-MDPO_v0.1-GGUF/resolve/main/MathTutor-7B-MDPO_v0.1.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/MathTutor-7B-MDPO_v0.1-GGUF/resolve/main/MathTutor-7B-MDPO_v0.1.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MathTutor-7B-MDPO_v0.1-GGUF/resolve/main/MathTutor-7B-MDPO_v0.1.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MathTutor-7B-MDPO_v0.1-GGUF/resolve/main/MathTutor-7B-MDPO_v0.1.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/MathTutor-7B-MDPO_v0.1-GGUF/resolve/main/MathTutor-7B-MDPO_v0.1.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/MathTutor-7B-MDPO_v0.1-GGUF/resolve/main/MathTutor-7B-MDPO_v0.1.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MathTutor-7B-MDPO_v0.1-GGUF/resolve/main/MathTutor-7B-MDPO_v0.1.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MathTutor-7B-MDPO_v0.1-GGUF/resolve/main/MathTutor-7B-MDPO_v0.1.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
thanobidex/blockassist-bc-colorful_shiny_hare_1755699960
|
thanobidex
| 2025-08-20T14:52:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful shiny hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:52:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
urbainze/llama-3-8b-Instruct-fr
|
urbainze
| 2025-08-20T14:51:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T16:48:51Z |
---
base_model: unsloth/llama-3-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** urbainze
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755699680
|
mang3dd
| 2025-08-20T14:47:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:47:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
abishekcodes/bert-new-ner
|
abishekcodes
| 2025-08-20T14:47:04Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-08-19T19:42:53Z |
---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: bert-new-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-new-ner
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0246
- Precision: 0.9645
- Recall: 0.9682
- F1: 0.9664
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.0227 | 1.0 | 1002 | 0.0263 | 0.9540 | 0.9614 | 0.9577 |
| 0.0125 | 2.0 | 2004 | 0.0237 | 0.9554 | 0.9720 | 0.9637 |
| 0.0064 | 3.0 | 3006 | 0.0246 | 0.9645 | 0.9682 | 0.9664 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.4
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1755701071
|
liukevin666
| 2025-08-20T14:46:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:45:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755699518
|
indoempatnol
| 2025-08-20T14:45:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:45:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lilTAT/blockassist-bc-gentle_rugged_hare_1755701024
|
lilTAT
| 2025-08-20T14:44:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:44:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dileepsathyan/my_awesome_qa_model
|
dileepsathyan
| 2025-08-20T14:42:14Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2025-08-20T14:31:08Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7156
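For reference, a minimal inference sketch with the Transformers question-answering pipeline; the question and context below are illustrative, not from the training data:

```python
from transformers import pipeline

# Minimal sketch: extractive QA with an illustrative question/context pair.
qa = pipeline("question-answering", model="dileepsathyan/my_awesome_qa_model")
result = qa(
    question="What is DistilBERT distilled from?",
    context="DistilBERT is a smaller, faster transformer distilled from BERT.",
)
print(result["answer"], result["score"])
```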
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.3540 |
| 2.6485 | 2.0 | 500 | 1.7377 |
| 2.6485 | 3.0 | 750 | 1.7156 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755699230
|
kojeklollipop
| 2025-08-20T14:40:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:40:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755699160
|
vwzyrraz7l
| 2025-08-20T14:40:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:40:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aputze/Whispr
|
aputze
| 2025-08-20T14:40:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T14:15:11Z |
---
title: Whispr
emoji: 🎤
colorFrom: blue
colorTo: indigo
sdk: gradio
sdk_version: 5.43.1
app_file: app.py
pinned: false
---
# Whispr - Audio Transcription
Audio transcription using OpenAI's Whisper model through faster-whisper.
## Features
- Audio file upload and microphone recording
- Multiple model sizes (tiny to large)
- Optimized for Hebrew speech
- Real-time transcription with progress indicators
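A minimal transcription sketch with faster-whisper; the model size, compute type, and file name are illustrative choices:

```python
from faster_whisper import WhisperModel

# Minimal sketch: transcribe a local file; "he" follows the Hebrew note above.
model = WhisperModel("large-v3", compute_type="int8")
segments, info = model.transcribe("audio.mp3", language="he")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```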
|
steinunnfridriks/mBERTBiasIce
|
steinunnfridriks
| 2025-08-20T14:38:01Z | 6 | 0 | null |
[
"safetensors",
"bert",
"bias-detection",
"icelandic",
"ner",
"socially-responsible-ai",
"prejudice-detection",
"huggingface",
"transformer",
"is",
"dataset:IceBiasNER",
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-08-15T18:02:40Z |
---
license: bigscience-openrail-m
language:
- is
tags:
- bias-detection
- icelandic
- ner
- socially-responsible-ai
- prejudice-detection
- huggingface
- transformer
datasets:
- IceBiasNER
widget:
- text: "Þetta helvítis útlenska pakk..."
---
# mBERT Bias-Aware NER (Icelandic)
**Trigger warning:** This model detects biased, offensive, or harmful language. Examples in this card may contain such language, included solely for research purposes.
## Model Description
This is a fine-tuned version of **mBERT** for Named Entity Recognition (NER) to identify biased and potentially harmful expressions in Icelandic text.
It was trained on automatically annotated sentences covering multiple social bias categories. The model covers the following classes:
- **B-ADDICTION, I-ADDICTION**
- **B-DISABILITY, I-DISABILITY**
- **B-ORIGIN, I-ORIGIN**
- **B-GENERAL, I-GENERAL**
- **B-LGBTQIA, I-LGBTQIA**
- **B-LOOKS, I-LOOKS**
- **B-PERSONAL, I-PERSONAL**
- **B-PROFANITY, I-PROFANITY**
- **B-RELIGION, I-RELIGION**
- **B-SEXUAL, I-SEXUAL**
- **B-SOCIAL_STATUS, I-SOCIAL_STATUS**
- **B-STUPIDITY, I-STUPIDITY**
- **B-VULGAR, I-VULGAR**
- **B-WOMEN, I-WOMEN**
The model flags words or phrases belonging to these categories, producing BIO tags (e.g., `B-WOMEN`, `I-WOMEN`, `O`).
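A minimal tagging sketch with the Transformers token-classification pipeline; the aggregation strategy is an illustrative choice that merges B-/I- pieces into whole spans:

```python
from transformers import pipeline

# Minimal sketch: tag an Icelandic sentence (the example from the card's widget).
tagger = pipeline(
    "token-classification",
    model="steinunnfridriks/mBERTBiasIce",
    aggregation_strategy="simple",  # merge B-/I- tags into whole spans
)
for span in tagger("Þetta helvítis útlenska pakk..."):
    print(span["entity_group"], span["word"], round(span["score"], 3))
```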
## Intended Uses & Limitations
### Intended Use
- Research on bias detection in low-resource languages
- Educational tools for raising awareness of bias in language
- Civic engagement platforms encouraging inclusive language
### Limitations
- Vocabulary-based weak supervision means some bias forms may be missed
- No sentence-level or discourse-level interpretation
- Mislabeling possible in critical, reclaimed, or journalistic contexts
⚠ **Not intended for punitive monitoring or censorship.** Outputs are prompts for reflection, not judgments.
## Performance
**Evaluation datasets:**
- **Test set**: 15,383 automatically annotated sentences (silver data)
- **Gold set**: 190 manually reviewed sentences
**Macro F1 performance highlights:**
- Test set: 0.972 (CI: 0.972-0.973)
- Gold set: 0.846 (CI: 0.845-0.848)
## Relevant Information
- **Base model**: [mBERT](https://huggingface.co/google-bert/bert-base-multilingual-cased)
- **Data source**: [IceBiasNER](https://huggingface.co/datasets/steinunnfridriks/IceBiasNER)
## Ethical Considerations
This model is released under the **[BigScience OpenRAIL-M License](https://www.licenses.ai/ai-licenses)**, which allows free use with responsible-use restrictions.
Prohibited uses include:
- Harassment or discrimination
- Generating disinformation or hateful content
- Surveillance targeting individuals or groups
## Citation
Will be updated.
|
loyal-misc/myst
|
loyal-misc
| 2025-08-20T14:36:53Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:LyliaEngine/Pony_Diffusion_V6_XL",
"base_model:adapter:LyliaEngine/Pony_Diffusion_V6_XL",
"license:unlicense",
"region:us"
] |
text-to-image
| 2025-08-20T12:10:35Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/myst.png
text: '-'
base_model: LyliaEngine/Pony_Diffusion_V6_XL
instance_prompt: myst, scalie, female
license: unlicense
---
# myst
<Gallery />
## Trigger words
You should use `myst`, `scalie`, and `female` to trigger the image generation.
## Download model
[Download](/loyal-misc/myst/tree/main) them in the Files & versions tab.
|
Henit007/Vivekanandao1_finetuned
|
Henit007
| 2025-08-20T14:36:03Z | 182 | 0 | null |
[
"tensorboard",
"llama",
"region:us"
] | null | 2025-08-08T12:28:28Z |
# 🧠 Fine-tuned LLaMA Model using QLoRA & LoRA (Supervised Fine-Tuning)
This model is a fine-tuned version of the `model_name` base model using **QLoRA (Quantized Low-Rank Adaptation)** for efficient and memory-friendly training. Fine-tuning was performed using the Hugging Face `trl` library’s `SFTTrainer` and `peft` (LoRA).
---
## 📌 Model Overview
- **Base Model**: `model_name`
- **Fine-tuning Method**: QLoRA + LoRA (PEFT)
- **Task**: Causal Language Modeling
- **Quantization**: 4-bit (bitsandbytes)
- **Frameworks**: Transformers, PEFT, TRL
---
## 🧠 Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Henit007/Vivekanandao1_finetuned")
model = AutoModelForCausalLM.from_pretrained("Henit007/Vivekanandao1_finetuned", device_map="auto")
input_text = "Explain climate change in simple terms."
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
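Since the overview lists 4-bit (bitsandbytes) quantization, here is a hedged variant that loads the weights in 4-bit; the exact `BitsAndBytesConfig` settings below are assumptions, not the training configuration:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Minimal sketch: 4-bit loading; these quantization settings are assumptions.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "Henit007/Vivekanandao1_finetuned",
    quantization_config=bnb_config,
    device_map="auto",
)
```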
|
joanna302/Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_8e-05
|
joanna302
| 2025-08-20T14:36:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"unsloth",
"sft",
"conversational",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T09:22:02Z |
---
base_model: unsloth/Qwen3-8B-Base
library_name: transformers
model_name: Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_8e-05
tags:
- generated_from_trainer
- trl
- unsloth
- sft
licence: license
---
# Model Card for Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_8e-05
This model is a fine-tuned version of [unsloth/Qwen3-8B-Base](https://huggingface.co/unsloth/Qwen3-8B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_8e-05", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_8e-05/runs/daig9xq6)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
yaelahnal/blockassist-bc-mute_clawed_crab_1755700436
|
yaelahnal
| 2025-08-20T14:35:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:34:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Qwen3_Medical_GRPO-GGUF
|
mradermacher
| 2025-08-20T14:35:05Z | 352 | 1 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"medical",
"en",
"zh",
"dataset:FreedomIntelligence/medical-o1-reasoning-SFT",
"dataset:lastmass/medical-o1-reasoning-SFT-keywords",
"base_model:lastmass/Qwen3_Medical_GRPO",
"base_model:quantized:lastmass/Qwen3_Medical_GRPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-07-24T15:58:10Z |
---
base_model: lastmass/Qwen3_Medical_GRPO
datasets:
- FreedomIntelligence/medical-o1-reasoning-SFT
- lastmass/medical-o1-reasoning-SFT-keywords
language:
- en
- zh
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- medical
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/lastmass/Qwen3_Medical_GRPO
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3_Medical_GRPO-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
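To fetch a single quant without cloning the whole repository, a minimal sketch with huggingface_hub; the chosen file is illustrative:

```python
from huggingface_hub import hf_hub_download

# Minimal sketch: download one quant into the local cache.
path = hf_hub_download(
    repo_id="mradermacher/Qwen3_Medical_GRPO-GGUF",
    filename="Qwen3_Medical_GRPO.Q4_K_M.gguf",  # any file from the table below
)
print(path)  # ready to pass to a GGUF runtime
```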
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-GGUF/resolve/main/Qwen3_Medical_GRPO.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-GGUF/resolve/main/Qwen3_Medical_GRPO.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-GGUF/resolve/main/Qwen3_Medical_GRPO.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-GGUF/resolve/main/Qwen3_Medical_GRPO.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-GGUF/resolve/main/Qwen3_Medical_GRPO.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-GGUF/resolve/main/Qwen3_Medical_GRPO.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-GGUF/resolve/main/Qwen3_Medical_GRPO.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-GGUF/resolve/main/Qwen3_Medical_GRPO.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-GGUF/resolve/main/Qwen3_Medical_GRPO.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-GGUF/resolve/main/Qwen3_Medical_GRPO.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-GGUF/resolve/main/Qwen3_Medical_GRPO.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-GGUF/resolve/main/Qwen3_Medical_GRPO.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
joanna302/Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_0.0002
|
joanna302
| 2025-08-20T14:34:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"unsloth",
"trl",
"conversational",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T09:24:38Z |
---
base_model: unsloth/Qwen3-8B-Base
library_name: transformers
model_name: Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_0.0002
tags:
- generated_from_trainer
- sft
- unsloth
- trl
licence: license
---
# Model Card for Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_0.0002
This model is a fine-tuned version of [unsloth/Qwen3-8B-Base](https://huggingface.co/unsloth/Qwen3-8B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_0.0002", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_0.0002/runs/l27wsth5)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
tusharmagar/FLUX.1-Krea-dev-LoRA-Solarpunk
|
tusharmagar
| 2025-08-20T14:33:55Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"en",
"base_model:black-forest-labs/FLUX.1-Krea-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Krea-dev",
"license:mit",
"region:us"
] |
text-to-image
| 2025-08-20T12:09:07Z |
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
widget:
- text: >-
Solarpunk London with hexagonal solar panels and white architecture while
keeping traditional Parisian architecture with greenery and flowers and
fruiting trees and the Big Ben (unchanged) and a double decker bus on the
road [SLRPNK]
output:
url: images/London_Solarpunk.jpg
- text: >-
Aerial view of Solarpunk San Francisco with futuristic townhouses
architecture and solar sails while keeping the Golden Gate Bridge
(unchanged) a futuristic Sutro tower, flowers, and fruiting trees flowing
through hilly neighbourhoods, with a road cable car gliding along the
streets [SLRPNK]
output:
url: images/SanFrancisco_Solarpunk.jpg
- text: >-
Solarpunk Masai Mara tribe with solar panel dome greenhouses and separate
white mud houses, with flowers and fruiting trees, masai people, with a few
giraffes and elephants [SLRPNK]
output:
url: images/MasaiMara_Solarpunk.jpg
- text: >-
Solarpunk Rio de Janeiro with tropical solar sails shaped like leaves lining
the beaches, while keeping Christ the Redeemer (unchanged), flowers and
fruiting trees cascading through favelas, and futuristic white towers rising
along Copacabana [SLRPNK]
output:
url: images/Rio_Solarpunk.jpg
- text: >-
Solarpunk Santorini with blue-domed houses fitted with crystal roofs, while
keeping the traditional cliffside churches (unchanged), grapevines and
fruiting olive trees cascading across terraces, and massive futuristic on
water wind energy sails [SLRPNK]
output:
url: images/Santorini_Solarpunk.jpg
- text: >-
Solarpunk Varanasi with floating solar lotus platforms spread across the
Ganges River, while keeping the ghats and ancient temples (unchanged),
greenery, flowers, and fruiting trees cascading down the steps, with
bioluminescent lamps powered by algae lining the riverbanks, and futuristic
white riverboats gliding silently past ceremonies on the water [SLRPNK]
output:
url: images/Varanasi_Solarpunk.jpg
base_model: black-forest-labs/FLUX.1-Krea-dev
instance_prompt: '[SLRPNK]'
license: mit
pipeline_tag: text-to-image
language:
- en
---
# FLUX.1-Krea [dev] LoRA: Solarpunk
<Gallery />
## Model description
This repository contains the LoRA adapter for FLUX.1-Krea [dev], fine-tuned on curated solarpunk-style images using https://fal.ai/models/fal-ai/flux-krea-trainer.
This LoRA excels at creating solarpunk reimaginings of real-world cities in a dreamy style! I personally feel it performs better than Midjourney and any other text-to-image model 👀
The dataset was assembled for the Solarpunk Art Contest 2025 by Yishan, featuring a wide range of environments, architecture, and character scenes inspired by solarpunk aesthetics.
### Prompt Template
You should use the following template (defined when annotating the images with captions) to trigger solarpunk image generation:
"Solarpunk [city or setting] with [distinctive future-tech feature], [architecture or landmark (unchanged if historic)], [greenery and fruiting trees/flowers], [people or activity], [lighting or atmosphere], [additional details]"
## Trigger words
You should use `[SLRPNK]` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/tusharmagar/flux1-krea-dev-lora-solarpunk/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-krea-trainer](https://fal.ai/models/fal-ai/flux-krea-trainer).
|
lilTAT/blockassist-bc-gentle_rugged_hare_1755700300
|
lilTAT
| 2025-08-20T14:32:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:32:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
M8eda/M8eda-Bot
|
M8eda
| 2025-08-20T14:32:32Z | 0 | 0 |
transformers
|
[
"transformers",
"text-generation",
"en",
"ar",
"base_model:Qwen/Qwen3-Coder-480B-A35B-Instruct",
"base_model:finetune:Qwen/Qwen3-Coder-480B-A35B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T14:22:36Z |
---
license: apache-2.0
language:
- en
- ar
base_model:
- Qwen/Qwen3-Coder-480B-A35B-Instruct
pipeline_tag: text-generation
library_name: transformers
---
|
joanna302/Qwen3-8B-Base_pag_alpaca_1_part_SFT_0.0002
|
joanna302
| 2025-08-20T14:32:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"unsloth",
"sft",
"conversational",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T12:54:54Z |
---
base_model: unsloth/Qwen3-8B-Base
library_name: transformers
model_name: Qwen3-8B-Base_pag_alpaca_1_part_SFT_0.0002
tags:
- generated_from_trainer
- trl
- unsloth
- sft
licence: license
---
# Model Card for Qwen3-8B-Base_pag_alpaca_1_part_SFT_0.0002
This model is a fine-tuned version of [unsloth/Qwen3-8B-Base](https://huggingface.co/unsloth/Qwen3-8B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_pag_alpaca_1_part_SFT_0.0002", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_pag_alpaca_1_part_SFT_0.0002/runs/z50mdz7k)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mubashiross/llava
|
mubashiross
| 2025-08-20T14:30:21Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-08-12T10:41:22Z |
---
license: apache-2.0
---
|
youuotty/blockassist-bc-furry_reptilian_flamingo_1755700198
|
youuotty
| 2025-08-20T14:30:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"furry reptilian flamingo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:29:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- furry reptilian flamingo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
roeker/blockassist-bc-quick_wiry_owl_1755700093
|
roeker
| 2025-08-20T14:29:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:29:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sai9390/age_predictor2
|
sai9390
| 2025-08-20T14:29:11Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-20T14:29:11Z |
---
license: apache-2.0
---
|
tampocolapavada/flux-lora-agustinln
|
tampocolapavada
| 2025-08-20T14:29:10Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-08-20T14:19:30Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/papa.png
text: '-'
- output:
url: images/SNL.png
text: '-'
- output:
url: images/balcon1.png
text: '-'
- output:
url: images/ciclismo.png
text: '-'
- output:
url: images/glaciar.png
text: '-'
- output:
url: images/image.webp
text: '-'
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: AALN
license: apache-2.0
pipeline_tag: text-to-image
---
# Flux Lora AgustinLN
<Gallery />
## Model description
A Flux LoRA trained on pictures of me.
## Trigger words
You should use `AALN` to trigger the image generation.
## Download model
[Download](/tampocolapavada/flux-lora-agustinln/tree/main) them in the Files & versions tab.
|
pobiiiiiii/blockassist-bc-ravenous_yapping_ferret_1755700099
|
pobiiiiiii
| 2025-08-20T14:29:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"ravenous yapping ferret",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:28:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- ravenous yapping ferret
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sdagsadgd/blockassist-bc-sedate_squeaky_salamander_1755696899
|
sdagsadgd
| 2025-08-20T14:29:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sedate squeaky salamander",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:28:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sedate squeaky salamander
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
razor534/blockassist-bc-lazy_extinct_termite_1755700056
|
razor534
| 2025-08-20T14:28:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lazy extinct termite",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:28:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lazy extinct termite
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lilTAT/blockassist-bc-gentle_rugged_hare_1755700007
|
lilTAT
| 2025-08-20T14:27:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:27:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Voxtral-Mini-3B-2507-i1-GGUF
|
mradermacher
| 2025-08-20T14:27:18Z | 1,053 | 2 |
transformers
|
[
"transformers",
"gguf",
"vllm",
"en",
"fr",
"de",
"es",
"it",
"pt",
"nl",
"hi",
"base_model:mistralai/Voxtral-Mini-3B-2507",
"base_model:quantized:mistralai/Voxtral-Mini-3B-2507",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-07-29T11:29:17Z |
---
base_model: mistralai/Voxtral-Mini-3B-2507
language:
- en
- fr
- de
- es
- it
- pt
- nl
- hi
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- vllm
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/mistralai/Voxtral-Mini-3B-2507
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Voxtral-Mini-3B-2507-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Voxtral-Mini-3B-2507-GGUF
**This is a multimodal model: mmproj files (if any) will be in the [static repository](https://huggingface.co/mradermacher/Voxtral-Mini-3B-2507-GGUF).**
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Mini-3B-2507-i1-GGUF/resolve/main/Voxtral-Mini-3B-2507.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Mini-3B-2507-i1-GGUF/resolve/main/Voxtral-Mini-3B-2507.i1-IQ1_S.gguf) | i1-IQ1_S | 1.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Mini-3B-2507-i1-GGUF/resolve/main/Voxtral-Mini-3B-2507.i1-IQ1_M.gguf) | i1-IQ1_M | 1.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Mini-3B-2507-i1-GGUF/resolve/main/Voxtral-Mini-3B-2507.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Mini-3B-2507-i1-GGUF/resolve/main/Voxtral-Mini-3B-2507.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Mini-3B-2507-i1-GGUF/resolve/main/Voxtral-Mini-3B-2507.i1-IQ2_S.gguf) | i1-IQ2_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Mini-3B-2507-i1-GGUF/resolve/main/Voxtral-Mini-3B-2507.i1-IQ2_M.gguf) | i1-IQ2_M | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Mini-3B-2507-i1-GGUF/resolve/main/Voxtral-Mini-3B-2507.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Mini-3B-2507-i1-GGUF/resolve/main/Voxtral-Mini-3B-2507.i1-Q2_K.gguf) | i1-Q2_K | 1.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Mini-3B-2507-i1-GGUF/resolve/main/Voxtral-Mini-3B-2507.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Mini-3B-2507-i1-GGUF/resolve/main/Voxtral-Mini-3B-2507.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Mini-3B-2507-i1-GGUF/resolve/main/Voxtral-Mini-3B-2507.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Mini-3B-2507-i1-GGUF/resolve/main/Voxtral-Mini-3B-2507.i1-IQ3_S.gguf) | i1-IQ3_S | 2.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Mini-3B-2507-i1-GGUF/resolve/main/Voxtral-Mini-3B-2507.i1-IQ3_M.gguf) | i1-IQ3_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Mini-3B-2507-i1-GGUF/resolve/main/Voxtral-Mini-3B-2507.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Mini-3B-2507-i1-GGUF/resolve/main/Voxtral-Mini-3B-2507.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Mini-3B-2507-i1-GGUF/resolve/main/Voxtral-Mini-3B-2507.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Mini-3B-2507-i1-GGUF/resolve/main/Voxtral-Mini-3B-2507.i1-Q4_0.gguf) | i1-Q4_0 | 2.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Mini-3B-2507-i1-GGUF/resolve/main/Voxtral-Mini-3B-2507.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Mini-3B-2507-i1-GGUF/resolve/main/Voxtral-Mini-3B-2507.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Mini-3B-2507-i1-GGUF/resolve/main/Voxtral-Mini-3B-2507.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Mini-3B-2507-i1-GGUF/resolve/main/Voxtral-Mini-3B-2507.i1-Q4_1.gguf) | i1-Q4_1 | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Mini-3B-2507-i1-GGUF/resolve/main/Voxtral-Mini-3B-2507.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Mini-3B-2507-i1-GGUF/resolve/main/Voxtral-Mini-3B-2507.i1-Q5_K_M.gguf) | i1-Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Voxtral-Mini-3B-2507-i1-GGUF/resolve/main/Voxtral-Mini-3B-2507.i1-Q6_K.gguf) | i1-Q6_K | 3.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755698286
|
katanyasekolah
| 2025-08-20T14:27:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:27:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
joanna302/Qwen3-8B-Base_pag_alpaca_0.66_part_SFT_0.0002
|
joanna302
| 2025-08-20T14:26:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"unsloth",
"sft",
"trl",
"conversational",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T11:53:18Z |
---
base_model: unsloth/Qwen3-8B-Base
library_name: transformers
model_name: Qwen3-8B-Base_pag_alpaca_0.66_part_SFT_0.0002
tags:
- generated_from_trainer
- unsloth
- sft
- trl
licence: license
---
# Model Card for Qwen3-8B-Base_pag_alpaca_0.66_part_SFT_0.0002
This model is a fine-tuned version of [unsloth/Qwen3-8B-Base](https://huggingface.co/unsloth/Qwen3-8B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_pag_alpaca_0.66_part_SFT_0.0002", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_pag_alpaca_0.66_part_SFT_0.0002/runs/59bgfy7v)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755698346
|
lisaozill03
| 2025-08-20T14:25:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:25:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AngelinaZanardi/nb-bert-base-edu-scorer-lr3e4-bs32-swe
|
AngelinaZanardi
| 2025-08-20T14:25:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:NbAiLab/nb-bert-base",
"base_model:finetune:NbAiLab/nb-bert-base",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T13:23:42Z |
---
library_name: transformers
license: cc-by-4.0
base_model: NbAiLab/nb-bert-base
tags:
- generated_from_trainer
model-index:
- name: nb-bert-base-edu-scorer-lr3e4-bs32-swe
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nb-bert-base-edu-scorer-lr3e4-bs32-swe
This model is a fine-tuned version of [NbAiLab/nb-bert-base](https://huggingface.co/NbAiLab/nb-bert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7996
- Mse: 0.7996
- Mae: 0.6982
- Rmse: 0.8942
- R2: 0.5844
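The MSE/MAE/R2 metrics suggest a single-output regression head; under that assumption, a minimal scoring sketch (the Swedish example sentence is illustrative):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Minimal sketch, assuming a single-logit regression head.
name = "AngelinaZanardi/nb-bert-base-edu-scorer-lr3e4-bs32-swe"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("Fotosyntesen omvandlar ljusenergi till kemisk energi.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # educational-quality score
print(score)
```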
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | Rmse | R2 |
|:-------------:|:-------:|:-----:|:---------------:|:------:|:------:|:------:|:-------:|
| No log | 0 | 0 | 6.0700 | 6.0700 | 2.0111 | 2.4637 | -2.0534 |
| 1.1272 | 0.3397 | 1000 | 1.0319 | 1.0319 | 0.7925 | 1.0158 | 0.4809 |
| 1.0837 | 0.6793 | 2000 | 1.0182 | 1.0182 | 0.7850 | 1.0091 | 0.4878 |
| 1.0446 | 1.0190 | 3000 | 0.9967 | 0.9967 | 0.7683 | 0.9983 | 0.4986 |
| 1.0863 | 1.3587 | 4000 | 0.9580 | 0.9580 | 0.7534 | 0.9788 | 0.5181 |
| 1.0601 | 1.6984 | 5000 | 1.0061 | 1.0061 | 0.7796 | 1.0030 | 0.4939 |
| 0.9957 | 2.0380 | 6000 | 1.3005 | 1.3005 | 0.8945 | 1.1404 | 0.3458 |
| 1.0104 | 2.3777 | 7000 | 0.9569 | 0.9569 | 0.7483 | 0.9782 | 0.5187 |
| 1.04 | 2.7174 | 8000 | 0.9457 | 0.9457 | 0.7648 | 0.9724 | 0.5243 |
| 1.0445 | 3.0571 | 9000 | 0.9641 | 0.9641 | 0.7445 | 0.9819 | 0.5150 |
| 0.9931 | 3.3967 | 10000 | 0.9549 | 0.9549 | 0.7430 | 0.9772 | 0.5197 |
| 1.0134 | 3.7364 | 11000 | 0.9791 | 0.9791 | 0.7549 | 0.9895 | 0.5075 |
| 1.0366 | 4.0761 | 12000 | 1.0248 | 1.0248 | 0.7673 | 1.0123 | 0.4845 |
| 1.0106 | 4.4158 | 13000 | 0.9321 | 0.9321 | 0.7378 | 0.9654 | 0.5311 |
| 0.9409 | 4.7554 | 14000 | 0.9553 | 0.9553 | 0.7420 | 0.9774 | 0.5194 |
| 0.925 | 5.0951 | 15000 | 1.1885 | 1.1885 | 0.8538 | 1.0902 | 0.4021 |
| 0.961 | 5.4348 | 16000 | 0.9201 | 0.9201 | 0.7341 | 0.9592 | 0.5372 |
| 1.0096 | 5.7745 | 17000 | 0.9192 | 0.9192 | 0.7448 | 0.9587 | 0.5376 |
| 0.9696 | 6.1141 | 18000 | 0.9543 | 0.9543 | 0.7445 | 0.9769 | 0.5199 |
| 0.9737 | 6.4538 | 19000 | 0.9287 | 0.9287 | 0.7281 | 0.9637 | 0.5328 |
| 0.9725 | 6.7935 | 20000 | 0.9589 | 0.9589 | 0.7557 | 0.9792 | 0.5176 |
| 0.9683 | 7.1332 | 21000 | 0.9079 | 0.9079 | 0.7354 | 0.9528 | 0.5433 |
| 0.9606 | 7.4728 | 22000 | 0.9885 | 0.9885 | 0.7481 | 0.9943 | 0.5027 |
| 0.9846 | 7.8125 | 23000 | 1.0081 | 1.0081 | 0.7895 | 1.0041 | 0.4929 |
| 0.9671 | 8.1522 | 24000 | 0.9174 | 0.9174 | 0.7251 | 0.9578 | 0.5385 |
| 0.9679 | 8.4918 | 25000 | 0.9212 | 0.9212 | 0.7447 | 0.9598 | 0.5366 |
| 0.9503 | 8.8315 | 26000 | 0.9418 | 0.9418 | 0.7343 | 0.9705 | 0.5262 |
| 0.9858 | 9.1712 | 27000 | 0.9186 | 0.9186 | 0.7325 | 0.9584 | 0.5379 |
| 0.969 | 9.5109 | 28000 | 0.9219 | 0.9219 | 0.7352 | 0.9602 | 0.5362 |
| 1.0022 | 9.8505 | 29000 | 0.9458 | 0.9458 | 0.7400 | 0.9725 | 0.5242 |
| 0.942 | 10.1902 | 30000 | 0.9746 | 0.9746 | 0.7416 | 0.9872 | 0.5097 |
| 0.9633 | 10.5299 | 31000 | 0.9173 | 0.9173 | 0.7218 | 0.9577 | 0.5386 |
| 0.9463 | 10.8696 | 32000 | 0.9528 | 0.9528 | 0.7443 | 0.9761 | 0.5207 |
| 0.9803 | 11.2092 | 33000 | 0.9042 | 0.9042 | 0.7226 | 0.9509 | 0.5452 |
| 0.9318 | 11.5489 | 34000 | 0.9030 | 0.9030 | 0.7270 | 0.9502 | 0.5458 |
| 0.9176 | 11.8886 | 35000 | 0.9378 | 0.9378 | 0.7314 | 0.9684 | 0.5283 |
| 0.9063 | 12.2283 | 36000 | 0.8946 | 0.8946 | 0.7191 | 0.9458 | 0.5500 |
| 0.9754 | 12.5679 | 37000 | 0.8938 | 0.8938 | 0.7207 | 0.9454 | 0.5504 |
| 0.9291 | 12.9076 | 38000 | 0.9565 | 0.9565 | 0.7503 | 0.9780 | 0.5188 |
| 0.9142 | 13.2473 | 39000 | 0.9238 | 0.9238 | 0.7278 | 0.9611 | 0.5353 |
| 0.9579 | 13.5870 | 40000 | 0.9267 | 0.9267 | 0.7335 | 0.9627 | 0.5338 |
| 0.9556 | 13.9266 | 41000 | 0.9083 | 0.9083 | 0.7197 | 0.9531 | 0.5431 |
| 0.9465 | 14.2663 | 42000 | 0.9228 | 0.9228 | 0.7287 | 0.9606 | 0.5358 |
| 0.9455 | 14.6060 | 43000 | 0.9122 | 0.9122 | 0.7201 | 0.9551 | 0.5411 |
| 0.9294 | 14.9457 | 44000 | 0.9241 | 0.9241 | 0.7307 | 0.9613 | 0.5351 |
| 0.9038 | 15.2853 | 45000 | 0.8985 | 0.8985 | 0.7229 | 0.9479 | 0.5480 |
| 0.9154 | 15.625 | 46000 | 0.9374 | 0.9374 | 0.7451 | 0.9682 | 0.5285 |
| 0.9482 | 15.9647 | 47000 | 0.9487 | 0.9487 | 0.7413 | 0.9740 | 0.5228 |
| 0.9568 | 16.3043 | 48000 | 0.9006 | 0.9006 | 0.7224 | 0.9490 | 0.5470 |
| 0.9902 | 16.6440 | 49000 | 0.9042 | 0.9042 | 0.7200 | 0.9509 | 0.5451 |
| 0.9364 | 16.9837 | 50000 | 0.9053 | 0.9053 | 0.7263 | 0.9515 | 0.5446 |
| 0.9432 | 17.3234 | 51000 | 0.9139 | 0.9139 | 0.7331 | 0.9560 | 0.5403 |
| 0.9288 | 17.6630 | 52000 | 0.9165 | 0.9165 | 0.7285 | 0.9573 | 0.5390 |
| 0.9385 | 18.0027 | 53000 | 0.9081 | 0.9081 | 0.7243 | 0.9529 | 0.5432 |
| 0.9157 | 18.3424 | 54000 | 0.9449 | 0.9449 | 0.7435 | 0.9720 | 0.5247 |
| 0.9666 | 18.6821 | 55000 | 0.8962 | 0.8962 | 0.7174 | 0.9467 | 0.5492 |
| 0.931 | 19.0217 | 56000 | 0.8971 | 0.8971 | 0.7222 | 0.9471 | 0.5487 |
| 0.96 | 19.3614 | 57000 | 0.8975 | 0.8975 | 0.7230 | 0.9473 | 0.5485 |
| 0.9257 | 19.7011 | 58000 | 0.9041 | 0.9041 | 0.7252 | 0.9508 | 0.5452 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1755699736
|
Vasya777
| 2025-08-20T14:23:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:22:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Behzadshomali/16_08_20
|
Behzadshomali
| 2025-08-20T14:23:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Behzadshomali/Teuken3.7B",
"base_model:finetune:Behzadshomali/Teuken3.7B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T14:08:55Z |
---
base_model: Behzadshomali/Teuken3.7B
library_name: transformers
model_name: '16_08_20'
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for 16_08_20
This model is a fine-tuned version of [Behzadshomali/Teuken3.7B](https://huggingface.co/Behzadshomali/Teuken3.7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Behzadshomali/16_08_20", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/behzadshomali/Teuken3.73T_IT_grade-school-math/runs/i9amv9ig)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
aivoryinnovations/jay
|
aivoryinnovations
| 2025-08-20T14:21:41Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-08-20T13:23:08Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
lilTAT/blockassist-bc-gentle_rugged_hare_1755699647
|
lilTAT
| 2025-08-20T14:21:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:21:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aidan-ucc/LoRA-qwen2.5VL-3B-5200
|
aidan-ucc
| 2025-08-20T14:20:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-VL-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-20T14:17:00Z |
---
base_model: unsloth/Qwen2.5-VL-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** aidan-ucc
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-VL-3B-Instruct
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755697967
|
ihsanridzi
| 2025-08-20T14:19:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:19:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yaelahnal/blockassist-bc-mute_clawed_crab_1755699480
|
yaelahnal
| 2025-08-20T14:19:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:18:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755699501
|
0xaoyama
| 2025-08-20T14:18:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:18:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755697900
|
helmutsukocok
| 2025-08-20T14:18:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:18:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lemonhat/Qwen2.5-7B-Instruct-airline_2k_v1_tag5_progress
|
lemonhat
| 2025-08-20T14:18:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T14:16:07Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: airline_2k_v1_tag5_progress
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# airline_2k_v1_tag5_progress
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the airline_2k_v1_tag5_progress dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5416
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755698343
|
Sayemahsjn
| 2025-08-20T14:17:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:16:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
qing223101/blockassist-bc-bellowing_shrewd_tiger_1755697862
|
qing223101
| 2025-08-20T14:16:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bellowing shrewd tiger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:16:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bellowing shrewd tiger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755697776
|
mang3dd
| 2025-08-20T14:16:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:15:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sirius35/Fintuned-distilbert-NER-for-FinTech
|
Sirius35
| 2025-08-20T14:12:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"token-classification",
"ner",
"finance",
"en",
"base_model:dslim/distilbert-NER",
"base_model:finetune:dslim/distilbert-NER",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-08-18T13:50:04Z |
---
language:
- en
license: apache-2.0
tags:
- token-classification
- ner
- finance
library_name: transformers
pipeline_tag: token-classification
base_model: dslim/distilbert-NER
---
# Finance-Oriented NER Model with MISC Extension
This model is based on [dslim/distilbert-NER](https://huggingface.co/dslim/distilbert-NER) and fine-tuned for **Named Entity Recognition (NER)** with an additional focus on financial domain terminology.
---
## Model Overview
- **Base Model**: [dslim/distilbert-NER](https://huggingface.co/dslim/distilbert-NER)
- **Task Type**: Token Classification / NER
- **Modified Label**: `MISC` — Expanded to cover more financial-specific terms (e.g., financial instruments, policy names, industry areas, financial terminology).
- **Objective**: Extend standard NER to not only capture named entities (such as people, organizations, and locations) but also to recognize domain-specific financial terms that describe events and their potential impacts.
---
## Dataset
- **Source**: 50 news articles collected from public sources.
- **Processing Steps**:
1. News articles were summarized using an abstractive summarization model.
2. Summaries were manually annotated to mark standard entities and the extended `MISC` class.
- **Entity Schema**:
- Standard labels: `PER`, `ORG`, `LOC`, `MISC`
- Modified label: `MISC` (financial-specific terms are included)
- Abbreviations and descriptions:

| Abbreviation | Description |
|--------------|-------------|
| O | Outside of a named entity |
| B-MISC | Beginning of a miscellaneous entity right after another miscellaneous entity |
| I-MISC | Miscellaneous entity |
| B-PER | Beginning of a person’s name right after another person’s name |
| I-PER | Person’s name |
| B-ORG | Beginning of an organization right after another organization |
| I-ORG | Organization |
| B-LOC | Beginning of a location right after another location |
| I-LOC | Location |
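To make the schema concrete, here is an illustrative tagging of a made-up sentence (hypothetical example, not drawn from the training data):
```python
# Hypothetical sentence, tagged under the schema above:
#
#   "Citi analysts expect the Federal Reserve's rate cut."
#
#   Citi       -> B-ORG
#   analysts   -> O
#   expect     -> O
#   the        -> O
#   Federal    -> B-ORG
#   Reserve's  -> I-ORG
#   rate       -> B-MISC   (a financial-specific term under the extended MISC)
#   cut        -> I-MISC
#   .          -> O
```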
> Note: The `MISC` label is currently a single broad category. More fine-grained classification will be addressed in the next stage.
---
## Eval results
| Metric | Score |
|------------|-------|
| Loss | 0.4912|
| Precision | 0.7010|
| Recall | 0.7967|
| F1 | 0.7458|
| Accuracy | 0.8914|
## Limitation: Overuse of MISC lowers precision
Annotating many generic financial-specific terms as MISC turns MISC into a broad catch-all class.
This creates a fuzzy decision boundary: the model learns low-specificity rules (“financial-specific token → MISC”), over-predicts MISC, inflates recall, and depresses precision, reducing overall F1.
## Why this happens
MISC becomes a high-frequency, heterogeneous label with weak lexical anchors, conflating named entities with topical vocabulary.
The classifier then favors MISC for many ambiguous tokens, producing systematic false positives and occasional span fragmentation.
## Note on current results (intentional high-recall phase)
The observed precision drop tied to broad MISC usage is largely expected at this stage.
Our near-term objective is to surface domain-specific financial terms that describe entities and their potential impacts, so we intentionally bias for recall and allow MISC to act as a provisional umbrella label.
This high-recall bootstrapping helps collect a candidate lexicon and error patterns for the next iteration.
In subsequent releases, we will narrow MISC and re-annotate with stricter guidelines, introducing more dedicated labels to recover precision while maintaining coverage.
## Usage
```python
from transformers import pipeline
ner_pipe = pipeline("token-classification",
model="Sirius35/Fintuned-distilbert-NER-for-FinTech",
aggregation_strategy="simple")
text = "Citi analysts believe that the Federal Reserve's rate cut will strongly impact the US bond market."
print(ner_pipe(text))
```
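With `aggregation_strategy="simple"`, each prediction in the returned list is a dict with `entity_group`, `score`, `word`, `start`, and `end` keys, so adjacent B-/I- tokens are already merged into full entity spans.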
|
unspokenclap/c-Q4_K_M-GGUF
|
unspokenclap
| 2025-08-20T14:09:29Z | 0 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:unspokenclap/c",
"base_model:quantized:unspokenclap/c",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T13:34:13Z |
---
base_model: unspokenclap/c
tags:
- llama-cpp
- gguf-my-repo
---
# unspokenclap/c-Q4_K_M-GGUF
This model was converted to GGUF format from [`unspokenclap/c`](https://huggingface.co/unspokenclap/c) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/unspokenclap/c) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo unspokenclap/c-Q4_K_M-GGUF --hf-file c-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo unspokenclap/c-Q4_K_M-GGUF --hf-file c-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo unspokenclap/c-Q4_K_M-GGUF --hf-file c-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo unspokenclap/c-Q4_K_M-GGUF --hf-file c-q4_k_m.gguf -c 2048
```
The following Gemma-style Jinja chat template is also included in this card:
```
{{ bos_token }}
{%- set M = messages -%}
{%- if M and M[0]['role'] == 'system' -%}
{# Prepend system text to the first user message #}
{%- set sys = (M[0]['content'] if M[0]['content'] is string
else M[0]['content'][0]['text']) ~ "\n\n" -%}
{%- set M = M[1:] -%}
{%- else -%}
{%- set sys = "" -%}
{%- endif -%}
{%- for m in M -%}
{%- set role = 'model' if m['role'] == 'assistant' else m['role'] -%}
<start_of_turn>{{ role }}
{%- if loop.first -%}{{ sys }}{%- endif -%}
{%- if m['content'] is string -%}
{{ m['content'] | trim }}
{%- else -%}
{%- for item in m['content'] -%}
{%- if item['type'] == 'image' -%}<start_of_image>{%- elif item['type'] == 'text' -%}{{ item['text'] | trim }}{%- endif -%}
{%- endfor -%}
{%- endif -%}<end_of_turn>
{%- endfor -%}
{%- if add_generation_prompt -%}
<start_of_turn>model
{%- endif -%}
```
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755697080
|
kojeklollipop
| 2025-08-20T14:07:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:07:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755697210
|
hakimjustbao
| 2025-08-20T14:06:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:06:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1755698730
|
Vasya777
| 2025-08-20T14:06:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:06:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yaelahnal/blockassist-bc-mute_clawed_crab_1755698712
|
yaelahnal
| 2025-08-20T14:06:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:06:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Curiosity-14-GGUF
|
mradermacher
| 2025-08-20T14:06:04Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"research",
"en",
"base_model:ariankharazmi/Curiosity-14",
"base_model:quantized:ariankharazmi/Curiosity-14",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-08-16T14:53:28Z |
---
base_model: ariankharazmi/Curiosity-14
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- research
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/ariankharazmi/Curiosity-14
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Curiosity-14-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Curiosity-14-GGUF/resolve/main/Curiosity-14.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Curiosity-14-GGUF/resolve/main/Curiosity-14.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Curiosity-14-GGUF/resolve/main/Curiosity-14.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Curiosity-14-GGUF/resolve/main/Curiosity-14.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Curiosity-14-GGUF/resolve/main/Curiosity-14.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Curiosity-14-GGUF/resolve/main/Curiosity-14.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Curiosity-14-GGUF/resolve/main/Curiosity-14.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Curiosity-14-GGUF/resolve/main/Curiosity-14.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Curiosity-14-GGUF/resolve/main/Curiosity-14.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Curiosity-14-GGUF/resolve/main/Curiosity-14.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Curiosity-14-GGUF/resolve/main/Curiosity-14.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Curiosity-14-GGUF/resolve/main/Curiosity-14.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
rembot/Westbot
|
rembot
| 2025-08-20T14:05:35Z | 0 | 0 | null |
[
"en",
"arxiv:1910.09700",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:apache-2.0",
"region:us"
] | null | 2025-08-20T13:56:32Z |
---
license: apache-2.0
language:
- en
base_model:
- microsoft/phi-2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jasonhuang3/bpo-qwen-2-5-7b-math-ep2-our_4_alpha_0.3_lora_28k
|
jasonhuang3
| 2025-08-20T14:02:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T17:39:46Z |
---
base_model: Qwen/Qwen2.5-Math-7B
library_name: transformers
model_name: bpo-qwen-2-5-7b-math-ep2-our_4_alpha_0.3_lora_28k
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for bpo-qwen-2-5-7b-math-ep2-our_4_alpha_0.3_lora_28k
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jasonhuang3/bpo-qwen-2-5-7b-math-ep2-our_4_alpha_0.3_lora_28k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jasonhuang3-school/huggingface/runs/jcdwzlxa)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.19.1
- Transformers: 4.53.1
- Pytorch: 2.4.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
amir-ali-ai/results
|
amir-ali-ai
| 2025-08-20T14:01:53Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:ZharfaTech/ZharfaOpen-0309",
"base_model:finetune:ZharfaTech/ZharfaOpen-0309",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T14:01:50Z |
---
base_model: ZharfaTech/ZharfaOpen-0309
library_name: transformers
model_name: results
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for results
This model is a fine-tuned version of [ZharfaTech/ZharfaOpen-0309](https://huggingface.co/ZharfaTech/ZharfaOpen-0309).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="amir-ali-ai/results", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/amirmaasoumi507-amoozesh/huggingface/runs/mw84ybv6)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Frane92O/OpenReasoning-Nemotron-14B-Q4_0-GGUF
|
Frane92O
| 2025-08-20T14:01:47Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"nvidia",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:nvidia/OpenReasoning-Nemotron-14B",
"base_model:quantized:nvidia/OpenReasoning-Nemotron-14B",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-20T14:01:09Z |
---
license: cc-by-4.0
language:
- en
base_model: nvidia/OpenReasoning-Nemotron-14B
pipeline_tag: text-generation
library_name: transformers
tags:
- nvidia
- code
- llama-cpp
- gguf-my-repo
---
# Frane92O/OpenReasoning-Nemotron-14B-Q4_0-GGUF
This model was converted to GGUF format from [`nvidia/OpenReasoning-Nemotron-14B`](https://huggingface.co/nvidia/OpenReasoning-Nemotron-14B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nvidia/OpenReasoning-Nemotron-14B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Frane92O/OpenReasoning-Nemotron-14B-Q4_0-GGUF --hf-file openreasoning-nemotron-14b-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Frane92O/OpenReasoning-Nemotron-14B-Q4_0-GGUF --hf-file openreasoning-nemotron-14b-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Frane92O/OpenReasoning-Nemotron-14B-Q4_0-GGUF --hf-file openreasoning-nemotron-14b-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Frane92O/OpenReasoning-Nemotron-14B-Q4_0-GGUF --hf-file openreasoning-nemotron-14b-q4_0.gguf -c 2048
```
|
Leoar/blockassist-bc-pudgy_toothy_cheetah_1755698352
|
Leoar
| 2025-08-20T14:01:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pudgy toothy cheetah",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:01:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pudgy toothy cheetah
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Anuar123/A
|
Anuar123
| 2025-08-20T13:59:01Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-20T13:59:01Z |
---
license: apache-2.0
---
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1755696509
|
coelacanthxyz
| 2025-08-20T13:58:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T13:58:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
palyafari/FeedbackClassifierGemma
|
palyafari
| 2025-08-20T13:58:21Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T13:34:40Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: FeedbackClassifierGemma
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for FeedbackClassifierGemma
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="palyafari/FeedbackClassifierGemma", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1755698210
|
Vasya777
| 2025-08-20T13:57:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T13:57:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yaelahnal/blockassist-bc-mute_clawed_crab_1755698201
|
yaelahnal
| 2025-08-20T13:57:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T13:57:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
manancode/opus-mt-gl-pt-ctranslate2-android
|
manancode
| 2025-08-20T12:28:42Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:28:33Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-gl-pt-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-gl-pt` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-gl-pt
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
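For reference, a conversion like this is typically produced with CTranslate2's converters API; below is a minimal sketch, assuming the standard `TransformersConverter` entry point (the output directory name is an arbitrary choice for this example):
```python
from ctranslate2.converters import TransformersConverter

# Convert the original OPUS-MT (Marian) checkpoint to CTranslate2,
# quantizing the weights to INT8 -- this is what yields the size
# reduction described above.
converter = TransformersConverter("Helsinki-NLP/opus-mt-gl-pt")
converter.convert("opus-mt-gl-pt-ct2-int8", quantization="int8")
```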
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
palyafari/MyGemmaNPC
|
palyafari
| 2025-08-20T12:28:39Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T12:25:22Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="palyafari/MyGemmaNPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
manancode/opus-mt-gil-sv-ctranslate2-android
|
manancode
| 2025-08-20T12:28:08Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:27:59Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-gil-sv-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-gil-sv` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-gil-sv
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-gil-fi-ctranslate2-android
|
manancode
| 2025-08-20T12:27:46Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:27:36Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-gil-fi-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-gil-fi` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-gil-fi
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-gil-es-ctranslate2-android
|
manancode
| 2025-08-20T12:27:33Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:27:23Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-gil-es-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-gil-es` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-gil-es
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-gem-gem-ctranslate2-android
|
manancode
| 2025-08-20T12:27:06Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:26:57Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-gem-gem-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-gem-gem` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-gem-gem
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
Youtu-RAG/CoDi-Embedding-V1
|
Youtu-RAG
| 2025-08-20T12:26:50Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"minicpm",
"sentence-similarity",
"custom_code",
"en",
"zh",
"base_model:openbmb/MiniCPM-Embedding",
"base_model:finetune:openbmb/MiniCPM-Embedding",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-20T12:09:56Z |
---
language:
- en
- zh
base_model:
- openbmb/MiniCPM-Embedding
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
## CoDi-Embedding-V1
CoDi-Embedding-V1 is an outstanding embedding model that supports both Chinese and English retrieval, with particularly exceptional performance in Chinese retrieval. It has achieved SOTA results on the Chinese MTEB benchmark as of August 20, 2025. Based on the [MiniCPM-Embedding](https://huggingface.co/openbmb/MiniCPM-Embedding) model, CoDi-Embedding-V1 extends the maximum sequence length from 512 to 4,096 tokens, significantly enhancing its capability for long-document retrieval. The model employs a mean pooling strategy in which tokens from the instruction are excluded during pooling to optimize retrieval effectiveness.
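A minimal sketch of this pooling strategy follows (illustrative only; the function and argument names are assumptions, and the sketch assumes the instruction occupies a prefix of the sequence — it is not the model's actual implementation):
```python
import torch

def mean_pool_excluding_instruction(hidden_states, attention_mask, instruction_len):
    # Zero out the instruction positions so only content tokens are pooled.
    mask = attention_mask.clone()
    mask[:, :instruction_len] = 0
    mask = mask.unsqueeze(-1).type_as(hidden_states)
    summed = (hidden_states * mask).sum(dim=1)   # (batch, hidden)
    counts = mask.sum(dim=1).clamp(min=1e-6)     # avoid division by zero
    return summed / counts                       # mean over content tokens only
```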
### Model Description
- **Maximum Sequence Length:** 4096 tokens
- **Output Dimensionality:** 2304
- **Model Size:** 2.4B
## Requirements
```
transformers>=4.37.2
```
## Usage
### Sentence Transformers
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("Youtu-RAG/CoDi-Embedding-V1", trust_remote_code=True)  # repo ships custom code
queries = ["结算业务系统用户使用"]
documents = [
"根据解冻日输入范围,查询出该时间范围内到期的账户冻结列表。",
"智能定期存款到期日为节假日时处理”设置提前或顺延,支持智能定期证实书提前或顺延到期提醒。",
"账户开户时设置了账户到期日,账户到期提醒是根据全机构系统参数设置"
]
query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)
# Get the similarity scores for the embeddings
similarity = model.similarity(query_embeddings, document_embeddings)
print(similarity)
```
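Here `similarity` is a tensor of shape `(len(queries), len(documents))`; higher scores indicate closer query–document matches.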
|
syuvers/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mangy_melodic_raven
|
syuvers
| 2025-08-20T12:25:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am mangy_melodic_raven",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T12:07:21Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am mangy_melodic_raven
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
EmilRyd/gpt-oss-ground-truth-60
|
EmilRyd
| 2025-08-20T12:24:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T12:18:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
manancode/opus-mt-fr-wls-ctranslate2-android
|
manancode
| 2025-08-20T12:24:16Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:24:07Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-wls-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-wls` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-wls
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
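### Converting the original model
For reference, this repository's files can be reproduced from the original checkpoint with CTranslate2's converter. The sketch below is a minimal example, assuming `ctranslate2`, `transformers`, and `sentencepiece` are installed; the output directory name is illustrative.
```python
import ctranslate2
# Convert the original Hugging Face checkpoint to CTranslate2 format
# with INT8 weight quantization (the format used by this repository).
converter = ctranslate2.converters.TransformersConverter("Helsinki-NLP/opus-mt-fr-wls")
converter.convert("opus-mt-fr-wls-ct2-int8", quantization="int8")
```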
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fr-war-ctranslate2-android
|
manancode
| 2025-08-20T12:24:04Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:23:54Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-war-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-war` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-war
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fr-vi-ctranslate2-android
|
manancode
| 2025-08-20T12:23:51Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:23:42Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-vi-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-vi` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-vi
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
Vanbitcase/modelqwen
|
Vanbitcase
| 2025-08-20T12:23:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_5_vl",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T12:22:04Z |
---
base_model: unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Vanbitcase
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
manancode/opus-mt-fr-uk-ctranslate2-android
|
manancode
| 2025-08-20T12:23:28Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:23:19Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-uk-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-uk` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-uk
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = smp.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
Milica-y-Angel-David-video/watch-full-original-clip
|
Milica-y-Angel-David-video
| 2025-08-20T12:23:16Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T12:22:55Z |
|
manancode/opus-mt-fr-tvl-ctranslate2-android
|
manancode
| 2025-08-20T12:22:52Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:22:42Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-tvl-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-tvl` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-tvl
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fr-ts-ctranslate2-android
|
manancode
| 2025-08-20T12:22:28Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:22:17Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-ts-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-ts` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-ts
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fr-tn-ctranslate2-android
|
manancode
| 2025-08-20T12:21:49Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:21:40Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-tn-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-tn` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-tn
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fr-tl-ctranslate2-android
|
manancode
| 2025-08-20T12:21:25Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:21:16Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-tl-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-tl` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-tl
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
Jawaker/t5-base-tcp-new-state
|
Jawaker
| 2025-08-20T12:20:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T11:47:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fnlp/XY_Tokenizer_TTSD_V0
|
fnlp
| 2025-08-20T12:20:51Z | 0 | 6 | null |
[
"pytorch",
"xy_tokenizer",
"arxiv:2506.23325",
"license:apache-2.0",
"region:us"
] | null | 2025-06-20T09:16:30Z |
---
license: apache-2.0
---
# **Introduction**
**`XY-Tokenizer`** is a speech codec that simultaneously models both semantic and acoustic aspects of speech, converting audio into discrete tokens and decoding them back to high-quality audio. It achieves efficient speech representation at only 1kbps with RVQ8 quantization at 12.5Hz frame rate.
- **Paper:** [Read on arXiv](https://arxiv.org/abs/2506.23325)
- **Source Code:**
- [GitHub Repo](https://github.com/OpenMOSS/MOSS-TTSD/tree/main/XY_Tokenizer)
- [Hugging Face Repo](https://huggingface.co/spaces/fnlp/MOSS-TTSD/tree/main/XY_Tokenizer)
## 📚 Related Project: **[MOSS-TTSD](https://huggingface.co/fnlp/MOSS-TTSD-v0.5)**
**`XY-Tokenizer`** serves as the underlying neural codec for **`MOSS-TTSD`**, our 1.7B Audio Language Model. \
Explore **`MOSS-TTSD`** for advanced text-to-speech and other audio generation tasks on [GitHub](https://github.com/OpenMOSS/MOSS-TTSD), [Blog](http://www.open-moss.com/en/moss-ttsd/), [Blog (Chinese)](https://www.open-moss.com/cn/moss-ttsd/), and [Space Demo](https://huggingface.co/spaces/fnlp/MOSS-TTSD).
## ✨ Features
- **Dual-channel modeling**: Simultaneously captures semantic meaning and acoustic details
- **Efficient representation**: 1kbps bitrate with RVQ8 quantization at 12.5Hz
- **High-quality audio tokenization**: Convert speech to discrete tokens and back with minimal quality loss
- **Long audio support**: Process audio files longer than 30 seconds using chunking with overlap
- **Batch processing**: Efficiently process multiple audio files in batches
- **24kHz output**: Generate high-quality 24kHz audio output
## 🚀 Installation
```bash
git clone https://github.com/OpenMOSS/MOSS-TTSD.git
cd MOSS-TTSD
conda create -n xy_tokenizer python=3.10 -y && conda activate xy_tokenizer
pip install -r XY_Tokenizer/requirements.txt
```
## 💻 Quick Start
Here's how to use **`XY-Tokenizer`** with `transformers` to encode an audio file into discrete tokens and decode it back into a waveform.
```python
import torchaudio
from transformers import AutoFeatureExtractor, AutoModel
# 1. Load the feature extractor and the codec model
model_id = "fnlp/XY_Tokenizer_TTSD_V0"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id, trust_remote_code=True)
codec = AutoModel.from_pretrained(model_id, trust_remote_code=True).eval().to("cuda")
# 2. Load and preprocess the audio
# The model expects a 16kHz sample rate.
wav_form, sampling_rate = torchaudio.load("examples/m1.wav")
if sampling_rate != 16000:
    wav_form = torchaudio.functional.resample(wav_form, orig_freq=sampling_rate, new_freq=16000)
# 3. Encode the audio into discrete codes
input_features = feature_extractor(wav_form, sampling_rate=16000, return_attention_mask=True, return_tensors="pt")
# The 'code' dictionary contains the discrete audio codes
code = codec.encode(input_features)
print(code)
# 4. Decode the codes back to an audio waveform
# The output is high-quality 24kHz audio.
output_wav = codec.decode(code["audio_codes"], overlap_seconds=10)
# 5. Save the reconstructed audio
for i, audio in enumerate(output_wav["audio_values"]):
    torchaudio.save(f"audio_{i}.wav", audio.cpu(), 24000)
```
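Note that `encode` consumes 16 kHz audio while `decode` reconstructs 24 kHz output, so a round trip implicitly resamples. The `overlap_seconds` argument controls the chunk overlap used when decoding audio longer than 30 seconds (see the long-audio feature above); a larger overlap costs more compute but should reduce artifacts at chunk boundaries.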
|
manancode/opus-mt-fr-sv-ctranslate2-android
|
manancode
| 2025-08-20T12:20:50Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:20:39Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-sv-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-sv` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-sv
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fr-srn-ctranslate2-android
|
manancode
| 2025-08-20T12:20:23Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:20:13Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-srn-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-srn` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-srn
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fr-sn-ctranslate2-android
|
manancode
| 2025-08-20T12:20:10Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:20:00Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-sn-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-sn` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-sn
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755690732
|
vwzyrraz7l
| 2025-08-20T12:19:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T12:19:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
manancode/opus-mt-fr-pon-ctranslate2-android
|
manancode
| 2025-08-20T12:18:04Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:17:55Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-pon-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-pon` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-pon
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fr-pl-ctranslate2-android
|
manancode
| 2025-08-20T12:17:52Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:17:43Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-pl-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-pl` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-pl
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fr-pap-ctranslate2-android
|
manancode
| 2025-08-20T12:17:27Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:17:18Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-pap-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-pap` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-pap
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fr-nso-ctranslate2-android
|
manancode
| 2025-08-20T12:17:01Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:16:52Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-nso-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-nso` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-nso
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
unitova/blockassist-bc-zealous_sneaky_raven_1755690649
|
unitova
| 2025-08-20T12:16:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T12:16:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
manancode/opus-mt-fr-no-ctranslate2-android
|
manancode
| 2025-08-20T12:16:49Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:16:38Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-no-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-no` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-no
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fr-niu-ctranslate2-android
|
manancode
| 2025-08-20T12:16:36Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:16:26Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-niu-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-niu` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-niu
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
BoddyGus/dummy-model
|
BoddyGus
| 2025-08-20T12:16:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-08-20T12:15:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|