modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-02 18:52:31) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 533 classes) | tags (list, length 1 – 4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-02 18:52:05) | card (string, length 11 – 1.01M) |
---|---|---|---|---|---|---|---|---|---|
Azese/distilbert-imdb-sentiment-analysis
|
Azese
| 2025-09-02T08:46:12Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-02T08:37:42Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-imdb-sentiment-analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-imdb-sentiment-analysis
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6972
- eval_model_preparation_time: 0.0023
- eval_accuracy: 0.4067
- eval_f1: 0.4035
- eval_runtime: 8.4538
- eval_samples_per_second: 35.487
- eval_steps_per_second: 2.248
- step: 0
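As a usage sketch (not part of the original card), the checkpoint is tagged for the `transformers` text-classification pipeline, so it should load as below; the label names depend on the fine-tuning config and are not documented here.
```python
from transformers import pipeline

# Sketch only: assumes the checkpoint works with the standard text-classification
# pipeline; label names come from the (undocumented) fine-tuning config.
classifier = pipeline("text-classification", model="Azese/distilbert-imdb-sentiment-analysis")
print(classifier("A surprisingly heartfelt film with a clumsy final act."))
```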
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
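These settings roughly correspond to the following `transformers.TrainingArguments` sketch (values copied from the list above; `output_dir` is a placeholder, not part of the original card):
```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="distilbert-imdb-sentiment-analysis",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch_fused",  # OptimizerNames.ADAMW_TORCH_FUSED
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```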
### Framework versions
- Transformers 4.55.4
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
omerbkts/blockassist-bc-keen_fast_giraffe_1756802728
|
omerbkts
| 2025-09-02T08:45:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T08:45:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DavidAU/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium
|
DavidAU
| 2025-09-02T08:45:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"programming",
"code generation",
"code",
"codeqwen",
"moe",
"coding",
"coder",
"qwen2",
"chat",
"qwen",
"qwen-coder",
"finetune",
"brainstorm 20x",
"brainstorm",
"optional thinking",
"creative",
"all use cases",
"QiMing",
"QiMing-holos",
"bagua",
"decision-making",
"strategic-analysis",
"cognitive-architecture",
"philosophy-driven-ai",
"conversational",
"en",
"fr",
"zh",
"de",
"base_model:aifeifei798/QiMing-v1.0-14B",
"base_model:finetune:aifeifei798/QiMing-v1.0-14B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T05:35:20Z |
---
license: apache-2.0
library_name: transformers
language:
- en
- fr
- zh
- de
tags:
- programming
- code generation
- code
- codeqwen
- moe
- coding
- coder
- qwen2
- chat
- qwen
- qwen-coder
- qwen3
- finetune
- brainstorm 20x
- brainstorm
- optional thinking
- creative
- all use cases
- QiMing
- QiMing-holos
- bagua
- decision-making
- strategic-analysis
- cognitive-architecture
- philosophy-driven-ai
base_model:
- aifeifei798/QiMing-v1.0-14B
pipeline_tag: text-generation
---
<h2>Qwen3-17B-QiMing-V1.0-Total-Recall-Medium</h2>
QiMing-v1.0-14B with Brainstorm 8x (by DavidAU) applied.
Part of a project to benchmark Brainstorm versions.
[ more to come ]
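A hedged loading sketch (not part of the original card): the repo is tagged as a `transformers` text-generation model, so it should work with the standard causal-LM classes; the prompt and generation settings below are illustrative only.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative sketch; generation settings are arbitrary, not recommendations.
model_id = "DavidAU/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Brainstorm three unconventional product ideas."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```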
|
kavpro/blockassist-bc-tall_lively_caribou_1756802634
|
kavpro
| 2025-09-02T08:44:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall lively caribou",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T08:44:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall lively caribou
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gubam/qwen2-2b-instruct-orientation
|
gubam
| 2025-09-02T08:43:45Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-2B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-09-01T15:10:59Z |
---
base_model: Qwen/Qwen2-VL-2B-Instruct
library_name: transformers
model_name: qwen2-2b-instruct-orientation
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2-2b-instruct-orientation
This model is a fine-tuned version of [Qwen/Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="gubam/qwen2-2b-instruct-orientation", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0.dev0
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
giovannidemuri/llama3b-llama8b-er-v535-seed2-seed2-hx-alpaca-fpt
|
giovannidemuri
| 2025-09-02T08:41:39Z | 27 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-01T23:49:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bah63843/blockassist-bc-plump_fast_antelope_1756802261
|
bah63843
| 2025-09-02T08:38:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T08:38:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
deepak88/WikiGemma330M
|
deepak88
| 2025-09-02T08:38:30Z | 0 | 1 | null |
[
"pytorch",
"region:us"
] | null | 2025-09-01T14:41:24Z |
---
model-index:
- name: gemma-from-scratch
results: []
---
# My Gemma-like Model from Scratch
This model is a custom implementation of a Gemma-like architecture, trained from scratch.
## Training Details
- **Architecture**: An 18-layer decoder-only transformer with Grouped-Query Attention.
- **Data**: Trained on the Wikitext-2 dataset.
- **Training Script**: The training script is available on GitHub at [https://github.com/your_github_repo](https://github.com/your_github_repo).
- **Parameters**: Total trainable parameters: 330.64 million.
### Checkpointing
The training script includes a checkpointing mechanism. It automatically saves the model's progress every 50 steps and at the end of each epoch to a file named `checkpoint.pt`. You can resume training by simply re-running the script. The final model is saved as `pytorch_model.bin`.
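A minimal sketch of the checkpointing behaviour described above (the `checkpoint.pt` file name comes from this card; `model` and `optimizer` are assumed to exist in the training script):
```python
import os
import torch

CKPT_PATH = "checkpoint.pt"

def save_checkpoint(model, optimizer, epoch, step):
    # Called every 50 steps and at the end of each epoch in the training script.
    torch.save(
        {"model": model.state_dict(), "optimizer": optimizer.state_dict(),
         "epoch": epoch, "step": step},
        CKPT_PATH,
    )

def maybe_resume(model, optimizer):
    # Re-running the script picks up from the last saved state, if any.
    if not os.path.exists(CKPT_PATH):
        return 0, 0
    ckpt = torch.load(CKPT_PATH, map_location="cpu")
    model.load_state_dict(ckpt["model"])
    optimizer.load_state_dict(ckpt["optimizer"])
    return ckpt["epoch"], ckpt["step"]
```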
### Early Stopping
To prevent overfitting, the training process includes early stopping based on the validation loss. The script will monitor the loss on a dedicated validation set and stop training if it does not improve for 2 consecutive epochs.
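A hedged sketch of that early-stopping rule (patience of 2 epochs on validation loss; the callables stand in for the script's own training and validation steps):
```python
def train_with_early_stopping(train_one_epoch, evaluate, num_epochs, patience=2):
    """Stop when the validation loss fails to improve for `patience` consecutive
    epochs. `train_one_epoch` and `evaluate` are placeholders for the script's
    own training loop and validation-loss computation."""
    best_loss = float("inf")
    bad_epochs = 0
    for epoch in range(num_epochs):
        train_one_epoch()
        val_loss = evaluate()
        if val_loss < best_loss:
            best_loss, bad_epochs = val_loss, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                print(f"Early stopping at epoch {epoch}")
                break
```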
## Loading and Chatting with the Model
Since this model uses a custom architecture, it requires the model class definitions from the training script to be loaded.
Here's a step-by-step guide to get started:
1. **Install Required Libraries**:
```bash
pip install torch huggingface-hub tokenizers
```
2. **Copy the Model Architecture**:
Copy the `GemmaForCausalLM` class and all its required sub-classes (`RMSNorm`, `RotaryPositionalEmbedding`, `MultiHeadAttention`, `MLP`, `TransformerBlock`) from this training script into your new Python file.
3. **Load the Model and Tokenizer**:
```python
import torch
from huggingface_hub import hf_hub_download
from tokenizers import Tokenizer
# Define your model's hyperparameters
config = {
"vocab_size": 30000,
"hidden_size": 1024,
"num_attention_heads": 8,
"num_key_value_heads": 1,
"num_layers": 18,
"intermediate_size": 4096,
"max_position_embeddings": 32768,
"attention_dropout": 0.0,
"hidden_dropout": 0.0,
"sliding_window": 512,
"device": "cuda" if torch.cuda.is_available() else "cpu"
}
# Instantiate the custom model and load the weights
model = GemmaForCausalLM(config)
model_path = hf_hub_download(repo_id="your_username/gemma-from-scratch", filename="pytorch_model.bin")
model.load_state_dict(torch.load(model_path, map_location=config["device"]))
model.to(config["device"]).eval()
# Load the tokenizer
tokenizer_path = hf_hub_download(repo_id="your_username/gemma-from-scratch", filename="tokenizer.json")
tokenizer = Tokenizer.from_file(tokenizer_path)
```
4. **Generate Text**:
```python
def generate_text(model, tokenizer, prompt, max_length=50):
input_ids = tokenizer.encode(prompt).ids
input_tensor = torch.tensor(input_ids).unsqueeze(0).to(config["device"])
with torch.no_grad():
for _ in range(max_length):
logits, _ = model(input_tensor)
next_token_logits = logits[:, -1, :]
next_token = torch.argmax(next_token_logits, dim=-1).unsqueeze(0)
input_tensor = torch.cat([input_tensor, next_token], dim=-1)
# Stop if we generate the end-of-sentence token
if next_token.item() == tokenizer.token_to_id("</s>"):
break
return tokenizer.decode(input_tensor[0].tolist(), skip_special_tokens=True)
# Example usage
prompt = "The early bird catches the worm, but the second mouse gets the "
generated_text = generate_text(model, tokenizer, prompt)
print("Generated Text:")
print(generated_text)
```
> **Note**: This model is for demonstration purposes. Its custom architecture is not directly compatible with the Hugging Face `transformers` library out-of-the-box. To use the model, you must also include the full model class definitions in your script.
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756802127
|
omerbektass
| 2025-09-02T08:35:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T08:35:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ChakuChidiya/cheques_train_model_final_three
|
ChakuChidiya
| 2025-09-02T08:35:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-to-text",
"generated_from_trainer",
"base_model:naver-clova-ix/donut-base-finetuned-docvqa",
"base_model:finetune:naver-clova-ix/donut-base-finetuned-docvqa",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-02T08:31:11Z |
---
library_name: transformers
license: mit
base_model: naver-clova-ix/donut-base-finetuned-docvqa
tags:
- generated_from_trainer
model-index:
- name: cheques_train_model_final_three
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cheques_train_model_final_three
This model is a fine-tuned version of [naver-clova-ix/donut-base-finetuned-docvqa](https://huggingface.co/naver-clova-ix/donut-base-finetuned-docvqa) on an unknown dataset.
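As a hypothetical usage sketch (not part of the original card), the repo is tagged `image-to-text`, so it may work with the standard pipeline; the expected Donut task prompt and output schema are not documented here.
```python
from transformers import pipeline

# Sketch only: assumes the checkpoint works with the generic image-to-text pipeline.
# The Donut task prompt / output schema for cheque parsing is not documented in this card.
extractor = pipeline("image-to-text", model="ChakuChidiya/cheques_train_model_final_three")
print(extractor("path/to/cheque.png"))
```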
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
NikolayKozloff/silly-v0.2-Q5_K_S-GGUF
|
NikolayKozloff
| 2025-09-02T08:35:20Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:wave-on-discord/silly-v0.2",
"base_model:quantized:wave-on-discord/silly-v0.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-02T08:34:46Z |
---
license: apache-2.0
base_model: wave-on-discord/silly-v0.2
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/silly-v0.2-Q5_K_S-GGUF
This model was converted to GGUF format from [`wave-on-discord/silly-v0.2`](https://huggingface.co/wave-on-discord/silly-v0.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/wave-on-discord/silly-v0.2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/silly-v0.2-Q5_K_S-GGUF --hf-file silly-v0.2-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/silly-v0.2-Q5_K_S-GGUF --hf-file silly-v0.2-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/silly-v0.2-Q5_K_S-GGUF --hf-file silly-v0.2-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/silly-v0.2-Q5_K_S-GGUF --hf-file silly-v0.2-q5_k_s.gguf -c 2048
```
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1756800361
|
coelacanthxyz
| 2025-09-02T08:33:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T08:33:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756801901
|
akirafudo
| 2025-09-02T08:32:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T08:31:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
capungmerah627/blockassist-bc-stinging_soaring_porcupine_1756800367
|
capungmerah627
| 2025-09-02T08:31:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinging soaring porcupine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T08:31:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging soaring porcupine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Pothong/llama3-chat-lora
|
Pothong
| 2025-09-02T08:30:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-07-21T07:51:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Aman/CrEval-7b
|
Aman
| 2025-09-02T08:29:14Z | 0 | 1 | null |
[
"safetensors",
"arxiv:2505.19236",
"license:mit",
"region:us"
] | null | 2025-09-01T13:03:27Z |
---
license: mit
---
<a name="readme-top"></a>
<p align="center">
<img src="figs/favicon.svg" alt="Logo" width="150">
<h1 align="center">Evaluating Text Creativity across Diverse Domains:<br/>A Dataset and a Large Language Model Evaluator</h1>
</p>
<div align="center">
<a href="https://creval-creative-evaluation.github.io/"><img src="https://img.shields.io/badge/Project%20Page-666?logo=googledocs&logoColor=FFE165&style=for-the-badge" alt="homepage"></a>
<a href="https://arxiv.org/pdf/2505.19236"><img src="https://img.shields.io/badge/arXiv%20paper-666?logo=arxiv&logoColor=FFE165&style=for-the-badge" alt="arXiv"></a>
<br/>
<a href="https://huggingface.co/datasets/Aman/CreataSet"><img src="https://img.shields.io/badge/CreataSet-dataset-blue?logo=databricks&logoColor=white&style=for-the-badge" alt="arXiv"></a>
<a href="https://huggingface.co/Aman/CrEval-7b"><img src="https://img.shields.io/badge/model-7b-purple?logo=huggingface&logoColor=yellow&style=for-the-badge" alt="arXiv"></a>
<a href="https://huggingface.co/Aman/CrEval-14b"><img src="https://img.shields.io/badge/model-14b-purple?logo=huggingface&logoColor=yellow&style=for-the-badge" alt="arXiv"></a>
<a href="https://github.com/Aman-4-Real/CrEval"><img src="https://img.shields.io/badge/github-code-black?logo=github&logoColor=white&style=for-the-badge" alt="arXiv"></a>
<br/>
<hr>
</div>
## 🔥 News
<div class="scrollable">
<ul>
<li><strong>[2025, Sep 01]</strong>: 🎉🎉We release the dataset <a href="https://huggingface.co/datasets/Aman/CreataSet">CreataSet</a> and our creativity evaluation models <a href="https://huggingface.co/Aman/CrEval-7b">CrEval-7b</a> & <a href="https://huggingface.co/Aman/CrEval-14b">CrEval-14b</a>. Feel free to use them!</li>
<li><strong>[2025, May 25]</strong>: 🎉🎉Our <a href="https://arxiv.org/pdf/2505.19236">arXiv paper</a> is available! Check it out for more details.</li>
</ul>
</div>
<span id='table-of-contents'/>
## 📍 Brief Intro
We introduce **CrEval**, the 1st LLM-based evaluator for pairwise creativity evaluation, outperforming GPT-4o by 18.7% in human agreement, and **CreataSet**, a large-scale dataset of over **1M** creative instruction-response pairs across **87** domains. CrEval is a creativity evaluation model based on a pairwise comparison protocol, designed to advance automated evaluation of text creativity. CreataSet can facilitate the meta-evaluation of pairwise comparison models for assessing text creativity. It can also be used for training creative generation models. For more details, please refer to our [paper](https://arxiv.org/abs/2505.19236).
## Quickstart 🤗
You can use our CrEval model via the inference methods provided by [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).
Please refer to our [GitHub repo](https://github.com/Aman-4-Real/CrEval) for more details.
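As a rough starting point (not from the original card), the checkpoint is published as safetensors and may load with the standard `transformers` causal-LM classes; the actual pairwise-comparison prompt template is defined in the GitHub repo, so the prompt below is only a placeholder.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Rough sketch only; the real pairwise-comparison prompt template is defined in
# the CrEval GitHub repo. The prompt string below is a placeholder.
model_id = "Aman/CrEval-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Instruction: ...\nResponse A: ...\nResponse B: ...\nWhich response is more creative?"  # placeholder
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```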
<hr>
> *We respect and uphold the usage terms of the original data providers. If you believe that any part of this dataset affects your legal rights or raises other concerns, please reach out to us. We will carefully review your request and respond without delay.*
<h2> Please cite our paper if you find our work useful. </h2>
```
@article{cao2025evaluating,
title={Evaluating Text Creativity across Diverse Domains: A Dataset and Large Language Model Evaluator},
author={Cao, Qian and Wang, Xiting and Yuan, Yuzhuo and Liu, Yahui and Luo, Fang and Song, Ruihua},
journal={arXiv preprint arXiv:2505.19236},
year={2025}
}
```
For any questions, please feel free to reach me at caoqian4real@ruc.edu.cn.
|
bah63843/blockassist-bc-plump_fast_antelope_1756801645
|
bah63843
| 2025-09-02T08:28:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T08:28:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756801628
|
Ferdi3425
| 2025-09-02T08:28:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T08:27:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756801511
|
akirafudo
| 2025-09-02T08:25:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T08:25:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hlttxdy/STAR-1_DeepSeek-R1-Distill-Llama-8B_sft-complete-dpo
|
hlttxdy
| 2025-09-02T08:25:06Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-02T08:25:06Z |
---
license: apache-2.0
---
|
2hpsatt/blockassist-bc-huge_deft_eagle_1756801074
|
2hpsatt
| 2025-09-02T08:19:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"huge deft eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T08:19:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge deft eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kavpro/blockassist-bc-tall_lively_caribou_1756801073
|
kavpro
| 2025-09-02T08:18:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall lively caribou",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T08:18:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall lively caribou
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Koto-Small-7B-IT-i1-GGUF
|
mradermacher
| 2025-09-02T08:18:23Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"writing",
"creative-writing",
"roleplay",
"en",
"base_model:Aurore-Reveil/Koto-Small-7B-IT",
"base_model:quantized:Aurore-Reveil/Koto-Small-7B-IT",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-02T07:25:56Z |
---
base_model: Aurore-Reveil/Koto-Small-7B-IT
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- writing
- creative-writing
- roleplay
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/Aurore-Reveil/Koto-Small-7B-IT
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Koto-Small-7B-IT-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Koto-Small-7B-IT-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
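For example, a single quant can be pulled and run directly with llama.cpp (illustrative only; substitute any file name from the table below):
```bash
# Illustrative invocation; pick any .gguf file name from the "Provided Quants" table.
llama-cli --hf-repo mradermacher/Koto-Small-7B-IT-i1-GGUF \
  --hf-file Koto-Small-7B-IT.i1-Q4_K_M.gguf \
  -p "Write a short scene set in a rainy train station."
```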
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-i1-GGUF/resolve/main/Koto-Small-7B-IT.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-i1-GGUF/resolve/main/Koto-Small-7B-IT.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-i1-GGUF/resolve/main/Koto-Small-7B-IT.i1-IQ1_M.gguf) | i1-IQ1_M | 2.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-i1-GGUF/resolve/main/Koto-Small-7B-IT.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-i1-GGUF/resolve/main/Koto-Small-7B-IT.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-i1-GGUF/resolve/main/Koto-Small-7B-IT.i1-IQ2_S.gguf) | i1-IQ2_S | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-i1-GGUF/resolve/main/Koto-Small-7B-IT.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-i1-GGUF/resolve/main/Koto-Small-7B-IT.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.0 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-i1-GGUF/resolve/main/Koto-Small-7B-IT.i1-Q2_K.gguf) | i1-Q2_K | 3.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-i1-GGUF/resolve/main/Koto-Small-7B-IT.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-i1-GGUF/resolve/main/Koto-Small-7B-IT.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-i1-GGUF/resolve/main/Koto-Small-7B-IT.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-i1-GGUF/resolve/main/Koto-Small-7B-IT.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-i1-GGUF/resolve/main/Koto-Small-7B-IT.i1-IQ3_M.gguf) | i1-IQ3_M | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-i1-GGUF/resolve/main/Koto-Small-7B-IT.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-i1-GGUF/resolve/main/Koto-Small-7B-IT.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-i1-GGUF/resolve/main/Koto-Small-7B-IT.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-i1-GGUF/resolve/main/Koto-Small-7B-IT.i1-Q4_0.gguf) | i1-Q4_0 | 4.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-i1-GGUF/resolve/main/Koto-Small-7B-IT.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-i1-GGUF/resolve/main/Koto-Small-7B-IT.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-i1-GGUF/resolve/main/Koto-Small-7B-IT.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-i1-GGUF/resolve/main/Koto-Small-7B-IT.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-i1-GGUF/resolve/main/Koto-Small-7B-IT.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-i1-GGUF/resolve/main/Koto-Small-7B-IT.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Koto-Small-7B-IT-i1-GGUF/resolve/main/Koto-Small-7B-IT.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756800990
|
liukevin666
| 2025-09-02T08:17:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T08:17:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
arturkakraft/blockassist-bc-arctic_purring_camel_1756799880
|
arturkakraft
| 2025-09-02T08:17:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"arctic purring camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T08:16:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- arctic purring camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbkts/blockassist-bc-keen_fast_giraffe_1756800919
|
omerbkts
| 2025-09-02T08:15:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T08:15:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rvipitkirubbe/blockassist-bc-mottled_foraging_ape_1756799246
|
rvipitkirubbe
| 2025-09-02T08:15:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mottled foraging ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T08:15:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mottled foraging ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
wbz0505/m2t-ft-from-GSPretrained-base
|
wbz0505
| 2025-09-02T08:13:35Z | 0 | 0 | null |
[
"pytorch",
"t5",
"arxiv:2504.02478",
"license:apache-2.0",
"region:us"
] | null | 2025-09-02T05:58:28Z |
---
license: apache-2.0
---
# Model Description
This is the Motion-to-Text (M2T) model in MG-MotionLLM.
For more details, see the [GitHub page & code](https://github.com/BizhuWu/MG-MotionLLM) and the [paper](https://arxiv.org/abs/2504.02478).
|
TohanBoss/blockassist-bc-regal_spotted_pelican_1756800732
|
TohanBoss
| 2025-09-02T08:13:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal spotted pelican",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T08:13:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal spotted pelican
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rl-rag/qwen2.5-7b-combined-sft-training-data-v20250824_MiroSystemPrompt
|
rl-rag
| 2025-09-02T08:13:25Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-31T07:47:49Z |
---
library_name: transformers
license: other
base_model: qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: qwen2.5-7b-combined-sft-training-data-v20250824_MiroSystemPrompt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2.5-7b-combined-sft-training-data-v20250824_MiroSystemPrompt
This model is a fine-tuned version of [qwen/Qwen2.5-7B-Instruct](https://huggingface.co/qwen/Qwen2.5-7B-Instruct) on the rl-rag/combined-sft-training-data-v20250824_MiroSystemPrompt dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 13.0
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
kimxxxx/phi_r32_a64_b8_gas4_lr5e-5_4500tk_3epoch
|
kimxxxx
| 2025-09-02T08:13:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-02T08:12:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sekirr/blockassist-bc-masked_tenacious_whale_1756800669
|
sekirr
| 2025-09-02T08:11:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T08:11:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756800611
|
omerbektass
| 2025-09-02T08:10:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T08:10:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
TohanBoss/blockassist-bc-regal_spotted_pelican_1756800513
|
TohanBoss
| 2025-09-02T08:09:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal spotted pelican",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T08:09:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal spotted pelican
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
okuzarabasi/Qwen3-0.6B-Gensyn-Swarm-flapping_marine_slug
|
okuzarabasi
| 2025-09-02T08:07:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am flapping_marine_slug",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T08:05:41Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am flapping_marine_slug
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756800346
|
liukevin666
| 2025-09-02T08:07:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T08:06:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ChakuChidiya/cheques_train_model_final_one
|
ChakuChidiya
| 2025-09-02T08:07:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-to-text",
"generated_from_trainer",
"base_model:naver-clova-ix/donut-base-finetuned-docvqa",
"base_model:finetune:naver-clova-ix/donut-base-finetuned-docvqa",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-02T08:02:52Z |
---
library_name: transformers
license: mit
base_model: naver-clova-ix/donut-base-finetuned-docvqa
tags:
- generated_from_trainer
model-index:
- name: cheques_train_model_final_one
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cheques_train_model_final_one
This model is a fine-tuned version of [naver-clova-ix/donut-base-finetuned-docvqa](https://huggingface.co/naver-clova-ix/donut-base-finetuned-docvqa) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
VirtualKimi/rStar2-Agent-14B-Q8_0-GGUF
|
VirtualKimi
| 2025-09-02T08:06:48Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"reinforcement-learning",
"agentic-reasoning",
"math-reasoning",
"tool-use",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"zh",
"base_model:rstar2-reproduce/rStar2-Agent-14B",
"base_model:quantized:rstar2-reproduce/rStar2-Agent-14B",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T08:05:43Z |
---
language:
- en
- zh
license: mit
pipeline_tag: text-generation
tags:
- reinforcement-learning
- agentic-reasoning
- math-reasoning
- tool-use
- llama-cpp
- gguf-my-repo
library_name: transformers
base_model: rstar2-reproduce/rStar2-Agent-14B
---
# VirtualKimi/rStar2-Agent-14B-Q8_0-GGUF
This model was converted to GGUF format from [`rstar2-reproduce/rStar2-Agent-14B`](https://huggingface.co/rstar2-reproduce/rStar2-Agent-14B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/rstar2-reproduce/rStar2-Agent-14B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo VirtualKimi/rStar2-Agent-14B-Q8_0-GGUF --hf-file rstar2-agent-14b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo VirtualKimi/rStar2-Agent-14B-Q8_0-GGUF --hf-file rstar2-agent-14b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo VirtualKimi/rStar2-Agent-14B-Q8_0-GGUF --hf-file rstar2-agent-14b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo VirtualKimi/rStar2-Agent-14B-Q8_0-GGUF --hf-file rstar2-agent-14b-q8_0.gguf -c 2048
```
|
giovannidemuri/llama3b-llama8b-er-v534-seed2-seed2-hx-alpaca-fpt
|
giovannidemuri
| 2025-09-02T08:04:57Z | 30 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T00:25:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pidbu/blockassist-bc-whistling_alert_shrew_1756800160
|
pidbu
| 2025-09-02T08:03:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whistling alert shrew",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T08:03:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling alert shrew
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jamesthong/qwen3-4B-16bit-grpo-finqa
|
jamesthong
| 2025-09-02T08:02:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-4B-Base",
"base_model:finetune:unsloth/Qwen3-4B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T07:22:57Z |
---
base_model: unsloth/Qwen3-4B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** jamesthong
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
omerbkts/blockassist-bc-keen_fast_giraffe_1756800121
|
omerbkts
| 2025-09-02T08:02:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T08:02:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnerYubo/blockassist-bc-hairy_crested_fox_1756800133
|
AnerYubo
| 2025-09-02T08:02:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy crested fox",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T08:02:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy crested fox
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
prithivMLmods/Qwen3-Medical-GRPO-GGUF
|
prithivMLmods
| 2025-09-02T08:02:10Z | 0 | 2 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"text-generation",
"en",
"base_model:lastmass/Qwen3_Medical_GRPO",
"base_model:quantized:lastmass/Qwen3_Medical_GRPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-09-02T06:03:40Z |
---
license: apache-2.0
language:
- en
base_model:
- lastmass/Qwen3_Medical_GRPO
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
---
# **Qwen3-Medical-GRPO-GGUF**
> Qwen3_Medical_GRPO is a specialized medical language model fine-tuned from the Qwen3 base using Supervised Fine-Tuning (SFT) and enhanced with Group Relative Policy Optimization (GRPO) to deliver advanced performance in clinical case analysis, differential diagnosis, and medical reasoning tasks. The model is designed to provide both detailed, step-by-step reasoning (chain-of-thought) and clear, structured final answers, enabling greater transparency and reliability for healthcare professionals and research applications. By separating its internal analysis from synthesized conclusions, Qwen3_Medical_GRPO allows users to trace the logic behind clinical recommendations, optimizing accuracy and trustworthiness in complex medical scenarios.
## Model Files
| File Name | Quant Type | File Size |
| - | - | - |
| Qwen3-Medical-GRPO.BF16.gguf | BF16 | 8.05 GB |
| Qwen3-Medical-GRPO.F16.gguf | F16 | 8.05 GB |
| Qwen3-Medical-GRPO.F32.gguf | F32 | 16.1 GB |
| Qwen3-Medical-GRPO.Q2_K.gguf | Q2_K | 1.67 GB |
| Qwen3-Medical-GRPO.Q3_K_L.gguf | Q3_K_L | 2.24 GB |
| Qwen3-Medical-GRPO.Q3_K_M.gguf | Q3_K_M | 2.08 GB |
| Qwen3-Medical-GRPO.Q3_K_S.gguf | Q3_K_S | 1.89 GB |
| Qwen3-Medical-GRPO.Q4_K_M.gguf | Q4_K_M | 2.5 GB |
| Qwen3-Medical-GRPO.Q4_K_S.gguf | Q4_K_S | 2.38 GB |
| Qwen3-Medical-GRPO.Q5_K_M.gguf | Q5_K_M | 2.89 GB |
| Qwen3-Medical-GRPO.Q5_K_S.gguf | Q5_K_S | 2.82 GB |
| Qwen3-Medical-GRPO.Q6_K.gguf | Q6_K | 3.31 GB |
| Qwen3-Medical-GRPO.Q8_0.gguf | Q8_0 | 4.28 GB |
## Quants Usage
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

|
Yagaoo/Qwen3-1.7B
|
Yagaoo
| 2025-09-02T08:01:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2505.09388",
"base_model:Qwen/Qwen3-1.7B-Base",
"base_model:finetune:Qwen/Qwen3-1.7B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T07:11:39Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-1.7B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-1.7B-Base
---
# Qwen3-1.7B
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significant enhancement of its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-1.7B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 1.7B
- Number of Parameters (Non-Embedding): 1.4B
- Number of Layers: 28
- Number of Attention Heads (GQA): 16 for Q and 8 for KV
- Context Length: 32,768
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
> [!TIP]
> If you encounter significant endless repetitions, please refer to the [Best Practices](#best-practices) section for optimal sampling parameters, and set the ``presence_penalty`` to 1.5.
## Quickstart
The code for Qwen3 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following contains a code snippet illustrating how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-1.7B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-1.7B --reasoning-parser qwen3
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-1.7B --enable-reasoning --reasoning-parser deepseek_r1
```
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-1.7B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic abilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-1.7B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
# # Add: When the response content is `<think>this is the thought</think>this is the answer;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters** (a minimal `generate` call using the thinking-mode settings is sketched after this list):
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
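For reference, here is a minimal sketch of how the thinking-mode sampling parameters map onto a `generate` call using the quickstart objects above (this assumes a recent `transformers` release in which `min_p` is accepted as a generation argument):
```python
# Thinking mode: Temperature=0.6, TopP=0.95, TopK=20, MinP=0; never use greedy decoding
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,   # sampling instead of greedy decoding
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
```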
### Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3technicalreport,
title={Qwen3 Technical Report},
author={Qwen Team},
year={2025},
eprint={2505.09388},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.09388},
}
```
|
Darshan1101/llama-finetuned-recruitment-1
|
Darshan1101
| 2025-09-02T08:01:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-02T08:00:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TohanBoss/blockassist-bc-regal_spotted_pelican_1756799994
|
TohanBoss
| 2025-09-02T08:01:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal spotted pelican",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T08:00:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal spotted pelican
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Snarcy/RedDino-base
|
Snarcy
| 2025-09-02T07:58:51Z | 51 | 0 |
timm
|
[
"timm",
"pytorch",
"safetensors",
"red-blood-cells",
"hematology",
"medical-imaging",
"vision-transformer",
"dino",
"dinov2",
"feature-extraction",
"foundation-model",
"image-feature-extraction",
"dataset:Elsafty",
"dataset:Chula",
"dataset:DSE",
"arxiv:2508.08180",
"license:cc-by-4.0",
"model-index",
"region:us"
] |
image-feature-extraction
| 2025-02-26T12:33:36Z |
---
datasets:
- Elsafty
- Chula
- DSE
library_name: timm
license: cc-by-4.0
pipeline_tag: image-feature-extraction
tags:
- red-blood-cells
- hematology
- medical-imaging
- vision-transformer
- dino
- dinov2
- feature-extraction
- foundation-model
model-index:
- name: RedDino-base
results:
- task:
type: image-classification
name: RBC Shape Classification
dataset:
name: Elsafty
type: Classification
metrics:
- type: Weighted F1
value: 88.1
- type: Balanced Accuracy
value: 89.3
- type: Accuracy
value: 88.2
- type: Weighted F1
value: 83.8
- type: Balanced Accuracy
value: 78.6
- type: Accuracy
value: 83.8
- type: Weighted F1
value: 85.9
- type: Balanced Accuracy
value: 57.9
- type: Accuracy
value: 86.0
---
# RedDino-base
**RedDino** is a self-supervised Vision Transformer foundation model specifically designed for **red blood cell (RBC)** image analysis.
It leverages a tailored version of the **DINOv2** framework, trained on a meticulously curated dataset of **1.25 million RBC images** from diverse acquisition modalities and sources.
This model excels at extracting robust, general-purpose features for downstream hematology tasks such as **shape classification**, **morphological subtype recognition**, and **batch-effect–robust analysis**.
Unlike general-purpose models pretrained on natural images, RedDino incorporates hematology-specific augmentations, architectural tweaks, and RBC-tailored data preprocessing, enabling **state-of-the-art performance** on multiple RBC benchmarks.
> 🧠 Developed by [Luca Zedda](https://orcid.org/0009-0001-8488-1612), [Andrea Loddo](https://orcid.org/0000-0002-6571-3816), [Cecilia Di Ruberto](https://orcid.org/0000-0003-4641-0307), and [Carsten Marr](https://orcid.org/0000-0003-2154-4552)
> 🏥 University of Cagliari & Helmholtz Munich
> 📄 Preprint: [arXiv:2508.08180](https://arxiv.org/abs/2508.08180)
> 💻 Code: [https://github.com/Snarci/RedDino](https://github.com/Snarci/RedDino)
---
## Model Details
- **Architecture:** ViT-base, patch size 14
- **SSL framework:** DINOv2 (customized for RBC morphology)
- **Pretraining dataset:** 1.25M RBC images from 18 datasets
- **Embedding size:** 768
- **Applications:** RBC morphology classification, feature extraction, batch-effect–robust analysis
## Example Usage
```python
from PIL import Image
from torchvision import transforms
import timm
import torch
# Load model from Hugging Face Hub
model = timm.create_model("hf_hub:Snarcy/RedDino-base", pretrained=True)
model.eval()
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
# Load and preprocess image
image = Image.open("path/to/rbc_image.jpg").convert("RGB")
transform = transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
])
input_tensor = transform(image).unsqueeze(0).to(device)
# Extract features
with torch.no_grad():
embedding = model(input_tensor)
```
## 📝 Citation
If you use this model, please cite the following paper:
**RedDino: A foundation model for red blood cell analysis**
Luca Zedda, Andrea Loddo, Cecilia Di Ruberto, Carsten Marr — 2025
Preprint: arXiv:2508.08180. https://arxiv.org/abs/2508.08180
```bibtex
@misc{zedda2025reddinofoundationmodelred,
title={RedDino: A foundation model for red blood cell analysis},
author={Luca Zedda and Andrea Loddo and Cecilia Di Ruberto and Carsten Marr},
year={2025},
eprint={2508.08180},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2508.08180},
}
```
|
Snarcy/RedDino-small
|
Snarcy
| 2025-09-02T07:58:42Z | 28 | 0 |
timm
|
[
"timm",
"pytorch",
"safetensors",
"red-blood-cells",
"hematology",
"medical-imaging",
"vision-transformer",
"dino",
"dinov2",
"foundation-model",
"image-feature-extraction",
"dataset:Elsafty",
"dataset:Chula",
"dataset:DSE",
"arxiv:2508.08180",
"license:cc-by-4.0",
"model-index",
"region:us"
] |
image-feature-extraction
| 2025-02-26T08:35:37Z |
---
datasets:
- Elsafty
- Chula
- DSE
library_name: timm
license: cc-by-4.0
pipeline_tag: image-feature-extraction
tags:
- red-blood-cells
- hematology
- medical-imaging
- vision-transformer
- dino
- dinov2
- foundation-model
model-index:
- name: RedDino-small
results:
- task:
type: image-classification
name: RBC Shape Classification
dataset:
name: Elsafty
type: Classification
metrics:
- type: Weighted F1
value: 86.0
- type: Balanced Accuracy
value: 87.2
- type: Accuracy
value: 86.2
- type: Weighted F1
value: 84.3
- type: Balanced Accuracy
value: 78.5
- type: Accuracy
value: 84.4
- type: Weighted F1
value: 84.9
- type: Balanced Accuracy
value: 56.5
- type: Accuracy
value: 84.9
---
# RedDino: A foundation model for red blood cell analysis
[📄 Paper](https://arxiv.org/abs/2508.08180) | [💻 Code](https://github.com/Snarci/RedDino)
**RedDino** is a self-supervised Vision Transformer foundation model specifically designed for **red blood cell (RBC)** image analysis. This variant, **RedDino-small**, is the compact model in the family, delivering strong performance with lighter computational cost.
It leverages a tailored version of the **DINOv2** framework, trained on a meticulously curated dataset of 1.25 million RBC images from diverse acquisition modalities and sources. The model excels at extracting robust features for downstream hematology tasks such as **shape classification**, **morphological subtype recognition**, and **batch-effect–robust analysis**.
---
## Model Details
- **Architecture:** ViT-small, patch size 14
- **SSL framework:** DINOv2 (customized for RBC morphology)
- **Pretraining dataset:** Curated RBC images from 18 datasets (multiple modalities and sources)
- **Embedding size:** 384
- **Intended use:** RBC morphology classification, feature extraction, batch-effect–robust analysis
Notes:
- Trained with RBC-specific augmentations and DINOv2 customizations (e.g., removal of KoLeo regularizer; Sinkhorn-Knopp centering).
- Optimized using smear patches rather than only single-cell crops to improve generalization across sources.
## Example Usage
```python
from PIL import Image
from torchvision import transforms
import timm
import torch
# Load model from Hugging Face Hub
model = timm.create_model("hf_hub:Snarcy/RedDino-small", pretrained=True)
model.eval()
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
# Load and preprocess image
image = Image.open("path/to/rbc_image.jpg").convert("RGB")
transform = transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
])
input_tensor = transform(image).unsqueeze(0).to(device)
# Extract features
with torch.no_grad():
embedding = model(input_tensor)
```
## 📝 Citation
If you use this model, please cite the following paper:
**RedDino: A foundation model for red blood cell analysis**
Luca Zedda, Andrea Loddo, Cecilia Di Ruberto, Carsten Marr — 2025
Preprint: arXiv:2508.08180. https://arxiv.org/abs/2508.08180
```bibtex
@misc{zedda2025reddinofoundationmodelred,
title={RedDino: A foundation model for red blood cell analysis},
author={Luca Zedda and Andrea Loddo and Cecilia Di Ruberto and Carsten Marr},
year={2025},
eprint={2508.08180},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2508.08180},
}
```
---
## Summary
RedDino is the first family of foundation models tailored for comprehensive red blood cell image analysis, using large-scale self-supervised learning to set new performance benchmarks and generalization standards for computational hematology. Models and pretrained weights are available for research and practical deployment.
|
Snarcy/RedDino-large
|
Snarcy
| 2025-09-02T07:58:30Z | 25 | 1 |
timm
|
[
"timm",
"pytorch",
"safetensors",
"red-blood-cells",
"hematology",
"medical-imaging",
"vision-transformer",
"dino",
"dinov2",
"feature-extraction",
"foundation-model",
"image-feature-extraction",
"dataset:Elsafty",
"dataset:Chula",
"dataset:DSE",
"arxiv:2508.08180",
"license:cc-by-nc-4.0",
"model-index",
"region:us"
] |
image-feature-extraction
| 2025-02-26T12:40:13Z |
---
datasets:
- Elsafty
- Chula
- DSE
library_name: timm
license: cc-by-nc-4.0
pipeline_tag: image-feature-extraction
tags:
- red-blood-cells
- hematology
- medical-imaging
- vision-transformer
- dino
- dinov2
- feature-extraction
- foundation-model
model-index:
- name: RedDino-large
results:
- task:
type: image-classification
name: RBC Shape Classification
dataset:
name: Elsafty
type: Classification
metrics:
- type: Weighted F1
value: 88.5
- type: Balanced Accuracy
value: 89.1
- type: Accuracy
value: 88.4
- type: Weighted F1
value: 83.9
- type: Balanced Accuracy
value: 79.0
- type: Accuracy
value: 85.0
- type: Weighted F1
value: 86.6
- type: Balanced Accuracy
value: 60.1
- type: Accuracy
value: 86.6
---
# RedDino: A Foundation Model for Red Blood Cell Analysis
**RedDino** is a self-supervised Vision Transformer foundation model specifically designed for **red blood cell (RBC)** image analysis, as presented in the paper [RedDino: A foundation model for red blood cell analysis](https://arxiv.org/abs/2508.08180).
It leverages a tailored version of the **DINOv2** framework, trained on a meticulously curated dataset of **1.25 million RBC images** from diverse acquisition modalities and sources. This model excels at extracting robust, general-purpose features for downstream hematology tasks such as **shape classification**, **morphological subtype recognition**, and **batch-effect–robust analysis**.
Unlike general-purpose models pretrained on natural images, RedDino incorporates hematology-specific augmentations, architectural tweaks, and RBC-tailored data preprocessing, enabling **state-of-the-art performance** on multiple RBC benchmarks.
> 🧠 Developed by [Luca Zedda](https://orcid.org/0009-0001-8488-1612), [Andrea Loddo](https://orcid.org/0000-0002-6571-3816), [Cecilia Di Ruberto](https://orcid.org/0000-0003-4641-0307), and [Carsten Marr](https://orcid.org/0000-0003-2154-4552)
> 🏥 University of Cagliari & Helmholtz Munich
> 📄 Preprint: [arXiv:2508.08180](https://arxiv.org/abs/2508.08180)
> 💻 Code: [https://github.com/Snarci/RedDino](https://github.com/Snarci/RedDino)
---
## Model Details
- **Architecture:** ViT-large, patch size 14
- **SSL framework:** DINOv2 (customized for RBC morphology)
- **Pretraining dataset:** Curated RBC images from 18 datasets (multiple modalities and sources)
- **Embedding size:** 1024
- **Intended use:** RBC morphology classification, feature extraction, batch-effect–robust analysis
Notes:
- RBC-specific training strategy including removal of KoLeo regularizer and Sinkhorn-Knopp centering.
- Training on smear patches (not only single cells) to enhance cross-source generalization.
## Example Usage
```python
from PIL import Image
from torchvision import transforms
import timm
import torch
# Load model from Hugging Face Hub
model = timm.create_model("hf_hub:Snarcy/RedDino-large", pretrained=True)
model.eval()
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
# Load and preprocess image
image = Image.open("path/to/rbc_image.jpg").convert("RGB")
transform = transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
])
input_tensor = transform(image).unsqueeze(0).to(device)
# Extract features
with torch.no_grad():
embedding = model(input_tensor)
```
## Model Variants
RedDino comes in three sizes to suit different computational requirements and performance needs:
| Model Variant | Embedding Size | Parameters | Usage |
|---------------|----------------|------------|--------|
| **RedDino-small** | 384 | 22M | `timm.create_model("hf_hub:Snarcy/RedDino-small", pretrained=True)` |
| **RedDino-base** | 768 | 86M | `timm.create_model("hf_hub:Snarcy/RedDino-base", pretrained=True)` |
| **RedDino-large** | 1024 | 304M | `timm.create_model("hf_hub:Snarcy/RedDino-large", pretrained=True)` |
Choose the variant that best fits your computational budget and performance requirements. Larger models generally provide richer feature representations at the cost of increased computational overhead.
---
## Benchmark Results
RedDino was benchmarked on major RBC classification datasets—including Elsafty, Chula, and DSE—outperforming state-of-the-art baselines such as ResNet50, DinoBloom, and DINOv2.
| Model | Dataset | Metric | Linear Probing (wF1) | 1-NN (wF1) | 20-NN (wF1) |
|-------------------|-----------|-------------|----------------------|------------|-------------|
| ResNet50 | Elsafty | Weighted F1 | 77.6 ± 8.1 | 64.3 ± 4.8 | 66.2 ± 4.9 |
| DinoBloom-S | Elsafty | Weighted F1 | 83.2 ± 8.2 | 73.1 ± 5.1 | 76.5 ± 4.2 |
| DINOv2 (small) | Elsafty | Weighted F1 | 82.1 ± 8.2 | 73.5 ± 4.8 | 77.2 ± 4.6 |
| RedDino small | Elsafty | Weighted F1 | 86.0 ± 7.0 | 76.8 ± 4.9 | 80.0 ± 4.5 |
| RedDino base | Elsafty | Weighted F1 | 88.1 ± 4.9 | 78.8 ± 3.6 | 82.6 ± 2.8 |
| RedDino large | Elsafty | Weighted F1 | 88.5 ± 5.5 | 78.5 ± 4.6 | 81.6 ± 4.7 |
On Chula and DSE datasets, RedDino consistently surpassed all other models in feature quality (linear probing) with average improvements of 2–4% over prior approaches in key metrics.
---
## Highlights
- **Foundation model** for RBC analysis trained on the largest available multi-source RBC image set: 1.25M+ images, using advanced CellPose-based instance segmentation and patch extraction.
- **DINOv2-based self-supervised learning** for label-efficient pretraining and robust, transferable features.
- **Model architecture and key innovations**:
- Patch-based training (224×224 px) shown to outperform single-cell training.
- Novel data augmentation via Albumentations (32 pixel-level strategies).
- Removal of the Koleo regularizer and adoption of Sinkhorn-Knopp centering for improved representation in RBC-specific domains.
- Suite of models (small, base, large) covering 22M–304M parameters.
- **Generalization**: Strong adaptation across varied protocols, microscopes, and imaging sites. Demonstrated resistance to batch effects and out-of-domain variance.
- **Interpretability tools**: PCA/UMAP visualizations reveal clustering by phenotype and batch, distinguishing abnormal cells (e.g., malaria, echinocytes).
- **Easy deployment**: Models and code are available on [GitHub](https://github.com/Snarci/RedDino) and [Hugging Face](https://huggingface.co/collections/Snarcy/reddino-689a13e29241d2e5690202fc).
---
## 📝 Citation
If you use this model, please cite the following paper:
**RedDino: A foundation model for red blood cell analysis**
Luca Zedda, Andrea Loddo, Cecilia Di Ruberto, Carsten Marr — 2025
Preprint: arXiv:2508.08180. https://arxiv.org/abs/2508.08180
```bibtex
@misc{zedda2025reddinofoundationmodelred,
title={RedDino: A foundation model for red blood cell analysis},
author={Luca Zedda and Andrea Loddo and Cecilia Di Ruberto and Carsten Marr},
year={2025},
eprint={2508.08180},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2508.08180},
}
```
---
## Summary
RedDino is the first family of foundation models tailored for comprehensive red blood cell image analysis, using large-scale self-supervised learning to set new performance benchmarks and generalization standards for computational hematology. Models and pretrained weights are available for research and practical deployment.
|
RikiyaT/mxbai-ettin-32m-hotpot-rlhn-ft
|
RikiyaT
| 2025-09-02T07:58:21Z | 0 | 0 | null |
[
"safetensors",
"modernbert",
"license:mit",
"region:us"
] | null | 2025-09-02T05:24:26Z |
---
license: mit
---
# RikiyaT/mxbai-ettin-32m-hotpot-rlhn-ft
Ettin + AnglE fine-tuned embedding model.
- **Base Model**: `RikiyaT/mxbai-ettin-32m-pretrained`
- **Pooling Strategy**: `mean` (avg)
- **Training Method**: AnglE loss (ibn/cln + angle=0.02) on a B-format dataset (text, positive, negative).
- **Data Prompts**: `search_query:` / `search_document:` were used during training data creation.
## Usage
### With SentenceTransformers (recommended)
A ready-to-use SentenceTransformers variant is available at **[RikiyaT/mxbai-ettin-32m-hotpot-rlhn-ft-st](https://huggingface.co/RikiyaT/mxbai-ettin-32m-hotpot-rlhn-ft-st)**.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('RikiyaT/mxbai-ettin-32m-hotpot-rlhn-ft-st')
sentences = ["This is an example sentence", "Each sentence is converted"]
embeddings = model.encode(sentences)
print(embeddings.shape)
```
### With Transformers (this repository)
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("RikiyaT/mxbai-ettin-32m-hotpot-rlhn-ft", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("RikiyaT/mxbai-ettin-32m-hotpot-rlhn-ft", trust_remote_code=True)
```
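Since the raw Transformers model returns token-level hidden states, embeddings need to be pooled manually. Below is a minimal sketch of mean pooling combined with the `search_query:` / `search_document:` prefixes noted above; the helper function and example texts are illustrative assumptions, not part of the repository:
```python
import torch

def embed(texts, prefix="search_query: "):
    # Prepend the prompt prefix that was used when the training data was created
    batch = tokenizer([prefix + t for t in texts],
                      padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state      # (batch, seq_len, dim)
    # Mean pooling over non-padding tokens, matching the card's "mean (avg)" pooling
    mask = batch["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

query_emb = embed(["what causes rainbows"], prefix="search_query: ")
doc_emb = embed(["Rainbows form when sunlight is refracted by water droplets."],
                prefix="search_document: ")
print(torch.nn.functional.cosine_similarity(query_emb, doc_emb))
```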
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1756798324
|
lisaozill03
| 2025-09-02T07:58:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:58:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756799854
|
omerbektass
| 2025-09-02T07:57:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:57:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756799705
|
liukevin666
| 2025-09-02T07:56:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:56:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sekirr/blockassist-bc-masked_tenacious_whale_1756799716
|
sekirr
| 2025-09-02T07:55:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:55:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756799608
|
bah63843
| 2025-09-02T07:54:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:54:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
TohanBoss/blockassist-bc-regal_spotted_pelican_1756799536
|
TohanBoss
| 2025-09-02T07:54:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal spotted pelican",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:53:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal spotted pelican
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756799603
|
akirafudo
| 2025-09-02T07:53:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:53:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
agosht/blockassist-bc-hunting_grassy_swan_1756798616
|
agosht
| 2025-09-02T07:52:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hunting grassy swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:52:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hunting grassy swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hadesgo/kontext_loras
|
hadesgo
| 2025-09-02T07:52:24Z | 0 | 1 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-07-31T08:10:49Z |
---
license: apache-2.0
---
|
Flamehaven/CRoM-Context-Rot-Mitigation-EfficientLLM
|
Flamehaven
| 2025-09-02T07:50:23Z | 0 | 0 |
crom-efficientllm
|
[
"crom-efficientllm",
"rag",
"llm",
"retrieval",
"rerank",
"reranker",
"context-management",
"prompt-engineering",
"observability",
"python",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-02T07:43:24Z |
---
language: en
license: apache-2.0
library_name: crom-efficientllm
tags:
- rag
- llm
- retrieval
- rerank
- reranker
- context-management
- prompt-engineering
- observability
- python
---
# CRoM-Context-Rot-Mitigation--EfficientLLM: Context Reranking and Management for Efficient LLMs
<p align="left">
<a href="https://github.com/Flamehaven/CRoM-Context-Rot-Mitigation--EfficientLLM/actions">
<img alt="CI" src="https://img.shields.io/github/actions/workflow/status/Flamehaven/CRoM-Context-Rot-Mitigation--EfficientLLM/ci.yml?branch=main" />
</a>
<a href="#-benchmarks">
<img alt="Bench" src="https://img.shields.io/badge/benchmarks-ready-success" />
</a>
<a href="LICENSE">
<img alt="License" src="https://img.shields.io/badge/license-Apache%202.0-blue" />
</a>
<a href="https://github.com/Flamehaven/CRoM-Context-Rot-Mitigation--EfficientLLM/releases">
<img alt="Release" src="https://img.shields.io/github/v/release/Flamehaven/CRoM-Context-Rot-Mitigation--EfficientLLM?display_name=tag" />
</a>
<a href="CHANGELOG.md">
<img alt="Versioning" src="https://img.shields.io/badge/semver-0.2.x-lightgrey" />
</a>
<a href="https://github.com/Flamehaven/CRoM-Context-Rot-Mitigation--EfficientLLM/releases/latest">
<img alt="Wheel" src="https://img.shields.io/badge/wheel-available-success" />
</a>
</p>
**CRoM (Context Rot Mitigation)-EfficientLLM** is a Python toolkit designed to optimize the context provided to Large Language Models (LLMs). It provides a suite of tools to intelligently select, re-rank, and manage text chunks to fit within a model's context budget while maximizing relevance and minimizing performance drift.
This project is ideal for developers building RAG (Retrieval-Augmented Generation) pipelines who need to make the most of limited context windows.
## Key Features
* **Budget Packer:** Greedily packs the highest-scoring text chunks into a defined token budget using a stable sorting algorithm (a minimal sketch follows this list).
* **Hybrid Reranker:** Combines sparse (TF-IDF) and dense (Sentence-Transformers) retrieval scores for robust and high-quality reranking of documents.
* **Drift Estimator:** Monitors the semantic drift between sequential model responses using L2 or cosine distance with EWMA smoothing.
* **Observability:** Exposes Prometheus metrics for monitoring token savings and drift alerts in production.
* **Extensible Plugins:** Supports optional plugins for advanced reranking (`FlashRank`), compression (`LLMLingua`), and drift analysis (`Evidently`).
* **Comprehensive Benchmarking:** Includes a CLI for end-to-end pipeline evaluation, budget sweeps, and quality-vs-optimal analysis.
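The packing strategy is easy to picture with a short, self-contained sketch. This is not the CRoM-EfficientLLM API, just the greedy idea behind the Budget Packer: chunks are stable-sorted by score and admitted while they still fit the token budget (token counts here are approximated by whitespace splitting).
```python
# Illustrative sketch of greedy budget packing (not the CRoM-EfficientLLM API).
def pack_chunks(chunks, token_budget):
    """chunks: iterable of (score, text) pairs; returns packed texts and tokens used."""
    packed, used = [], 0
    # sorted() is stable, so equally-scored chunks keep their original order.
    for score, text in sorted(chunks, key=lambda c: c[0], reverse=True):
        cost = len(text.split())  # stand-in for a real tokenizer count
        if used + cost <= token_budget:
            packed.append(text)
            used += cost
    return packed, used

chunks = [
    (0.9, "highly relevant passage about the query"),
    (0.4, "loosely related background material"),
    (0.8, "key definition needed to answer"),
]
print(pack_chunks(chunks, token_budget=10))
```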
## Installation
Install the package directly from source using pip. For development, it's recommended to install in editable mode with the `[dev]` extras.
```bash
# Clone the repository
git clone https://github.com/Flamehaven/CRoM-Context-Rot-Mitigation--EfficientLLM.git
cd CRoM-Context-Rot-Mitigation--EfficientLLM
# Install in editable mode with development and plugin dependencies
pip install -e .[dev,plugins]
```
## Quickstart
### Demo
Run a simple, self-contained demonstration of the core components:
```bash
# Run the demo script
crom-demo demo
```
### CLI Benchmarking Examples
The package includes a powerful `crom-bench` CLI for evaluation.
```bash
# Default E2E (Search→Rerank→Pack→Mock LLM)
crom-bench e2e --budget 0.3
# Optional: High-precision configuration with plugins
crom-bench e2e --budget 0.3 \
--use-flashrank --flashrank-model ms-marco-TinyBERT-L-2-v2 \
--use-llmlingua --compress-ratio=0.6 \
--use-evidently
```
### Plotting
If `matplotlib` is installed (`pip install -e .[dev]`), you can save benchmark plots directly:
```bash
# Save budget sweep result plots
crom-bench sweep --save-plots
# Save DP-curve plots
crom-bench dp-curve --save-plots
```
## Release & Changelog
This project follows semantic versioning. For detailed changes, see the [**CHANGELOG.md**](CHANGELOG.md).
Releases are automated via GitHub Actions when a `v*` tag is pushed.
## License
This project is licensed under the Apache 2.0 License. See the [LICENSE](LICENSE) file for details.
|
TohanBoss/blockassist-bc-regal_spotted_pelican_1756799278
|
TohanBoss
| 2025-09-02T07:49:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal spotted pelican",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:49:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal spotted pelican
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756799325
|
bah63843
| 2025-09-02T07:49:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:49:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756799248
|
akirafudo
| 2025-09-02T07:47:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:47:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Eskender/mol-base-from-processed-2408
|
Eskender
| 2025-09-02T07:47:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-02T07:47:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756799130
|
omerbektass
| 2025-09-02T07:45:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:45:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bugeun/MyGemmaNPC
|
bugeun
| 2025-09-02T07:43:16Z | 12 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-01T04:21:51Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bugeun/MyGemmaNPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.1
- Transformers: 4.56.0
- Pytorch: 2.8.0.dev20250319+cu128
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
John6666/natural-noob-xl-eps-anime-furry-general-v40-sdxl
|
John6666
| 2025-09-02T07:42:15Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"furry",
"anthro",
"aesthetic",
"color",
"knowledge",
"accuracy",
"details",
"creative",
"merge",
"noobai",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-1.1",
"base_model:merge:Laxhar/noobai-XL-1.1",
"base_model:OnomaAIResearch/Illustrious-XL-v1.0",
"base_model:merge:OnomaAIResearch/Illustrious-XL-v1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-09-02T07:34:09Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- furry
- anthro
- aesthetic
- color
- knowledge
- accuracy
- details
- creative
- merge
- noobai
- illustrious
base_model:
- OnomaAIResearch/Illustrious-XL-v1.0
- Laxhar/noobai-XL-1.1
---
Original model is [here](https://civitai.com/models/1761682?modelVersionId=2173969).
This model was created by [DarkFawkes](https://civitai.com/user/DarkFawkes).
|
TohanBoss/blockassist-bc-regal_spotted_pelican_1756798845
|
TohanBoss
| 2025-09-02T07:41:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal spotted pelican",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:41:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal spotted pelican
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RikiyaT/mxbai-ettin-32m-nq-rlhn-ft-st
|
RikiyaT
| 2025-09-02T07:39:41Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"modernbert",
"sentence-similarity",
"feature-extraction",
"dense",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-02T04:44:38Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 7999 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 7999, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("RikiyaT/mxbai-ettin-32m-nq-rlhn-ft-st")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.4729, 0.1579],
# [0.4729, 1.0000, 0.1403],
# [0.1579, 0.1403, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.18
- Sentence Transformers: 5.1.0
- Transformers: 4.55.4
- PyTorch: 2.7.1+cu126
- Accelerate: 1.10.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
RikiyaT/mxbai-ettin-32m-nq-rlhn-ft
|
RikiyaT
| 2025-09-02T07:39:34Z | 0 | 0 | null |
[
"safetensors",
"modernbert",
"license:mit",
"region:us"
] | null | 2025-09-02T04:44:28Z |
---
license: mit
---
# RikiyaT/mxbai-ettin-32m-nq-rlhn-ft
Ettin + AnglE fine-tuned embedding model.
- **Base Model**: `RikiyaT/mxbai-ettin-32m-pretrained`
- **Pooling Strategy**: `mean` (avg)
- **Training Method**: AnglE loss (ibn/cln + angle=0.02) on a B-format dataset (text, positive, negative).
- **Data Prompts**: `search_query:` / `search_document:` were used during training data creation.
## Usage
### With SentenceTransformers (recommended)
A ready-to-use SentenceTransformers variant is available at **[RikiyaT/mxbai-ettin-32m-nq-rlhn-ft-st](https://huggingface.co/RikiyaT/mxbai-ettin-32m-nq-rlhn-ft-st)**.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('RikiyaT/mxbai-ettin-32m-nq-rlhn-ft-st')
sentences = ["This is an example sentence", "Each sentence is converted"]
embeddings = model.encode(sentences)
print(embeddings.shape)
```
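Since the training data was created with `search_query:` / `search_document:` prompts, it is presumably best to apply the same prefixes at retrieval time. A minimal sketch (the prefixes are carried over from the training setup described above, not a documented inference requirement):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('RikiyaT/mxbai-ettin-32m-nq-rlhn-ft-st')
query = "search_query: how do solar panels generate electricity?"
docs = [
    "search_document: Photovoltaic cells convert sunlight directly into electricity.",
    "search_document: The recipe calls for two cups of flour and one egg.",
]
# Cosine similarities between the query and each document.
scores = model.similarity(model.encode([query]), model.encode(docs))
print(scores)  # the first document should score higher
```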
### With Transformers (this repository)
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("RikiyaT/mxbai-ettin-32m-nq-rlhn-ft", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("RikiyaT/mxbai-ettin-32m-nq-rlhn-ft", trust_remote_code=True)
```
|
SPRINGLab/v2-shiksha-MT-nllb-3.3B
|
SPRINGLab
| 2025-09-02T07:39:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"seq2seq",
"translation",
"en-ta",
"en-te",
"en-mr",
"en-gu",
"en-hi",
"en-pa",
"en-bn",
"en-ml",
"en-kn",
"base_model:facebook/nllb-200-3.3B",
"base_model:finetune:facebook/nllb-200-3.3B",
"endpoints_compatible",
"region:us"
] |
translation
| 2025-09-02T07:30:49Z |
---
library_name: transformers
tags:
- seq2seq
- translation
- en-ta
- en-te
- en-mr
- en-gu
- en-hi
- en-pa
- en-bn
- en-ml
- en-kn
base_model: facebook/nllb-200-3.3B
---
# Model Card for v2-shiksha-MT-nllb-3.3B
This is a fine-tuned version of Meta's **NLLB-200-3.3B** model, adapted for high-quality translation between English and multiple Indic languages. The model was trained using the Parameter-Efficient Fine-Tuning (PEFT) method, specifically LoRA, making it efficient while maintaining high performance.
The fine-tuning was performed on a diverse, combined dataset consisting of both technical lectures (from the Shiksha dataset) and general domain text (from the BPCC dataset), making the model versatile for a range of translation tasks.
## Model Details
### Model Description
- **Developed by:** Samriddhi Kashyap, Advait Joglekar, S. Umesh
- **Model type:** `seq2seq` (Sequence-to-Sequence)
- **Language(s) (NLP):**
- English (`eng_Latn`)
- Tamil (`tam_Taml`)
- Telugu (`tel_Telu`)
- Marathi (`mar_Deva`)
- Gujarati (`guj_Gujr`)
- Hindi (`hin_Deva`)
- Punjabi (`pan_Guru`)
- Bengali (`ben_Beng`)
- Malayalam (`mal_Mlym`)
- Kannada (`kan_Knda`)
- **License:** **CC-BY-NC 4.0** (Creative Commons Attribution-NonCommercial 4.0 International)
- **Finetuned from model:** `facebook/nllb-200-3.3B`
### Model Sources
- **Repository:** `https://huggingface.co/SPRINGLab/v2-shiksha-MT-nllb-3.3B`
### Direct Use
This model is intended for direct use in translation tasks between English and the Indic languages it was trained on. It can be loaded using the `transformers` and `peft` libraries.
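A minimal sketch is shown below. It assumes the repository hosts LoRA adapters for the `facebook/nllb-200-3.3B` base model; if the weights have been merged, loading the repository directly with `AutoModelForSeq2SeqLM` is sufficient.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base_id = "facebook/nllb-200-3.3B"
repo_id = "SPRINGLab/v2-shiksha-MT-nllb-3.3B"

tokenizer = AutoTokenizer.from_pretrained(base_id, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, repo_id)  # attach the fine-tuned adapters (assumption)

text = "The transformer architecture relies on self-attention."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(
    **inputs,
    # Force the decoder to start with the Hindi language token.
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("hin_Deva"),
    max_length=128,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```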
## Training Details
### Training Data
The model was fine-tuned on a concatenation of two datasets:
1. **Shiksha (Technical Domain):** A dataset containing parallel text from technical lectures.
- Dataset ID: `Samriddhikay/combined_netpx_shiksha_v2`
2. **BPCC (General Domain):** A cleaned dataset of general-purpose text.
- Dataset ID: `SPRINGLab/BPCC_cleaned`
The combined dataset contains **1,067,313** training samples. Invalid or empty samples were filtered out before training.
### Training Procedure
#### Preprocessing
The text was tokenized using the `NllbTokenizerFast`. For each `(source, target)` pair, the source and target language codes were set on the tokenizer to ensure correct multilingual tokenization. Sequences were padded and truncated to a maximum length of **400** tokens. The standard practice of replacing padding token IDs in the labels with `-100` was used to ignore them in the loss calculation.
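A minimal sketch of this preprocessing step (assuming the standard NLLB tokenizer and the 400-token limit described above; the column names `source`/`target` are placeholders):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/nllb-200-3.3B")

def preprocess(example, src="eng_Latn", tgt="hin_Deva", max_len=400):
    tok.src_lang, tok.tgt_lang = src, tgt
    batch = tok(example["source"], text_target=example["target"],
                max_length=max_len, truncation=True, padding="max_length")
    # Replace padding token ids in the labels with -100 so they are ignored by the loss.
    batch["labels"] = [l if l != tok.pad_token_id else -100 for l in batch["labels"]]
    return batch
```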
#### Training Hyperparameters
The model was trained using the `Seq2SeqTrainer` from the `transformers` library with the following settings:
- **Framework:** PEFT (LoRA)
- **`r` (LoRA rank):** 256
- **`lora_alpha`:** 512
- **`lora_dropout`:** 0.1
- **`use_rslora`:** True
- **Target Modules:** all-linear
- **Learning Rate:** 4e-5
- **Batch Size (per device):** 8
- **Gradient Accumulation Steps:** 4 (Effective batch size of 32 per device)
- **Optimizer:** Adafactor
- **Number of Epochs:** 5
- **Warmup Ratio:** 0.1
- **Weight Decay:** 0.01
- **Training regime:** `bf16 mixed precision`
### Model Architecture and Objective
This model is a standard Transformer-based sequence-to-sequence model (NLLB). The fine-tuning was performed using LoRA adapters, which inject trainable rank-decomposition matrices into the specified modules of the base model, significantly reducing the number of trainable parameters. The model was trained with a standard text-to-text language modeling objective.
## Authors
Samriddhi Kashyap, Advait Joglekar, S. Umesh
|
hnv2520/LNG_Qwen2.5VL_32B_500st_4b
|
hnv2520
| 2025-09-02T07:38:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
image-to-text
| 2025-09-02T07:28:47Z |
---
base_model: unsloth/qwen2.5-vl-32b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** hnv2520
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-vl-32b-instruct-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
TohanBoss/blockassist-bc-regal_spotted_pelican_1756798320
|
TohanBoss
| 2025-09-02T07:33:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal spotted pelican",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:33:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal spotted pelican
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756798378
|
bah63843
| 2025-09-02T07:33:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:33:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnerYubo/blockassist-bc-stalking_tawny_warthog_1756798405
|
AnerYubo
| 2025-09-02T07:33:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stalking tawny warthog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:33:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stalking tawny warthog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
csikasote/mms-1b-all-swagen-combined-15hrs-42-DAT
|
csikasote
| 2025-09-02T07:33:13Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"swagen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-02T07:13:09Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- swagen
- mms
- generated_from_trainer
metrics:
- wer
model-index:
- name: mms-1b-all-swagen-combined-15hrs-42-DAT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-swagen-combined-15hrs-42-DAT
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the SWAGEN - SWA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2973
- Wer: 0.2181
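For a quick transcription test, a minimal sketch (assuming the checkpoint loads as a standard Wav2Vec2 CTC model and the audio is a 16 kHz mono recording; `sample.wav` is a placeholder path):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/mms-1b-all-swagen-combined-15hrs-42-DAT",
)
print(asr("sample.wav"))
```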
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 7.1667 | 0.1594 | 200 | 2.3685 | 1.0 |
| 1.7283 | 0.3189 | 400 | 0.3324 | 0.2126 |
| 1.307 | 0.4783 | 600 | 0.3190 | 0.2146 |
| 1.2251 | 0.6377 | 800 | 0.2974 | 0.2180 |
| 1.2202 | 0.7971 | 1000 | 0.3091 | 0.2224 |
| 1.1953 | 0.9566 | 1200 | 0.3171 | 0.2246 |
| 1.1552 | 1.1156 | 1400 | 0.3280 | 0.2298 |
| 1.1595 | 1.2750 | 1600 | 0.3137 | 0.2345 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
ChenWu98/numina_qwen_2.5_sft_combine_v2_source_anneal_split_0
|
ChenWu98
| 2025-09-02T07:32:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:ChenWu98/numina_qwen_2.5_sft_combine_v2_identical_split_0",
"base_model:finetune:ChenWu98/numina_qwen_2.5_sft_combine_v2_identical_split_0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-02T07:31:51Z |
---
base_model: ChenWu98/numina_qwen_2.5_sft_combine_v2_identical_split_0
library_name: transformers
model_name: numina_qwen_2.5_sft_combine_v2_source_anneal_split_0
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for numina_qwen_2.5_sft_combine_v2_source_anneal_split_0
This model is a fine-tuned version of [ChenWu98/numina_qwen_2.5_sft_combine_v2_identical_split_0](https://huggingface.co/ChenWu98/numina_qwen_2.5_sft_combine_v2_identical_split_0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/nynzl3xz)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
GroomerG/blockassist-bc-vicious_pawing_badger_1756796923
|
GroomerG
| 2025-09-02T07:31:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vicious pawing badger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:31:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vicious pawing badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbkts/blockassist-bc-keen_fast_giraffe_1756798197
|
omerbkts
| 2025-09-02T07:30:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:30:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
StarFighter12/GLM-Steam-106B-A12B-v1-GGUF
|
StarFighter12
| 2025-09-02T07:30:20Z | 21 | 0 | null |
[
"gguf",
"base_model:TheDrummer/GLM-Steam-106B-A12B-v1",
"base_model:quantized:TheDrummer/GLM-Steam-106B-A12B-v1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-31T19:33:40Z |
---
base_model:
- TheDrummer/GLM-Steam-106B-A12B-v1
---
TheDrummer's GLM Steam, quantized using ik_llama.cpp.
This is my first attempt at quantizing something "on my own".
I tried using both bartowski's and mradermacher's imatrix files but wasn't able to use either of them, so I had to make one myself by following the guide (skill issue).
This quant requires the ik_llama.cpp fork to work properly.
I followed ubergarm's basic quant-cooking guide, but since I had no idea what I was doing, I just copied his recipes and applied them to TheDrummer's model.
I also used general calibration data instead of RP-focused data, so performance may suffer a bit.
Feel free to roast me if I messed something up (which I certainly did).
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756798084
|
akirafudo
| 2025-09-02T07:28:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:28:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
2hpsatt/blockassist-bc-huge_deft_eagle_1756797978
|
2hpsatt
| 2025-09-02T07:27:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"huge deft eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:27:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge deft eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Austral-70B-Preview-GGUF
|
mradermacher
| 2025-09-02T07:27:00Z | 15 | 0 |
transformers
|
[
"transformers",
"gguf",
"roleplay",
"finetune",
"axolotl",
"creative-writing",
"70B",
"llama",
"en",
"dataset:PocketDoc/Dans-Personamaxx-VN",
"dataset:NewEden/LIMARP-Complexity",
"dataset:NewEden/PIPPA-Mega-Filtered",
"dataset:NewEden/OpenCAI-ShareGPT",
"dataset:NewEden/Creative_Writing-Complexity",
"dataset:NewEden/Light-Novels-Roleplay-Logs-Books-Oh-My-duplicate-turns-removed",
"dataset:PocketDoc/Dans-Failuremaxx-Adventure-3",
"dataset:NewEden/Books-V2-ShareGPT",
"dataset:NewEden/Deepseek-V3-RP-Filtered",
"dataset:NewEden/BlueSky-10K-Complexity",
"dataset:NewEden/Final-Alpindale-LNs-ShareGPT",
"dataset:NewEden/DeepseekRP-Filtered",
"dataset:NewEden/RP-logs-V2-Experimental",
"dataset:anthracite-org/kalo_opus_misc_240827",
"dataset:anthracite-org/kalo_misc_part2",
"dataset:NewEden/vanilla-backrooms-claude-sharegpt",
"dataset:NewEden/Storium-Prefixed-Clean",
"base_model:Delta-Vector/Austral-70B-Preview",
"base_model:quantized:Delta-Vector/Austral-70B-Preview",
"license:llama3.3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-01T17:13:28Z |
---
base_model: Delta-Vector/Austral-70B-Preview
datasets:
- PocketDoc/Dans-Personamaxx-VN
- NewEden/LIMARP-Complexity
- NewEden/PIPPA-Mega-Filtered
- NewEden/OpenCAI-ShareGPT
- NewEden/Creative_Writing-Complexity
- NewEden/Light-Novels-Roleplay-Logs-Books-Oh-My-duplicate-turns-removed
- PocketDoc/Dans-Failuremaxx-Adventure-3
- NewEden/Books-V2-ShareGPT
- NewEden/Deepseek-V3-RP-Filtered
- NewEden/BlueSky-10K-Complexity
- NewEden/Final-Alpindale-LNs-ShareGPT
- NewEden/DeepseekRP-Filtered
- NewEden/RP-logs-V2-Experimental
- anthracite-org/kalo_opus_misc_240827
- anthracite-org/kalo_misc_part2
- NewEden/vanilla-backrooms-claude-sharegpt
- NewEden/Storium-Prefixed-Clean
language:
- en
library_name: transformers
license: llama3.3
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- roleplay
- finetune
- axolotl
- creative-writing
- 70B
- llama
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Delta-Vector/Austral-70B-Preview
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Austral-70B-Preview-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Austral-70B-Preview-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
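For the multi-part quants listed below (Q6_K and Q8_0), the parts are byte splits of a single GGUF file and can simply be concatenated in order before use; a minimal sketch, assuming the two Q6_K part files are already in the current directory:
```python
# Join the downloaded parts into one GGUF file (order matters).
parts = [
    "Austral-70B-Preview.Q6_K.gguf.part1of2",
    "Austral-70B-Preview.Q6_K.gguf.part2of2",
]
with open("Austral-70B-Preview.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            while chunk := f.read(1 << 24):  # copy in 16 MiB chunks
                out.write(chunk)
```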
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Austral-70B-Preview-GGUF/resolve/main/Austral-70B-Preview.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Austral-70B-Preview-GGUF/resolve/main/Austral-70B-Preview.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Austral-70B-Preview-GGUF/resolve/main/Austral-70B-Preview.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Austral-70B-Preview-GGUF/resolve/main/Austral-70B-Preview.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Austral-70B-Preview-GGUF/resolve/main/Austral-70B-Preview.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Austral-70B-Preview-GGUF/resolve/main/Austral-70B-Preview.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Austral-70B-Preview-GGUF/resolve/main/Austral-70B-Preview.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Austral-70B-Preview-GGUF/resolve/main/Austral-70B-Preview.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Austral-70B-Preview-GGUF/resolve/main/Austral-70B-Preview.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Austral-70B-Preview-GGUF/resolve/main/Austral-70B-Preview.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Austral-70B-Preview-GGUF/resolve/main/Austral-70B-Preview.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Austral-70B-Preview-GGUF/resolve/main/Austral-70B-Preview.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Austral-70B-Preview-GGUF/resolve/main/Austral-70B-Preview.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Pothong/mistral-7b-nolora
|
Pothong
| 2025-09-02T07:25:42Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-21T09:37:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zaydzuhri/top-code-1.8B-4096-model
|
zaydzuhri
| 2025-09-02T07:25:41Z | 0 | 0 | null |
[
"safetensors",
"top_transformer",
"region:us"
] | null | 2025-09-02T07:14:58Z |
<div align="center">
# 🔥 Flame: Flash Linear Attention Made Easy
</div>
Welcome to 🔥 `flame`, a minimal and efficient framework built on `torchtitan` for training Flash Linear Attention (FLA) models (and more broadly, arbitrary autoregressive language models) with blazing efficiency.
**Feature Highlights:**
- 🚀 Minimal, easy-to-use, extensible training framework
- 🤗 Seamless integration with `fla` and `transformers`
- 🔄 Zero-cost data preprocessing: online tokenization, dataset shuffling, and multiple datasets support
- 🔮 4D parallelism (coming soon)
## Setup
To get started, clone the `flame` repository and install the required dependencies:
```bash
git clone https://github.com/fla-org/flame.git
cd flame
pip install .
```
`flame` manages minimal dependencies, only including `fla` and `torchtitan` as submodules.
After installation, initialize and update the submodules:
```sh
git submodule update --init --recursive
```
## Dataset Preparation
To download the dataset to your local disk, create a new Python file with the following content and execute it:
```py
from datasets import load_dataset
# load fineweb-edu with parallel processing
dataset = load_dataset("HuggingFaceFW/fineweb-edu", name="default", num_proc=64, cache_dir="/your/cache/path")
# or load a subset with roughly 100B tokens, suitable for small- or medium-sized experiments
dataset = load_dataset("HuggingFaceFW/fineweb-edu", name="sample-100BT", num_proc=64, cache_dir="/your/cache/path")
```
## Training Recipes
Here's an example of training a 340M FLA Transformer model with a LLaMA-like architecture from scratch on a 100BT subset of the Fineweb-edu corpus in streaming mode.
> [!WARNING]
> If the dataset is not downloaded beforehand, the streaming mode will attempt to fetch it from a remote server and download it on-the-fly, which can be highly unstable during training due to network issues.
> For stable training, ensure the dataset is downloaded locally (see [**Dataset Preparation**](#dataset-preparation)). Otherwise, we assume you are only testing the new corpus.
```sh
bash train.sh \
--job.config_file flame/models/fla.toml \
--job.dump_folder exp/transformer-340M-4K-10B/batch1.seqlen65536.context4096.warmup1024.update1.steps20480.lr3e-4.cosine \
--model.config configs/transformer_340M.json \
--model.tokenizer_path fla-hub/transformer-1.3B-100B \
--optimizer.name AdamW \
--optimizer.eps 1e-15 \
--optimizer.lr 3e-4 \
--lr_scheduler.warmup_steps 1024 \
--lr_scheduler.lr_min 0.1 \
--lr_scheduler.decay_type cosine \
--training.batch_size 1 \
--training.seq_len 65536 \
--training.context_len 4096 \
--training.varlen \
--training.gradient_accumulation_steps 1 \
--training.steps 20480 \
--training.max_norm 1.0 \
--training.skip_nan_inf \
--training.dataset HuggingFaceFW/fineweb-edu \
--training.dataset_name sample-100BT \
--training.dataset_split train \
--training.streaming \
--training.num_workers 32 \
--training.prefetch_factor 2 \
--training.seed 42 \
--training.compile \
--checkpoint.interval 2048 \
--checkpoint.load_step -1 \
--checkpoint.keep_latest_k 2 \
--metrics.log_freq 1
```
You can specify the number of GPUs by setting the environment variable `NGPU`, which defaults to 8.
**For single-GPU debugging, set `NGPU=1`.**
We provide several [config files](https://github.com/fla-org/flame/tree/main/configs) for different models.
By default, the learning rate is set to 3e-4 with a cosine scheduler. Other schedulers, such as WSD (wsd), are also supported.
**Key parameters:**
- `--lr_scheduler.decay_ratio`: The proportion of the steps allocated to the decay phase. The learning rate will remain stable after the warmup period and only start decaying during the last `decay_ratio` portion of the total training steps, which is known as the Warmup-Stable-Decay (WSD) schedule.
- `--lr_scheduler.warmup_steps`: The number of steps for the learning rate warmup phase.
- `--training.steps`: Total number of training steps.
- `--training.batch_size`: Batch size per device, must be 1 if `--training.varlen` is set.
- `--training.seq_len`: The length of each sequence in the batch, which is concatenated from multiple samples.
- `--training.context_len`: The max allowed length of a sample. For non-varlen mode, this is equivalent to `seq_len`.
- `--training.varlen`: Whether to conduct variable-length sequence training.
- `--training.gradient_accumulation_steps`: Number of gradient accumulation steps.
> [!WARNING]
> The total number of tokens processed per batch, referred to as `global_batch_size`, is calculated as batch_size × gradient_accumulation_steps × num_gpus.
> Each step processes `global_batch_size * seq_len` tokens.
> Monitor the value of `global_batch_size`, `warmup_steps`, and `steps` carefully when modifying any of the hyperparameters!
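> As a worked example, the recipe above runs with `--training.batch_size 1`, `--training.gradient_accumulation_steps 1`, and the default 8 GPUs, so `global_batch_size` = 1 × 1 × 8 = 8, and each step processes 8 × 65536 = 524,288 tokens.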
For a detailed explanation of all parameters, run:
```sh
bash train.sh -h
```
<details>
<summary>Usage</summary>
```text
options:
-h, --help show this help message and exit
--job.config_file JOB.CONFIG_FILE
Job config file
--job.dump_folder JOB.DUMP_FOLDER
Folder to dump job outputs
--job.description JOB.DESCRIPTION
Description of the job
--job.use_for_integration_test
Add this config to the integration test suite
--job.print_args Print the args to terminal
--model.config MODEL.CONFIG
Path to the model config
--model.norm_type MODEL.NORM_TYPE
Type of layer normalization to use [layernorm,
np_layernorm, rmsnorm, fused_rmsnorm]
--model.tokenizer_path MODEL.TOKENIZER_PATH
Tokenizer path
--profiling.enable_profiling
Whether to enable pytorch profiler
--profiling.save_traces_folder PROFILING.SAVE_TRACES_FOLDER
Trace files location
--profiling.profile_freq PROFILING.PROFILE_FREQ
How often to collect profiler traces, in iterations
--profiling.enable_memory_snapshot
Whether to dump memory snapshot
--profiling.save_memory_snapshot_folder PROFILING.SAVE_MEMORY_SNAPSHOT_FOLDER
Memeory snapshot files location
--optimizer.name OPTIMIZER.NAME
Optimizer to use
--optimizer.eps OPTIMIZER.EPS
Epsilon value for the optimizer.
--optimizer.fused Whether the fused implementation(CUDA only) is used.
--optimizer.scheduler {wsd,cosine,linear}
Scheduler to use. Currently supported: wsd, cosine,
and linear.
--optimizer.lr OPTIMIZER.LR
Learning rate to use
--optimizer.min_lr_ratio OPTIMIZER.MIN_LR_RATIO
Min lr ratio for lr scheduler
--optimizer.early_step_in_backward
Whether to apply optimizer in the backward. Caution,
optimizer_in_backward is not compatible with gradients
clipping, users should not call
register_post_accumulate_grad_hook after the optimizer
is built.
--training.batch_size TRAINING.BATCH_SIZE
Batch size
--training.seq_len TRAINING.SEQ_LEN
Sequence length
--training.context_len TRAINING.CONTEXT_LEN
Max length allowed for each sequence
--training.varlen Whether to take sequences of variable length as input
--training.warmup_steps TRAINING.WARMUP_STEPS
Steps for lr scheduler warmup, normally 1/5 of
--training.steps
--training.gradient_accumulation_steps TRAINING.GRADIENT_ACCUMULATION_STEPS
Number of steps to accumulate gradients before
updating parameters
--training.steps TRAINING.STEPS
How many train steps to run
--training.max_norm TRAINING.MAX_NORM
Max norm for gradient clipping
--training.skip_nan_inf
Skip batch updates when NaN or INF gradients are
encountered during training
--training.dataset TRAINING.DATASET
Dataset to use, with comma separated values
--training.dataset_name TRAINING.DATASET_NAME
The name of the dataset config, with comma separated
values if provided
--training.dataset_split TRAINING.DATASET_SPLIT
Dataset split to use, with comma separated values if
provided
--training.data_dir TRAINING.DATA_DIR
Data dirs to use, with comma separated values if
provided
--training.data_files TRAINING.DATA_FILES
Data files to use, with comma separated values if
provided
--training.data_probs TRAINING.DATA_PROBS
Data sampling probabilities, with comma separated
values if provided
--training.streaming Whether to load dataset in streaming mode, used for
huge dataset
--training.num_workers TRAINING.NUM_WORKERS
Number of subprocesses to use for data loading. 0
means that the data will be loaded in the main
process.
--training.prefetch_factor TRAINING.PREFETCH_FACTOR
Number of batches loaded in advance by each worker.2
means there will be a total of 2 * num_workers batches
prefetched across all workers.
--training.data_parallel_replicate_degree TRAINING.DATA_PARALLEL_REPLICATE_DEGREE
The `data_parallel_replicate_degree` argument
specifies the degree of data parallelism for weight
replication. When this value is greater than 1,
weights will be replicated across
`data_parallel_replicate_degree` ranks. If
`data_parallel_shard_degree` is also greater than 1,
the parallelism method used is HSDP (Hybrid Sharded
Data Parallelism). Otherwise, the parallelism method
used is DDP (Distributed Data Parallelism). 1 means
disabled.
--training.data_parallel_shard_degree TRAINING.DATA_PARALLEL_SHARD_DEGREE
The `data_parallel_shard_degree` argument specifies
the degree of data parallelism for weight sharding.
When this value is greater than 1, weights will be
sharded across `data_parallel_shard_degree` ranks. If
`data_parallel_replicate_degree` is also greater than
1, the parallelism method used is HSDP (Hybrid Sharded
Data Parallelism). Otherwise, the parallelism method
used is FSDP (Fully Sharded Data Parallelism). -1
means leftover ranks will be used (After
DP_REPLICATE/SP/PP). Note that only
`data_parallel_shard_degree` can be negative. 1 means
disabled.
--training.enable_cpu_offload
Whether to apply CPU offloading of parameters,
gradients, and optimizer states in FSDP
--training.tensor_parallel_degree TRAINING.TENSOR_PARALLEL_DEGREE
Tensor Parallelism degree. 1 means disabled.
--training.disable_loss_parallel
Whether to apply loss parallel when sequence parallel
is enabled
--training.mixed_precision_param {bfloat16,float32}
torch dtype to use for parameters when applying mixed
precision via FSDP. This feature only takes effect
when data_parallel_shard_degree > 1
--training.mixed_precision_reduce {float32}
torch dtype to use for reductions when applying mixed
precision via FSDP. This feature only takes effect
when data_parallel_shard_degree > 1
--training.compile Whether to compile the model
--training.gc_freq TRAINING.GC_FREQ
Python garbage control scheduling interval, in steps
--training.seed TRAINING.SEED
Choose the base RNG seed used for training
--training.deterministic
Use deterministic algorithms wherever possible, may be
slower
--metrics.log_freq METRICS.LOG_FREQ
How often to log metrics to TensorBoard, in iterations
--metrics.enable_tensorboard
Whether to log metrics to TensorBoard
--metrics.disable_color_printing
Whether to disable color printing in logs
--metrics.save_tb_folder METRICS.SAVE_TB_FOLDER
Folder to dump TensorBoard states
--metrics.rank_0_only
Whether to save TensorBoard metrics only for rank 0 or
for all ranks. When pipeline_parallel_degree is > 1,
this option uses the 0th rank of the last stage
pipeline group, which is the only stage that computes
loss metrics.
--metrics.enable_wandb
Whether to log metrics to Weights & Biases
--experimental.enable_async_tensor_parallel
Whether to apply async tensor parallel (currently only
effective when compile is enabled)
--experimental.pipeline_parallel_degree EXPERIMENTAL.PIPELINE_PARALLEL_DEGREE
Pipeline Parallelism degree, or number of ranks. 1
means disabled. If using looped schedules, this still
specifies the number of physical ranks, not the number
of stages. Stages per rank are inferred from split
points, degree, and schedule.
--experimental.pipeline_parallel_split_points EXPERIMENTAL.PIPELINE_PARALLEL_SPLIT_POINTS [EXPERIMENTAL.PIPELINE_PARALLEL_SPLIT_POINTS ...]
Specify comma-separated names of modules to use as the
beginning of a split point. e.g. "layers.0,layers.2"
will cause the model to be split into 3 stages, the
first containing all the layers up to layers.0, the
second containing layers.0 and up to layers.2, the
third containing layers.2 and all the remaining
layers. Note: fully-automated splitting may be enabled
in the future, but currently the split points must be
specified manually.
--experimental.pipeline_parallel_schedule EXPERIMENTAL.PIPELINE_PARALLEL_SCHEDULE
Specify the Pipeline Parallel schedule to use. The
supported schedules are: https://github.com/pytorch/py
torch/blob/de4c2a3b4e89d96334dc678d1c3f2ae51a6630a0/to
rch/distributed/pipelining/schedules.py#L2161. The
schedule must be compatible with the split points and
stages_per_rank. Looped schedules (e.g.
Interleaved1F1B) require specifying
pipeline_parallel_degree = number of ranks, and
split_points = number of stages - 1
--experimental.pipeline_parallel_schedule_csv EXPERIMENTAL.PIPELINE_PARALLEL_SCHEDULE_CSV
Specify the path to the pipeline parallel schedule csv
file to use. The pipeline_parallel_schedule argument
must be either PipelineScheduleSingle,
PipelineScheduleMulti, or _PipelineScheduleRuntime.
--experimental.pipeline_parallel_microbatches EXPERIMENTAL.PIPELINE_PARALLEL_MICROBATCHES
How many microbatches to split the global training
batch into when using pipeline parallelism. The global
training batch size must be evenly divisible by the
number of microbatches. The default value will be the
number of pipeline stages, if unspecified.
--experimental.enable_compiled_autograd
Enable CompiledAutograd to compile the backward.
--experimental.context_parallel_degree EXPERIMENTAL.CONTEXT_PARALLEL_DEGREE
Context parallelism degree. 1 means disabled.
--experimental.context_parallel_rotate_method EXPERIMENTAL.CONTEXT_PARALLEL_ROTATE_METHOD
The collective to use in context parallel SDPA for kv
shards exchange. 'allgather' means to all-gather all
kv shards on ranks after the first sub-SDPA
computation, 'alltoall' means to all-to-all shuffle
the kv shards. The default value is 'allgather'.
--checkpoint.enable_checkpoint
Whether to enable checkpoint
--checkpoint.folder CHECKPOINT.FOLDER
The folder to store the checkpoints. When
enable_checkpoint is set to true, checkpoints will be
in {--job.dump_folder}/{--checkpoint.folder}.
--checkpoint.interval_type CHECKPOINT.INTERVAL_TYPE
Checkpointing interval unit of measurement ['step',
'seconds']
--checkpoint.interval CHECKPOINT.INTERVAL
Checkpointing interval, in steps or seconds depending
on --checkpoint.interval_type
--checkpoint.model_weights_only
When model_weights_only=True, only model weights will
be saved at the end of training. With this,
checkpoints can be loaded using `torch.load(...,
weights_only=True)` after conversion. When
model_weights_only=False, the full checkpoint will be
saved. A full checkpoint includes model, optimizer and
train_state, which can be used to resume training. The
default value is false.
--checkpoint.export_dtype {float16,bfloat16,float32}
Converts to the specified precision when training
completes and model_weights_only=true. Currently
supports float32, float16, and bfloat16. The default
value is float32.
--checkpoint.create_seed_checkpoint
Initializes the full model without applying
parallelisms, and then saves it as a seed checkpoint.
Note: requires user to call train.py without
specifying any parallelisms, e.g. NGPU=1. Could be
implemented as a separate script, but this way shares
more code.
--checkpoint.async_mode CHECKPOINT.ASYNC_MODE
Which async checkpoint mode to use. Currently there
are 3 different modes. 1. "disabled": synchronized
checkpointing will be used. 2. "async":
torch.distributed.checkpoint.async_save will be used.
1. "async_with_pinned_mem": this option utilizes a
dedicated pinned memory space and creates a separate
process for faster GPU->CPU transfer performance and
eliminating GIL contention. The cost is increased CPU
memory usage. If insufficient CPU memory is available,
performance may degrade due to memory paging. For most
users, "async" should suffice as the performance
overhead is typically small (on the order of tens of
seconds) compared to checkpointing frequency. This
mode can be employed to pursue near-zero checkpointing
times (e.g., < 1 second) given appropriate hardware
support such as ample CPU memory and fast PCIe.
"disabled" is the default mode.
--checkpoint.keep_latest_k CHECKPOINT.KEEP_LATEST_K
Keeps only the latest k checkpoints, purging older
ones. If 0, keep all checkpoints. 0 is the default
value.
--checkpoint.load_step CHECKPOINT.LOAD_STEP
Load the checkpoint at the specified step. If -1, load
the latest checkpoint.
--float8.enable_float8_linear
If true, swaps `torch.nn.Linear` with `Float8Linear`.
This feature requires you to install 'torchao' which
can be found here: https://github.com/pytorch/ao
--float8.enable_fsdp_float8_all_gather
Whether enable float8 all-gather in FSDP
--float8.precompute_float8_dynamic_scale_for_fsdp
Whether precompute float8 scales dynamically for FSDP
--float8.scaling_type_input {dynamic,delayed}
float8 scaling for input, dynamic (default) or delayed
--float8.scaling_type_weight FLOAT8.SCALING_TYPE_WEIGHT
float8 scaling for weight, dynamic (default) or delayed
--float8.scaling_type_grad_output FLOAT8.SCALING_TYPE_GRAD_OUTPUT
float8 scaling for grad_output, dynamic (default) or delayed
--comm.init_timeout_seconds COMM.INIT_TIMEOUT_SECONDS
Timeout for communication operations, during
initialization and first train step.
--comm.train_timeout_seconds COMM.TRAIN_TIMEOUT_SECONDS
Timeout for communication operations after the first
train step -- usually a tighter bound than during
initialization.
--comm.trace_buf_size COMM.TRACE_BUF_SIZE
Flight recorder ring buffer size, >0 means recording
by default, 0 means disabled
--memory_estimation.enabled
Whether to estimate memory usage for FSDP
--memory_estimation.disable_fake_mode
Whether to estimate memory under FakeTensorMode
```
</details>
### Training with `torch.compile`
Starting with `torch 2.0`, `torch.compile` offers a way to seamlessly accelerate training.
In `flame`, you can enable `torch.compile` simply by adding the `--training.compile` flag to your training script.
However, `fla` integrates numerous fused kernels for acceleration, which may conflict with `torch.compile`.
We are actively working on resolving these issues to make compilation transparent to users.
In the meantime, please ensure you are using the latest dependencies.
Specifically, **we recommend using `torch>=2.6` and `triton>=3.0`**.
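As a minimal illustration, compilation is just one extra flag appended to your existing argument list (the dataset and step count below are placeholders; only `--training.compile` is the point here):

```sh
--training.dataset HuggingFaceFW/fineweb-edu \
--training.steps 20480 \
--training.compile \
```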
### Training with multiple datasets
If you wish to train a model with all-round capabilities (e.g., code, math, and multilingual ability), it's necessary to train on multiple datasets.
`flame` makes it easy to train on multiple datasets.
For example, you can specify the following arguments to train on 6 datasets with different proportions:
```sh
--training.dataset HuggingFaceFW/fineweb-edu,opencsg/Fineweb-Edu-Chinese-V2.1,OpenCoder-LLM/opc-fineweb-code-corpus,math-ai/AutoMathText,EleutherAI/proof-pile-2,OpenCoder-LLM/opc-fineweb-math-corpus \
--training.data_probs 0.6,0.15,0.15,0.014,0.058,0.028 \
```
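Note that in this example the sampling probabilities sum to 1.0, so each value can be read as (roughly) the fraction of samples drawn from the corresponding dataset.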
### ~Finalizing training~
> [!NOTE]
> We have done this conversion automatically in the training script since our latest updates.
Once training is complete, you may want to convert the distributed checkpoints (DCPs) into the 🤗 format for broader use.
To facilitate this, we provide a straightforward conversion script:
```sh
python -m flame.utils.convert_dcp_to_hf --path <path_to_model> --step <step> --config <path_to_config> --tokenizer <path_to_tokenizer>
```
After this, your model will be in the 🤗 format, ready to be shared or deployed.
You can then easily publish your model using the `huggingface_hub` for wider accessibility.
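As a minimal sketch of that last step (the repo id and local path below are placeholders), the converted folder can be pushed with `huggingface_hub`:

```python
from huggingface_hub import HfApi

api = HfApi()
# Create the target repo if it does not exist yet (placeholder repo id).
api.create_repo("your-username/your-model", repo_type="model", exist_ok=True)
# Upload the converted 🤗-format checkpoint directory (placeholder path).
api.upload_folder(
    folder_path="exp/converted-hf-checkpoint",
    repo_id="your-username/your-model",
    repo_type="model",
)
```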
### Continual training
If you wish to build upon a strong pre-trained model (in 🤗 format) and continue training, we also offer a script to convert the 🤗 format model back into DCP format.
This allows you to seamlessly resume training with `flame`.
```sh
python -m flame.utils.convert_hf_to_dcp --model <path_to_hf> --checkpoint <path_to_dcp/checkpoint/step-0>
```
Here, `<path_to_dcp>` is the directory where your distributed checkpoints will be stored.
The checkpoint is intentionally saved at `<step-0>` within the checkpoint folder to ensure it is loadable by `flame` during the initial training step, similar to how a seed checkpoint is handled.
Once the conversion is complete, you can proceed with training using `flame` as usual, continuing from where the pretrained model left off.
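A sketch of the subsequent launch, using only options listed in the help above (values are placeholders): point `--job.dump_folder` at `<path_to_dcp>` and enable checkpointing, so that the converted `step-0` checkpoint is found under `{--job.dump_folder}/{--checkpoint.folder}` and loaded on startup.

```sh
--job.dump_folder <path_to_dcp> \
--checkpoint.enable_checkpoint \
--checkpoint.folder checkpoint \
```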
## Multi-node training
If you have access to multiple GPU nodes, consider leveraging them for optimal performance.
This process is straightforward and well-documented in the PyTorch [docs](https://pytorch.org/docs/stable/elastic/run.html).
To set up multi-node training:
* Set the environment variables `MASTER_ADDR=<ip>` and `MASTER_PORT=<port>` before running the training script across all nodes.
* If you're using a job scheduler like Slurm, it will handle these variables for you.
`torchtitan` provides a [Slurm script](https://github.com/pytorch/torchtitan/blob/main/multinode_trainer.slurm) for multi-node training, which you can use as a reference or starting point.
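For a manual (non-Slurm) launch, a minimal sketch looks like the following; run it on every node with the same `MASTER_ADDR`/`MASTER_PORT` and a distinct `--node_rank`. The node count, GPU count, address, and training flags are placeholders.

```sh
# On every node (placeholders: 2 nodes, 8 GPUs each; node_rank is 0 on the master node, 1 on the other)
export MASTER_ADDR=192.168.0.1
export MASTER_PORT=29500
torchrun --nnodes 2 --nproc_per_node 8 --node_rank 0 \
  --master_addr $MASTER_ADDR --master_port $MASTER_PORT \
  train.py --training.steps 20480   # plus your usual training flags
```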
|
Rudra-madlads/blockassist-bc-jumping_swift_gazelle_1756797888
|
Rudra-madlads
| 2025-09-02T07:25:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"jumping swift gazelle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:25:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- jumping swift gazelle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
TohanBoss/blockassist-bc-regal_spotted_pelican_1756797838
|
TohanBoss
| 2025-09-02T07:25:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal spotted pelican",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:25:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal spotted pelican
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
amphion/TaDiCodec-TTS-MGM
|
amphion
| 2025-09-02T07:25:02Z | 32 | 2 |
transformers
|
[
"transformers",
"safetensors",
"MGMT2S",
"Speech-Tokenizer",
"Text-to-Speech",
"text-to-speech",
"en",
"zh",
"ja",
"fr",
"de",
"ko",
"arxiv:2508.16790",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2025-08-22T21:05:45Z |
---
language:
- en
- zh
- ja
- fr
- de
- ko
library_name: transformers
license: apache-2.0
pipeline_tag: text-to-speech
tags:
- Speech-Tokenizer
- Text-to-Speech
---
# TaDiCodec: Text-aware Diffusion Speech Tokenizer for Speech Language Modeling
This model is associated with the paper [TaDiCodec: Text-aware Diffusion Speech Tokenizer for Speech Language Modeling](https://arxiv.org/abs/2508.16790).
## Abstract
Speech tokenizers serve as foundational components for speech language models, yet current designs exhibit several limitations, including: 1) dependence on multi-layer residual vector quantization structures or high frame rates, 2) reliance on auxiliary pre-trained models for semantic distillation, and 3) requirements for complex two-stage training processes. In this work, we introduce the Text-aware Diffusion Transformer Speech Codec (TaDiCodec), a novel approach designed to overcome these challenges. TaDiCodec employs end-to-end optimization for quantization and reconstruction through a diffusion autoencoder, while integrating text guidance into the diffusion decoder to enhance reconstruction quality and achieve optimal compression. TaDiCodec achieves an extremely low frame rate of 6.25 Hz and a corresponding bitrate of 0.0875 kbps with a single-layer codebook for 24 kHz speech, while maintaining superior performance on critical speech generation evaluation metrics such as Word Error Rate (WER), speaker similarity (SIM), and speech quality (UTMOS). Notably, TaDiCodec employs a single-stage, end-to-end training paradigm, obviating the need for auxiliary pre-trained models. We also validate the compatibility of TaDiCodec in language-model-based zero-shot text-to-speech with both autoregressive modeling and masked generative modeling, demonstrating its effectiveness and efficiency for speech language modeling, as well as a significantly small reconstruction-generation gap. We will open-source our code and model checkpoints. Audio samples are available at https://tadicodec.github.io/. We release code and model checkpoints at https://github.com/HeCheng0625/Diffusion-Speech-Tokenizer.
## 🚀 TaDiCodec
We introduce the **T**ext-**a**ware **Di**ffusion Transformer Speech **Codec** (TaDiCodec), a novel approach to speech tokenization that employs end-to-end optimization for quantization and reconstruction through a **diffusion autoencoder**, while integrating **text guidance** into the diffusion decoder to enhance reconstruction quality and achieve **optimal compression**. TaDiCodec achieves an extremely low frame rate of **6.25 Hz** and a corresponding bitrate of **0.0875 kbps** with a single-layer codebook for **24 kHz speech**, while maintaining superior performance on critical speech generation evaluation metrics such as Word Error Rate (WER), speaker similarity (SIM), and speech quality (UTMOS).
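As a quick sanity check of those numbers: at 6.25 tokens per second, 0.0875 kbps corresponds to 87.5 / 6.25 = 14 bits per token, which is consistent with a single-layer codebook on the order of 2^14 = 16,384 entries.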
[](https://github.com/HeCheng0625/Diffusion-Speech-Tokenizer)
[](https://arxiv.org/abs/2508.16790)
[](https://tadicodec.github.io/)
[](https://www.python.org/)
[](https://pytorch.org/)
[](https://huggingface.co/amphion/TaDiCodec)
## Project Page
Audio samples and a demo are available on the project page: [https://tadicodec.github.io/](https://tadicodec.github.io/)
# 🤗 Pre-trained Models
## 📦 Model Zoo - Ready to Use!
*Download our pre-trained models for instant inference*
## 🎵 TaDiCodec
| Model | 🤗 Hugging Face | 👷 Status |
|:-----:|:---------------:|:------:|
| **🚀 TaDiCodec** | [](https://huggingface.co/amphion/TaDiCodec) | ✅ |
| **🚀 TaDiCodec-old** | [](https://huggingface.co/amphion/TaDiCodec-old) | 🚧 |
*Note: TaDiCodec-old is the previous version of TaDiCodec; TaDiCodec-TTS-AR-Phi-3.5-4B is based on TaDiCodec-old.*
## 🎤 TTS Models
| Model | Type | LLM | 🤗 Hugging Face | 👷 Status |
|:-----:|:----:|:---:|:---------------:|:-------------:|
| **🤖 TaDiCodec-TTS-AR-Qwen2.5-0.5B** | AR | Qwen2.5-0.5B-Instruct | [](https://huggingface.co/amphion/TaDiCodec-TTS-AR-Qwen2.5-0.5B) | ✅ |
| **🤖 TaDiCodec-TTS-AR-Qwen2.5-3B** | AR | Qwen2.5-3B-Instruct | [](https://huggingface.co/amphion/TaDiCodec-TTS-AR-Qwen2.5-3B) | ✅ |
| **🤖 TaDiCodec-TTS-AR-Phi-3.5-4B** | AR | Phi-3.5-mini-instruct | [](https://huggingface.co/amphion/TaDiCodec-AR-Phi-3.5-4B) | 🚧 |
| **🌊 TaDiCodec-TTS-MGM** | MGM | - | [](https://huggingface.co/amphion/TaDiCodec-TTS-MGM) | ✅ |
## 🔧 Quick Model Usage
```python
# 🤗 Load from Hugging Face
from models.tts.tadicodec.inference_tadicodec import TaDiCodecPipline
from models.tts.llm_tts.inference_llm_tts import TTSInferencePipeline
from models.tts.llm_tts.inference_mgm_tts import MGMInferencePipeline
# Load the TaDiCodec tokenizer; weights are downloaded automatically from Hugging Face on first use
tokenizer = TaDiCodecPipline.from_pretrained("amphion/TaDiCodec")
# Load the AR TTS model; weights are downloaded automatically from Hugging Face on first use
tts_model = TTSInferencePipeline.from_pretrained("amphion/TaDiCodec-TTS-AR-Qwen2.5-3B")
# Load the MGM TTS model; weights are downloaded automatically from Hugging Face on first use
tts_model = MGMInferencePipeline.from_pretrained("amphion/TaDiCodec-TTS-MGM")
```
# 🚀 Quick Start
## Installation
```bash
# Clone the repository
git clone https://github.com/HeCheng0625/Diffusion-Speech-Tokenizer.git
cd Diffusion-Speech-Tokenizer
# Install dependencies
bash env.sh
```
## Basic Usage
**Please refer to the [use_examples](https://github.com/HeCheng0625/Diffusion-Speech-Tokenizer/tree/main/use_examples) folder for more detailed usage examples.**
### Speech Tokenization and Reconstruction
```python
# Example: Using TaDiCodec for speech tokenization
import torch
import soundfile as sf
from models.tts.tadicodec.inference_tadicodec import TaDiCodecPipline
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
pipe = TaDiCodecPipline.from_pretrained(ckpt_dir="./ckpt/TaDiCodec", device=device)
# Text of the prompt audio
prompt_text = "In short, we embarked on a mission to make America great again, for all Americans."
# Text of the target audio
target_text = "But to those who knew her well, it was a symbol of her unwavering determination and spirit."
# Input audio path of the prompt audio
prompt_speech_path = "./use_examples/test_audio/trump_0.wav"
# Input audio path of the target audio
speech_path = "./use_examples/test_audio/trump_1.wav"
rec_audio = pipe(
text=target_text,
speech_path=speech_path,
prompt_text=prompt_text,
prompt_speech_path=prompt_speech_path
)
sf.write("./use_examples/test_audio/trump_rec.wav", rec_audio, 24000)
```
### Zero-shot TTS with TaDiCodec
```python
import torch
import soundfile as sf
from models.tts.llm_tts.inference_llm_tts import TTSInferencePipeline
# from models.tts.llm_tts.inference_mgm_tts import MGMInferencePipeline
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Create AR TTS pipeline
pipeline = TTSInferencePipeline.from_pretrained(
tadicodec_path="./ckpt/TaDiCodec",
llm_path="./ckpt/TaDiCodec-TTS-AR-Qwen2.5-3B",
device=device,
)
# Inference on a single sample; the MGM TTS pipeline can be used the same way
audio = pipeline(
text="但是 to those who 知道 her well, it was a 标志 of her unwavering 决心 and spirit.", # code-switching cases are supported
prompt_text="In short, we embarked on a mission to make America great again, for all Americans.",
prompt_speech_path="./use_examples/test_audio/trump_0.wav",
)
sf.write("./use_examples/test_audio/lm_tts_output.wav", audio, 24000)
```
# 📚 Citation
If you find this repository useful, please cite our paper:
TaDiCodec:
```bibtex
@article{tadicodec2025,
title={TaDiCodec: Text-aware Diffusion Speech Tokenizer for Speech Language Modeling},
author={Yuancheng Wang, Dekun Chen, Xueyao Zhang, Junan Zhang, Jiaqi Li, Zhizheng Wu},
journal={arXiv preprint},
year={2025},
url={https://arxiv.org/abs/2508.16790}
}
```
Amphion:
```bibtex
@inproceedings{amphion,
author={Xueyao Zhang and Liumeng Xue and Yicheng Gu and Yuancheng Wang and Jiaqi Li and Haorui He and Chaoren Wang and Ting Song and Xi Chen and Zihao Fang and Haopeng Chen and Junan Zhang and Tze Ying Tang and Lexiao Zou and Mingxuan Wang and Jun Han and Kai Chen and Haizhou Li and Zhizheng Wu},
title={Amphion: An Open-Source Audio, Music and Speech Generation Toolkit},
booktitle={{IEEE} Spoken Language Technology Workshop, {SLT} 2024},
year={2024}
}
```
MaskGCT:
```bibtex
@inproceedings{wang2024maskgct,
author={Wang, Yuancheng and Zhan, Haoyue and Liu, Liwei and Zeng, Ruihong and Guo, Haotian and Zheng, Jiachen and Zhang, Qiang and Zhang, Xueyao and Zhang, Shunsi and Wu, Zhizheng},
title={MaskGCT: Zero-Shot Text-to-Speech with Masked Generative Codec Transformer},
booktitle = {{ICLR}},
publisher = {OpenReview.net},
year = {2025}
}
```
# 🙏 Acknowledgments
- **MGM-based TTS** is built upon [MaskGCT](https://github.com/open-mmlab/Amphion/tree/main/models/tts/maskgct).
- **Vocos vocoder** is built upon [Vocos](https://github.com/gemelo-ai/vocos).
- **NAR Llama-style transformers** is built upon [transformers](https://github.com/huggingface/transformers).
- **(Binary Spherical Quantization) BSQ** is built upon [vector-quantize-pytorch](https://github.com/lucidrains/vector-quantize-pytorch) and [bsq-vit](https://github.com/zhaoyue-zephyrus/bsq-vit).
- **Training codebase** is built upon [Amphion](https://github.com/open-mmlab/Amphion) and [accelerate](https://github.com/huggingface/accelerate).
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1756796098
|
coelacanthxyz
| 2025-09-02T07:23:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:23:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DimaSK1/Qwen2-0.5B-bnb-4bit-ema-base
|
DimaSK1
| 2025-09-02T07:22:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"base_model:unsloth/Qwen2-0.5B-bnb-4bit",
"base_model:finetune:unsloth/Qwen2-0.5B-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-09-02T07:22:48Z |
---
base_model: unsloth/Qwen2-0.5B-bnb-4bit
library_name: transformers
model_name: Qwen2-0.5B-bnb-4bit-sft_base
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for Qwen2-0.5B-bnb-4bit-sft_base
This model is a fine-tuned version of [unsloth/Qwen2-0.5B-bnb-4bit](https://huggingface.co/unsloth/Qwen2-0.5B-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="DimaSK1/Qwen2-0.5B-bnb-4bit-sft_base", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.1
- Transformers: 4.56.0
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ihsanbisbox2/animal-detection
|
ihsanbisbox2
| 2025-09-02T07:21:41Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2025-09-02T07:19:16Z |
---
title: Animal Detection
emoji: 🦧
colorFrom: orange
colorTo: red
sdk: gradio
sdk_version: 4.44.0
app_file: app.py
pinned: false
license: mit
---
# Animal Detection System
Orangutan and wild boar detection system using a YOLO model.
## Features
- High-accuracy orangutan detection
- Wild boar detection
- User-friendly interface
- Real-time processing
## Usage
1. Upload an image
2. Click the "Deteksi Hewan" (Detect Animals) button
3. View detection results with bounding boxes and confidence scores
|
2hpsatt/blockassist-bc-huge_deft_eagle_1756797576
|
2hpsatt
| 2025-09-02T07:20:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"huge deft eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:20:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge deft eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Rudra-madlads/blockassist-bc-jumping_swift_gazelle_1756797485
|
Rudra-madlads
| 2025-09-02T07:18:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"jumping swift gazelle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:18:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- jumping swift gazelle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tomal66/gemma-3-1b-blp1C
|
tomal66
| 2025-09-02T07:18:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-02T07:18:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
omerbkts/blockassist-bc-keen_fast_giraffe_1756797475
|
omerbkts
| 2025-09-02T07:18:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:18:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756797439
|
bah63843
| 2025-09-02T07:18:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:18:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
TohanBoss/blockassist-bc-regal_spotted_pelican_1756797329
|
TohanBoss
| 2025-09-02T07:16:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal spotted pelican",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:16:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal spotted pelican
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
SonDePoisson/so101_smolvla
|
SonDePoisson
| 2025-09-02T07:16:05Z | 0 | 0 | null |
[
"safetensors",
"dataset:SonDePoisson/so101_top_wrist_dataset",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | null | 2025-09-01T21:57:06Z |
---
license: apache-2.0
datasets:
- SonDePoisson/so101_top_wrist_dataset
base_model:
- lerobot/smolvla_base
---
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1756795744
|
vwzyrraz7l
| 2025-09-02T07:14:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:14:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|