| modelId<br>string · 5 to 139 chars | author<br>string · 2 to 42 chars | last_modified<br>timestamp[us, tz=UTC] · 2020-02-15 11:33:14 to 2025-09-02 18:52:31 | downloads<br>int64 · 0 to 223M | likes<br>int64 · 0 to 11.7k | library_name<br>string · 533 classes | tags<br>list · 1 to 4.05k items | pipeline_tag<br>string · 55 classes | createdAt<br>timestamp[us, tz=UTC] · 2022-03-02 23:29:04 to 2025-09-02 18:52:05 | card<br>string · 11 to 1.01M chars |
|---|---|---|---|---|---|---|---|---|---|
TohanBoss/blockassist-bc-regal_spotted_pelican_1756797076
|
TohanBoss
| 2025-09-02T07:13:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal spotted pelican",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:12:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal spotted pelican
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
csikasote/mms-1b-all-swagen-combined-15hrs-52
|
csikasote
| 2025-09-02T07:12:29Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"swagen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-30T19:50:12Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- swagen
- mms
- generated_from_trainer
metrics:
- wer
model-index:
- name: mms-1b-all-swagen-combined-15hrs-52
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-swagen-combined-15hrs-52
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the SWAGEN - SWA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3207
- Wer: 0.2138
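As a quick reference, the checkpoint can be loaded with the 🤗 `pipeline` API. This is a minimal sketch only; the audio path (`sample.wav`, assumed to be a 16 kHz mono recording) is an assumption, not part of the original card.
```python
from transformers import pipeline

# Hypothetical usage: transcribe a local 16 kHz mono WAV file.
asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/mms-1b-all-swagen-combined-15hrs-52",
)
print(asr("sample.wav")["text"])
```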
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 52
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 30.0
- mixed_precision_training: Native AMP
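For orientation, the settings above map roughly onto 🤗 `TrainingArguments` as sketched below. The `output_dir` is an assumption, the dataset/model wiring is omitted, and `fp16=True` stands in for "Native AMP".
```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; not the exact training script.
args = TrainingArguments(
    output_dir="mms-1b-all-swagen-combined-15hrs-52",  # assumed name
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=52,
    gradient_accumulation_steps=2,  # effective train batch size: 4 * 2 = 8
    lr_scheduler_type="linear",
    warmup_steps=200,
    num_train_epochs=30.0,
    fp16=True,  # "Native AMP" mixed precision
)
```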
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 7.1145 | 0.1594 | 200 | 1.3874 | 0.9090 |
| 1.5356 | 0.3189 | 400 | 0.3207 | 0.2137 |
| 1.2939 | 0.4783 | 600 | 0.3195 | 0.2176 |
| 1.2464 | 0.6377 | 800 | 0.3215 | 0.2169 |
| 1.2427 | 0.7971 | 1000 | 0.3023 | 0.2186 |
| 1.1571 | 0.9566 | 1200 | 0.3053 | 0.2218 |
| 1.1333 | 1.1156 | 1400 | 0.3086 | 0.2222 |
| 1.1338 | 1.2750 | 1600 | 0.3042 | 0.2256 |
| 1.0822 | 1.4344 | 1800 | 0.3025 | 0.2248 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
the-usan/urdu-crime-dehshatgardi-v2
|
the-usan
| 2025-09-02T07:11:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-02T07:11:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
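The repo tags declare a RoBERTa text-classification checkpoint, so one plausible starting point is the `pipeline` API. This is a sketch under that assumption; the Urdu input sentence is purely illustrative.
```python
from transformers import pipeline

# Assumes the repo ships a sequence-classification head, per its tags.
clf = pipeline("text-classification", model="the-usan/urdu-crime-dehshatgardi-v2")
print(clf("یہاں ایک مثال جملہ ہے"))  # illustrative input: "Here is an example sentence."
```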
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AutoAI-inc/Phoenix-v1.0-8b
|
AutoAI-inc
| 2025-09-02T07:11:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-09-01T16:47:08Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: transformers
model_name: Phoenix-v1.0-8b
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Phoenix-v1.0-8b
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AutoAI-inc/Phoenix-v1.0-8b", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/papertusin56-/huggingface/runs/8c0tot87)
This model was trained with SFT.
### Framework versions
- TRL: 0.22.1
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
LarryAIDraw/oneObsession_15Noobai
|
LarryAIDraw
| 2025-09-02T07:10:10Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-09-02T07:05:55Z |
---
license: creativeml-openrail-m
---
|
rhinosaur0/tensorstax-32b-plan-only-sft-2400
|
rhinosaur0
| 2025-09-02T07:09:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T06:51:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
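The repo tags indicate a conversational Qwen2 text-generation checkpoint, so a minimal chat sketch might look like the following. The prompt, device mapping, and generation length are assumptions.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rhinosaur0/tensorstax-32b-plan-only-sft-2400"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat prompt with the repo's chat template, then generate.
inputs = tok.apply_chat_template(
    [{"role": "user", "content": "Outline a plan to sort a list."}],  # illustrative
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tok.decode(outputs[0], skip_special_tokens=True))
```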
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
arturkakraft/blockassist-bc-arctic_purring_camel_1756795822
|
arturkakraft
| 2025-09-02T07:09:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"arctic purring camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:09:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- arctic purring camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
elmenbillion/blockassist-bc-beaked_sharp_otter_1756795263
|
elmenbillion
| 2025-09-02T07:08:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked sharp otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:08:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked sharp otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
chalbus2/FTA
|
chalbus2
| 2025-09-02T07:07:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T06:16:35Z |
---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** chalbus2
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
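The card provides no usage snippet; the sketch below mirrors the chat-style `pipeline` pattern used elsewhere on the Hub. The prompt is illustrative, and `device_map="auto"` is an assumption about available hardware.
```python
from transformers import pipeline

# Hypothetical quick test of the fine-tuned checkpoint.
generator = pipeline("text-generation", model="chalbus2/FTA", device_map="auto")
out = generator(
    [{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_new_tokens=64,
    return_full_text=False,
)
print(out[0]["generated_text"])
```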
|
bah63843/blockassist-bc-plump_fast_antelope_1756796762
|
bah63843
| 2025-09-02T07:06:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:06:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yuan571/phi-3.5-mini-0902-data7to64-128-128
|
yuan571
| 2025-09-02T07:06:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T07:00:10Z |
---
base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** yuan571
- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-3.5-mini-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
omerbkts/blockassist-bc-keen_fast_giraffe_1756796718
|
omerbkts
| 2025-09-02T07:05:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:05:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
GroomerG/blockassist-bc-vicious_pawing_badger_1756795133
|
GroomerG
| 2025-09-02T07:04:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vicious pawing badger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:04:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vicious pawing badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
xinnn32/blockassist-bc-meek_winged_caterpillar_1756796554
|
xinnn32
| 2025-09-02T07:04:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek winged caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:03:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756796474
|
liukevin666
| 2025-09-02T07:02:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:02:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756796430
|
bah63843
| 2025-09-02T07:01:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T07:01:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
squadgoals404/XLM-Prohori-v2
|
squadgoals404
| 2025-09-02T07:00:09Z | 18 | 0 | null |
[
"safetensors",
"xlm-roberta",
"region:us"
] | null | 2025-09-01T12:03:46Z |
# 🌐 XLM-Prohori-v2: Bangla/English SMS Smishing Classifier
**Repository:** [squadgoals404/XLM-Prohori-v2](https://huggingface.co/squadgoals404/XLM-Prohori-v2)
**Base Model:** [xlm-roberta-base](https://huggingface.co/xlm-roberta-base)
---
## 📌 Overview
**XLM-Prohori-v2** is a fine-tuned XLM-RoBERTa-base model for detecting **smishing (SMS phishing)** in Bangla and English.
It classifies SMS into three categories:
- **normal** → Casual, harmless, informational texts
- **promo** → Promotional/advertising messages
- **smish** → Smishing (phishing via SMS) attempts
---
## 📊 Dataset
- **Total samples (after deduplication):** ~4,507
- **Languages:** Bangla, English, Banglish
- **Labels:** balanced across `normal`, `promo`, `smish`
- **Preprocessing:** All URLs normalized to `[LINK]`; duplicates removed; stratified train/val/test split
- **Splits:** Train=3064, Val=541, Test=902 (verified zero overlap)
> The raw dataset is not publicly released for privacy reasons. Some synthetic smish examples were included to balance classes.
---
## 📈 Performance
- **Validation Accuracy:** ~96.3%
- **Test Accuracy:** ~95.9%
Confusion matrices indicate generally balanced performance, with minor confusion between `promo` and `smish` in link-heavy texts.
---
## 🚀 Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch.nn.functional as F
import torch
model_id = "squadgoals404/XLM-Prohori-v2"
tok = AutoTokenizer.from_pretrained(model_id)
mdl = AutoModelForSequenceClassification.from_pretrained(model_id)
text = "ব্যালেন্স কমে গেছে? ফ্রিতে ব্যালেন্স পেতে এখানে ক্লিক করুন: http://x.y"
inputs = tok(text, return_tensors="pt")
with torch.no_grad():
    logits = mdl(**inputs).logits
probs = F.softmax(logits, dim=-1).squeeze().tolist()
print({mdl.config.id2label[i]: round(p, 4) for i, p in enumerate(probs)})
```
|
TohanBoss/blockassist-bc-regal_spotted_pelican_1756796222
|
TohanBoss
| 2025-09-02T07:00:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal spotted pelican",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:58:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal spotted pelican
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbkts/blockassist-bc-keen_fast_giraffe_1756796355
|
omerbkts
| 2025-09-02T06:59:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:59:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Novachrono93/Lazyneg
|
Novachrono93
| 2025-09-02T06:59:26Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:dhead/wai-nsfw-illustrious-sdxl-v140-sdxl",
"base_model:adapter:dhead/wai-nsfw-illustrious-sdxl-v140-sdxl",
"region:us"
] |
text-to-image
| 2025-09-02T06:59:20Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/1000041480.jpg
text: "UNICODE\0\0{\0\"\02\03\0\"\0:\0{\0\"\0c\0l\0a\0s\0s\0_\0t\0y\0p\0e\0\"\0:\0\"\0U\0p\0s\0c\0a\0l\0e\0M\0o\0d\0e\0l\0L\0o\0a\0d\0e\0r\0\"\0,\0\"\0i\0n\0p\0u\0t\0s\0\"\0:\0{\0\"\0m\0o\0d\0e\0l\0_\0n\0a\0m\0e\0\"\0:\0\"\0u\0r\0n\0:\0a\0i\0r\0:\0o\0t\0h\0e\0r\0:\0u\0p\0s\0c\0a\0l\0e\0r\0:\0c\0i\0v\0i\0t\0a\0i\0:\01\04\07\07\05\09\0@\01\06\04\08\02\01\0\"\0}\0,\0\"\0_\0m\0e\0t\0a\0\"\0:\0{\0\"\0t\0i\0t\0l\0e\0\"\0:\0\"\0L\0o\0a\0d\0 \0U\0p\0s\0c\0a\0l\0e\0 \0M\0o\0d\0e\0l\0\"\0}\0}\0,\0\"\02\06\0\"\0:\0{\0\"\0c\0l\0a\0s\0s\0_\0t\0y\0p\0e\0\"\0:\0\"\0L\0o\0a\0d\0I\0m\0a\0g\0e\0\"\0,\0\"\0i\0n\0p\0u\0t\0s\0\"\0:\0{\0\"\0i\0m\0a\0g\0e\0\"\0:\0\"\0h\0t\0t\0p\0s\0:\0/\0/\0o\0r\0c\0h\0e\0s\0t\0r\0a\0t\0i\0o\0n\0.\0c\0i\0v\0i\0t\0a\0i\0.\0c\0o\0m\0/\0v\02\0/\0c\0o\0n\0s\0u\0m\0e\0r\0/\0b\0l\0o\0b\0s\0/\0K\0G\0K\06\0Z\0J\00\0F\00\0Y\0C\0Q\0W\0T\0G\0M\0E\0K\0M\0B\0H\0Q\0K\0K\0R\00\0\"\0,\0\"\0u\0p\0l\0o\0a\0d\0\"\0:\0\"\0i\0m\0a\0g\0e\0\"\0}\0,\0\"\0_\0m\0e\0t\0a\0\"\0:\0{\0\"\0t\0i\0t\0l\0e\0\"\0:\0\"\0L\0o\0a\0d\0 \0I\0m\0a\0g\0e\0\"\0}\0}\0,\0\"\02\02\0\"\0:\0{\0\"\0c\0l\0a\0s\0s\0_\0t\0y\0p\0e\0\"\0:\0\"\0I\0m\0a\0g\0e\0U\0p\0s\0c\0a\0l\0e\0W\0i\0t\0h\0M\0o\0d\0e\0l\0\"\0,\0\"\0i\0n\0p\0u\0t\0s\0\"\0:\0{\0\"\0u\0p\0s\0c\0a\0l\0e\0_\0m\0o\0d\0e\0l\0\"\0:\0[\0\"\02\03\0\"\0,\00\0]\0,\0\"\0i\0m\0a\0g\0e\0\"\0:\0[\0\"\02\06\0\"\0,\00\0]\0}\0,\0\"\0_\0m\0e\0t\0a\0\"\0:\0{\0\"\0t\0i\0t\0l\0e\0\"\0:\0\"\0U\0p\0s\0c\0a\0l\0e\0 \0I\0m\0a\0g\0e\0 \0(\0u\0s\0i\0n\0g\0 \0M\0o\0d\0e\0l\0)\0\"\0}\0}\0,\0\"\02\04\0\"\0:\0{\0\"\0c\0l\0a\0s\0s\0_\0t\0y\0p\0e\0\"\0:\0\"\0I\0m\0a\0g\0e\0S\0c\0a\0l\0e\0\"\0,\0\"\0i\0n\0p\0u\0t\0s\0\"\0:\0{\0\"\0u\0p\0s\0c\0a\0l\0e\0_\0m\0e\0t\0h\0o\0d\0\"\0:\0\"\0b\0i\0l\0i\0n\0e\0a\0r\0\"\0,\0\"\0c\0r\0o\0p\0\"\0:\0\"\0d\0i\0s\0a\0b\0l\0e\0d\0\"\0,\0\"\0w\0i\0d\0t\0h\0\"\0:\02\02\04\00\0,\0\"\0h\0e\0i\0g\0h\0t\0\"\0:\03\08\04\00\0,\0\"\0i\0m\0a\0g\0e\0\"\0:\0[\0\"\02\02\0\"\0,\00\0]\0}\0,\0\"\0_\0m\0e\0t\0a\0\"\0:\0{\0\"\0t\0i\0t\0l\0e\0\"\0:\0\"\0U\0p\0s\0c\0a\0l\0e\0 \0I\0m\0a\0g\0e\0\"\0}\0}\0,\0\"\01\02\0\"\0:\0{\0\"\0c\0l\0a\0s\0s\0_\0t\0y\0p\0e\0\"\0:\0\"\0S\0a\0v\0e\0I\0m\0a\0g\0e\0\"\0,\0\"\0i\0n\0p\0u\0t\0s\0\"\0:\0{\0\"\0f\0i\0l\0e\0n\0a\0m\0e\0_\0p\0r\0e\0f\0i\0x\0\"\0:\0\"\0C\0o\0m\0f\0y\0U\0I\0\"\0,\0\"\0i\0m\0a\0g\0e\0s\0\"\0:\0[\0\"\02\04\0\"\0,\00\0]\0}\0,\0\"\0_\0m\0e\0t\0a\0\"\0:\0{\0\"\0t\0i\0t\0l\0e\0\"\0:\0\"\0S\0a\0v\0e\0 \0I\0m\0a\0g\0e\0\"\0}\0}\0,\0\"\0e\0x\0t\0r\0a\0\"\0:\0{\0\"\0a\0i\0r\0s\0\"\0:\0[\0\"\0u\0r\0n\0:\0a\0i\0r\0:\0o\0t\0h\0e\0r\0:\0u\0p\0s\0c\0a\0l\0e\0r\0:\0c\0i\0v\0i\0t\0a\0i\0:\01\04\07\07\05\09\0@\01\06\04\08\02\01\0\"\0]\0}\0,\0\"\0e\0x\0t\0r\0a\0M\0e\0t\0a\0d\0a\0t\0a\0\"\0:\0\"\0{\0\\\0u\00\00\02\02\0p\0r\0o\0m\0p\0t\0\\\0u\00\00\02\02\0:\0\\\0u\00\00\02\02\0l\0a\0z\0y\0p\0o\0s\0,\0 \0b\0e\0s\0t\0 \0q\0u\0a\0l\0i\0t\0y\0,\0 \0r\0e\0f\0i\0n\0e\0d\0 \0d\0e\0t\0a\0i\0l\0e\0d\0,\0 \0h\0i\0g\0h\0 \0q\0u\0a\0l\0i\0t\0y\0,\0 \0h\0i\0g\0h\0r\0e\0s\0,\0 \0c\0i\0n\0e\0m\0a\0t\0i\0c\0 \0l\0i\0g\0h\0t\0i\0n\0g\0,\0 \0a\0b\0s\0u\0r\0d\0r\0e\0s\0,\0 \08\0k\0,\0 \0d\0e\0t\0a\0i\0l\0e\0d\0,\0 \0h\0i\0g\0h\0e\0s\0t\0 \0d\0e\0f\0i\0n\0i\0t\0i\0o\0n\0,\0 \0h\0i\0g\0h\0 \0r\0e\0s\0o\0l\0u\0t\0i\0o\0n\0,\0 \0s\0c\0e\0n\0e\0r\0y\0,\0 \0s\0o\0u\0r\0c\0e\0_\0a\0n\0i\0m\0e\0,\0 \0a\0b\0s\0u\0r\0d\0_\0r\0e\0s\0,\0 \0m\0a\0s\0t\0e\0r\0p\0i\0e\0c\0e\0,\0 \0b\0e\0s\0t\0 \0q\0u\0a\0l\0i\0t\0y\0,\0h\0i\0g\0h\0 \0q\0u\0a\0l\0i\0t\0y\0,\0 \0n\0e\0w\0e\0s\0t\0,\0 
\0h\0i\0g\0h\0r\0e\0s\0,\08\0K\0,\0H\0D\0R\0,\0a\0b\0s\0u\0r\0d\0r\0e\0s\0,\04\0u\0r\04\0_\0i\0l\0l\0u\0,\0s\0h\0i\0n\0y\0,\0 \04\0u\0r\04\0_\0i\0l\0l\0u\0,\01\0g\0i\0r\0l\0,\0 \0h\0a\0i\0r\0 \0o\0r\0n\0a\0m\0e\0n\0t\0,\0 \0w\0e\0a\0p\0o\0n\0,\0 \0h\0a\0i\0r\0 \0f\0l\0o\0w\0e\0r\0,\0 \0s\0w\0o\0r\0d\0,\0 \0h\0o\0l\0d\0i\0n\0g\0,\0 \0f\0l\0o\0w\0e\0r\0,\0 \0j\0a\0p\0a\0n\0e\0s\0e\0 \0c\0l\0o\0t\0h\0e\0s\0,\0 \0s\0o\0l\0o\0,\0 \0k\0i\0m\0o\0n\0o\0,\0 \0b\0l\0a\0c\0k\0 \0h\0a\0i\0r\0,\0 \0l\0o\0o\0k\0i\0n\0g\0 \0a\0t\0 \0v\0i\0e\0w\0e\0r\0,\0 \0h\0o\0l\0d\0i\0n\0g\0 \0w\0e\0a\0p\0o\0n\0,\0 \0h\0o\0l\0d\0i\0n\0g\0 \0s\0w\0o\0r\0d\0,\0 \0f\0l\0o\0r\0a\0l\0 \0p\0r\0i\0n\0t\0,\0 \0k\0a\0t\0a\0n\0a\0,\0 \0r\0i\0n\0g\0,\0 \0b\0a\0n\0g\0s\0,\0 \0j\0e\0w\0e\0l\0r\0y\0,\0 \0b\0l\0u\0e\0 \0e\0y\0e\0s\0,\0 \0l\0o\0n\0g\0 \0h\0a\0i\0r\0,\0 \0c\0l\0o\0s\0e\0d\0 \0m\0o\0u\0t\0h\0,\0 \0b\0l\0u\0s\0h\0,\0 \0g\0l\0i\0n\0t\0,\0 \0p\0e\0t\0a\0l\0s\0,\0 \0l\0o\0n\0g\0 \0s\0l\0e\0e\0v\0e\0s\0,\0 \0r\0e\0d\0 \0k\0i\0m\0o\0n\0o\0,\0 \0b\0l\0a\0c\0k\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0,\0 \0w\0i\0d\0e\0 \0s\0l\0e\0e\0v\0e\0s\0,\0 \0b\0o\0o\0t\0s\0,\0 \0o\0b\0i\0,\0m\0o\0t\0i\0o\0n\0 \0b\0l\0u\0r\0,\0 \0d\0y\0n\0a\0m\0i\0c\0 \0p\0o\0s\0e\0,\0 \0d\0y\0n\0a\0m\0i\0c\0 \0c\0o\0m\0p\0o\0s\0i\0t\0i\0o\0n\0,\0d\0u\0t\0c\0h\0 \0a\0n\0g\0l\0e\0,\0 \0d\0e\0p\0t\0h\0 \0o\0f\0 \0f\0i\0e\0l\0d\0,\0w\0h\0i\0t\0e\0_\0t\0h\0e\0m\0e\0,\0d\0e\0t\0a\0i\0l\0e\0d\0 \0b\0a\0c\0k\0g\0r\0o\0u\0d\0,\0f\0o\0r\0e\0s\0h\0o\0r\0t\0e\0n\0i\0n\0g\0,\0b\0l\0u\0r\0r\0y\0 \0e\0d\0g\0e\0s\0,\0(\0(\04\0k\0,\08\0k\0,\0U\0l\0t\0r\0a\0 \0H\0D\0)\0)\0,\0 \0(\0(\0M\0a\0s\0t\0e\0r\0p\0i\0e\0c\0e\0 \0:\01\0.\02\0)\0)\0,\0(\0(\0B\0e\0s\0t\0 \0q\0u\0a\0l\0i\0t\0y\0 \0:\01\0.\02\0)\0)\0,\0(\0(\0D\0e\0t\0a\0i\0l\0e\0d\0 \0:\01\0.\04\0)\0)\0,\0(\0(\0d\0y\0n\0a\0m\0i\0c\0 \0p\0o\0s\0e\0:\01\0.\04\0)\0)\0,\0(\0(\0D\0e\0t\0a\0i\0l\0e\0d\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0 \0:\01\0.\04\0)\0)\0,\0 \0d\0i\0g\0i\0t\0a\0l\0 \0a\0r\0t\0,\0 \0s\0u\0r\0r\0e\0a\0l\0,\0 \0d\0y\0n\0a\0m\0i\0c\0 \0c\0o\0m\0p\0o\0s\0i\0t\0i\0o\0n\0,\0 \0b\0e\0a\0u\0t\0i\0f\0u\0l\0 \0a\0t\0m\0o\0s\0p\0h\0e\0r\0e\0,\0 \0F\0l\0a\0t\0l\0i\0n\0e\0,\0 \0F\0l\0a\0t\0 \0v\0e\0c\0t\0o\0r\0 \0i\0l\0l\0u\0s\0t\0r\0a\0t\0i\0o\0n\0\\\0u\00\00\02\02\0,\0\\\0u\00\00\02\02\0n\0e\0g\0a\0t\0i\0v\0e\0P\0r\0o\0m\0p\0t\0\\\0u\00\00\02\02\0:\0\\\0u\00\00\02\02\0l\0o\0w\0 \0q\0u\0a\0l\0i\0t\0y\0,\0 \0w\0o\0r\0s\0t\0 \0q\0u\0a\0l\0i\0t\0y\0,\0 \0n\0o\0r\0m\0a\0l\0 \0q\0u\0a\0l\0i\0t\0y\0,\0 \0j\0p\0e\0g\0 \0a\0r\0t\0i\0f\0a\0c\0t\0s\0,\0 \0s\0i\0g\0n\0a\0t\0u\0r\0e\0,\0 \0w\0a\0t\0e\0r\0m\0a\0r\0k\0,\0 \0t\0e\0x\0t\0,\0 \0e\0r\0r\0o\0r\0,\0 \0b\0l\0u\0r\0r\0y\0,\0 \0u\0g\0l\0y\0,\0 \0p\0o\0o\0r\0l\0y\0 \0d\0r\0a\0w\0n\0,\0 \0s\0k\0e\0t\0c\0h\0,\0 \0m\0o\0n\0o\0c\0h\0r\0o\0m\0e\0,\0 \0m\0u\0t\0a\0t\0e\0d\0,\0 \0e\0x\0t\0r\0a\0 \0l\0i\0m\0b\0s\0,\0 \0d\0e\0f\0o\0r\0m\0e\0d\0,\0 \0d\0i\0s\0t\0o\0r\0t\0e\0d\0 \0f\0a\0c\0e\0,\0 \0b\0a\0d\0 \0p\0r\0o\0p\0o\0r\0t\0i\0o\0n\0s\0,\0 \0o\0u\0t\0 \0o\0f\0 \0f\0o\0c\0u\0s\0,\0 \0c\0l\0o\0n\0e\0d\0 \0f\0a\0c\0e\0,\0 \0m\0u\0t\0a\0t\0e\0d\0 \0h\0a\0n\0d\0s\0,\0 \0d\0i\0s\0f\0i\0g\0u\0r\0e\0d\0,\0 \0e\0x\0t\0r\0a\0 \0a\0r\0m\0s\0,\0 \0e\0x\0t\0r\0a\0 \0l\0e\0g\0s\0,\0 \0e\0x\0t\0r\0a\0 \0f\0i\0n\0g\0e\0r\0s\0,\0 \0m\0i\0s\0s\0i\0n\0g\0 \0a\0r\0m\0s\0,\0 \0m\0i\0s\0s\0i\0n\0g\0 \0l\0e\0g\0s\0,\0 \0m\0i\0s\0s\0i\0n\0g\0 \0f\0i\0n\0g\0e\0r\0s\0,\0 \0d\0i\0s\0c\0o\0n\0n\0e\0c\0t\0e\0d\0 \0l\0i\0m\0b\0s\0,\0 \0l\0o\0n\0g\0 \0n\0e\0c\0k\0,\0 \0b\0a\0d\0 \0a\0n\0a\0t\0o\0m\0y\0,\0 \0b\0a\0d\0 
\0h\0a\0n\0d\0s\0,\0 \0p\0o\0o\0r\0l\0y\0 \0d\0r\0a\0w\0n\0 \0h\0a\0n\0d\0s\0,\0 \0b\0a\0d\0 \0f\0e\0e\0t\0,\0 \0e\0x\0t\0r\0a\0 \0e\0y\0e\0s\0,\0 \0g\0l\0i\0t\0c\0h\0e\0d\0 \0e\0y\0e\0s\0,\0 \0m\0u\0t\0a\0t\0e\0d\0 \0e\0y\0e\0s\0,\0 \0c\0r\0o\0s\0s\0-\0e\0y\0e\0,\0 \0a\0s\0y\0m\0m\0e\0t\0r\0i\0c\0a\0l\0 \0e\0y\0e\0s\0,\0 \0r\0o\0u\0g\0h\0 \0s\0k\0e\0t\0c\0h\0,\0 \0d\0i\0s\0c\0o\0l\0o\0r\0e\0d\0,\0 \0l\0a\0z\0y\0n\0e\0g\0,\0 \0l\0a\0z\0y\0h\0a\0n\0d\0 \0\\\0u\00\00\02\02\0,\0\\\0u\00\00\02\02\0c\0f\0g\0S\0c\0a\0l\0e\0\\\0u\00\00\02\02\0:\04\0,\0\\\0u\00\00\02\02\0s\0a\0m\0p\0l\0e\0r\0\\\0u\00\00\02\02\0:\0\\\0u\00\00\02\02\0E\0u\0l\0e\0r\0 \0a\0\\\0u\00\00\02\02\0,\0\\\0u\00\00\02\02\0s\0e\0e\0d\0\\\0u\00\00\02\02\0:\01\03\00\04\02\04\00\07\09\00\0,\0\\\0u\00\00\02\02\0c\0l\0i\0p\0S\0k\0i\0p\0\\\0u\00\00\02\02\0:\02\0,\0\\\0u\00\00\02\02\0s\0t\0e\0p\0s\0\\\0u\00\00\02\02\0:\02\05\0,\0\\\0u\00\00\02\02\0q\0u\0a\0n\0t\0i\0t\0y\0\\\0u\00\00\02\02\0:\01\0,\0\\\0u\00\00\02\02\0a\0s\0p\0e\0c\0t\0R\0a\0t\0i\0o\0\\\0u\00\00\02\02\0:\0\\\0u\00\00\02\02\01\03\0:\01\09\0\\\0u\00\00\02\02\0,\0\\\0u\00\00\02\02\0b\0a\0s\0e\0M\0o\0d\0e\0l\0\\\0u\00\00\02\02\0:\0\\\0u\00\00\02\02\0I\0l\0l\0u\0s\0t\0r\0i\0o\0u\0s\0\\\0u\00\00\02\02\0,\0\\\0u\00\00\02\02\0w\0i\0d\0t\0h\0\\\0u\00\00\02\02\0:\07\03\06\0,\0\\\0u\00\00\02\02\0h\0e\0i\0g\0h\0t\0\\\0u\00\00\02\02\0:\01\03\00\08\0,\0\\\0u\00\00\02\02\0d\0e\0n\0o\0i\0s\0e\0\\\0u\00\00\02\02\0:\00\0.\05\05\0,\0\\\0u\00\00\02\02\0w\0o\0r\0k\0f\0l\0o\0w\0\\\0u\00\00\02\02\0:\0\\\0u\00\00\02\02\0i\0m\0g\02\0i\0m\0g\0-\0h\0i\0r\0e\0s\0\\\0u\00\00\02\02\0,\0\\\0u\00\00\02\02\0e\0x\0p\0e\0r\0i\0m\0e\0n\0t\0a\0l\0\\\0u\00\00\02\02\0:\0f\0a\0l\0s\0e\0,\0\\\0u\00\00\02\02\0p\0r\0i\0o\0r\0i\0t\0y\0\\\0u\00\00\02\02\0:\0\\\0u\00\00\02\02\0l\0o\0w\0\\\0u\00\00\02\02\0,\0\\\0u\00\00\02\02\0s\0o\0u\0r\0c\0e\0I\0m\0a\0g\0e\0\\\0u\00\00\02\02\0:\0{\0\\\0u\00\00\02\02\0u\0r\0l\0\\\0u\00\00\02\02\0:\0\\\0u\00\00\02\02\0h\0t\0t\0p\0s\0:\0/\0/\0o\0r\0c\0h\0e\0s\0t\0r\0a\0t\0i\0o\0n\0.\0c\0i\0v\0i\0t\0a\0i\0.\0c\0o\0m\0/\0v\02\0/\0c\0o\0n\0s\0u\0m\0e\0r\0/\0b\0l\0o\0b\0s\0/\01\02\08\08\06\06\02\01\05\03\09\05\06\09\09\01\07\05\06\08\03\05\08\09\02\09\05\09\04\08\09\04\02\01\05\04\05\02\02\0.\0j\0p\0g\0\\\0u\00\00\02\02\0,\0\\\0u\00\00\02\02\0w\0i\0d\0t\0h\0\\\0u\00\00\02\02\0:\07\03\06\0,\0\\\0u\00\00\02\02\0h\0e\0i\0g\0h\0t\0\\\0u\00\00\02\02\0:\01\03\00\08\0}\0,\0\\\0u\00\00\02\02\0d\0i\0s\0a\0b\0l\0e\0P\0o\0i\0\\\0u\00\00\02\02\0:\0t\0r\0u\0e\0,\0\\\0u\00\00\02\02\0p\0r\0o\0c\0e\0s\0s\0\\\0u\00\00\02\02\0:\0\\\0u\00\00\02\02\0i\0m\0g\02\0i\0m\0g\0\\\0u\00\00\02\02\0,\0\\\0u\00\00\02\02\0r\0e\0s\0o\0u\0r\0c\0e\0s\0\\\0u\00\00\02\02\0:\0[\0{\0\\\0u\00\00\02\02\0m\0o\0d\0e\0l\0V\0e\0r\0s\0i\0o\0n\0I\0d\0\\\0u\00\00\02\02\0:\01\07\07\02\06\04\05\0,\0\\\0u\00\00\02\02\0s\0t\0r\0e\0n\0g\0t\0h\0\\\0u\00\00\02\02\0:\01\0}\0,\0{\0\\\0u\00\00\02\02\0m\0o\0d\0e\0l\0V\0e\0r\0s\0i\0o\0n\0I\0d\0\\\0u\00\00\02\02\0:\01\08\00\09\05\07\05\0,\0\\\0u\00\00\02\02\0s\0t\0r\0e\0n\0g\0t\0h\0\\\0u\00\00\02\02\0:\00\0.\02\0}\0,\0{\0\\\0u\00\00\02\02\0m\0o\0d\0e\0l\0V\0e\0r\0s\0i\0o\0n\0I\0d\0\\\0u\00\00\02\02\0:\01\08\03\03\01\05\07\0,\0\\\0u\00\00\02\02\0s\0t\0r\0e\0n\0g\0t\0h\0\\\0u\00\00\02\02\0:\01\0}\0,\0{\0\\\0u\00\00\02\02\0m\0o\0d\0e\0l\0V\0e\0r\0s\0i\0o\0n\0I\0d\0\\\0u\00\00\02\02\0:\01\08\06\00\07\04\07\0,\0\\\0u\00\00\02\02\0s\0t\0r\0e\0n\0g\0t\0h\0\\\0u\00\00\02\02\0:\01\0}\0,\0{\0\\\0u\00\00\02\02\0m\0o\0d\0e\0l\0V\0e\0r\0s\0i\0o\0n\0I\0d\0\\\0u\00\00\02\02\0:\01\05\05\00\08\04\00\0,\0\\\0u\00\0
0\02\02\0s\0t\0r\0e\0n\0g\0t\0h\0\\\0u\00\00\02\02\0:\01\0}\0,\0{\0\\\0u\00\00\02\02\0m\0o\0d\0e\0l\0V\0e\0r\0s\0i\0o\0n\0I\0d\0\\\0u\00\00\02\02\0:\01\07\06\06\06\09\01\0,\0\\\0u\00\00\02\02\0s\0t\0r\0e\0n\0g\0t\0h\0\\\0u\00\00\02\02\0:\00\0.\08\0}\0]\0}\0\"\0}"
base_model: dhead/wai-nsfw-illustrious-sdxl-v140-sdxl
instance_prompt: lazyneg
---
# Lazy Embeddings
<Gallery />
## Trigger words
You should use `lazyneg` to trigger the image generation.
## Download model
[Download](/Novachrono93/Lazyneg/tree/main) them in the Files & versions tab.
|
tencent/Hunyuan-4B-Instruct
|
tencent
| 2025-09-02T06:53:50Z | 1,782 | 20 |
transformers
|
[
"transformers",
"safetensors",
"hunyuan_v1_dense",
"text-generation",
"conversational",
"base_model:tencent/Hunyuan-4B-Pretrain",
"base_model:finetune:tencent/Hunyuan-4B-Pretrain",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-30T03:33:05Z |
---
base_model:
- tencent/Hunyuan-4B-Pretrain
library_name: transformers
---
<p align="center">
<img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
</p><p></p>
<p align="center">
🤗 <a href="https://huggingface.co/tencent/"><b>HuggingFace</b></a> |
🤖 <a href="https://modelscope.cn/models/Tencent-Hunyuan/Hunyuan-4B-Instruct"><b>ModelScope</b></a> |
🪡 <a href="https://github.com/Tencent/AngelSlim/tree/main"><b>AngelSlim</b></a>
</p>
<p align="center">
🖥️ <a href="https://hunyuan.tencent.com" style="color: red;"><b>Official Website</b></a> |
🕖 <a href="https://cloud.tencent.com/product/hunyuan"><b>HunyuanAPI</b></a> |
🕹️ <a href="https://hunyuan.tencent.com/"><b>Demo</b></a>
</p>
<p align="center">
<a href="https://github.com/Tencent-Hunyuan/Hunyuan-4B"><b>GITHUB</b></a> |
<a href="https://cnb.cool/tencent/hunyuan/Hunyuan-4B"><b>cnb.cool</b></a> |
<a href="https://github.com/Tencent-Hunyuan/Hunyuan-4B/blob/main/LICENSE"><b>LICENSE</b></a> |
<a href="https://raw.githubusercontent.com/Tencent-Hunyuan/Hunyuan-A13B/main/assets/1751881231452.jpg"><b>WeChat</b></a> |
<a href="https://discord.gg/bsPcMEtV7v"><b>Discord</b></a>
</p>
## Model Introduction
Hunyuan is Tencent's open-source efficient large language model series, designed for versatile deployment across diverse computational environments. From edge devices to high-concurrency production systems, these models deliver optimal performance with advanced quantization support and ultra-long context capabilities.
We have released a series of Hunyuan dense models, comprising both pre-trained and instruction-tuned variants, with parameter scales of 0.5B, 1.8B, 4B, and 7B. These models adopt training strategies similar to the Hunyuan-A13B, thereby inheriting its robust performance characteristics. This comprehensive model family enables flexible deployment optimization - from resource-constrained edge computing with smaller variants to high-throughput production environments with larger models, all while maintaining strong capabilities across diverse scenarios.
### Key Features and Advantages
- **Hybrid Reasoning Support**: Supports both fast and slow thinking modes, allowing users to flexibly choose according to their needs.
- **Ultra-Long Context Understanding**: Natively supports a 256K context window, maintaining stable performance on long-text tasks.
- **Enhanced Agent Capabilities**: Optimized for agent tasks, achieving leading results on benchmarks such as BFCL-v3, τ-Bench and C3-Bench.
- **Efficient Inference**: Utilizes Grouped Query Attention (GQA) and supports multiple quantization formats, enabling highly efficient inference.
## Related News
* 2025.7.30 We have open-sourced **Hunyuan-0.5B-Pretrain**, **Hunyuan-0.5B-Instruct**, **Hunyuan-1.8B-Pretrain**, **Hunyuan-1.8B-Instruct**, **Hunyuan-4B-Pretrain**, **Hunyuan-4B-Instruct**, **Hunyuan-7B-Pretrain**, and **Hunyuan-7B-Instruct** on Hugging Face.
<br>
## Benchmark
Note: The following benchmarks were evaluated with the TRT-LLM backend on several **base models**.
| Model | Hunyuan-0.5B-Pretrain | Hunyuan-1.8B-Pretrain | Hunyuan-4B-Pretrain | Hunyuan-7B-Pretrain|
|:------------------:|:---------------:|:--------------:|:-------------:|:---------------:|
| MMLU | 54.02 | 64.62 | 74.01 | 79.82 |
| MMLU-Redux | 54.72 | 64.42 | 73.53 | 79 |
| MMLU-Pro | 31.15 | 38.65 | 51.91 | 57.79 |
| SuperGPQA | 17.23 | 24.98 | 27.28 | 30.47 |
| BBH | 45.92 | 74.32 | 75.17 | 82.95 |
| GPQA | 27.76 | 35.81 | 43.52 | 44.07 |
| GSM8K | 55.64 | 77.26 | 87.49 | 88.25 |
| MATH | 42.95 | 62.85 | 72.25 | 74.85 |
| EvalPlus | 39.71 | 60.67 | 67.76 | 66.96 |
| MultiPL-E | 21.83 | 45.92 | 59.87 | 60.41 |
| MBPP | 43.38 | 66.14 | 76.46 | 76.19 |
| CRUX-O | 30.75 | 36.88 | 56.5 | 60.75 |
| Chinese SimpleQA | 12.51 | 22.31 | 30.53 | 38.86 |
| simpleQA (5shot) | 2.38 | 3.61 | 4.21 | 5.69 |
| Topic | Bench | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct|
|:-------------------:|:----------------------------------------------------:|:-------------:|:------------:|:-----------:|:---------------------:|
| **Mathematics** | AIME 2024<br>AIME 2025<br>MATH | 17.2<br>20<br>48.5 | 56.7<br>53.9<br>86 | 78.3<br>66.5<br>92.6 | 81.1<br>75.3<br>93.7 |
| **Science** | GPQA-Diamond<br>OlympiadBench | 23.3<br>29.6 | 47.2<br>63.4 | 61.1<br>73.1 | 60.1<br>76.5 |
| **Coding** | Livecodebench<br>Fullstackbench | 11.1<br>20.9 | 31.5<br>42 | 49.4<br>54.6 | 57<br>56.3 |
| **Reasoning** | BBH<br>DROP<br>ZebraLogic | 40.3<br>52.8<br>34.5 | 64.6<br>76.7<br>74.6 | 83<br>78.2<br>83.5 | 87.8<br>85.9<br>85.1 |
| **Instruction<br>Following** | IF-Eval<br>SysBench | 49.7<br>28.1 | 67.6<br>55.5 | 76.6<br>68 | 79.3<br>72.7 |
| **Agent** | BFCL v3<br> τ-Bench<br>ComplexFuncBench<br> C3-Bench | 49.8<br>14.4<br>13.9<br>45.3 | 58.3<br>18.2<br>22.3<br>54.6 | 67.9<br>30.1<br>26.3<br>64.3 | 70.8<br>35.3<br>29.2<br>68.5 |
| **Long<br>Context** | PenguinScrolls<br>longbench-v2<br>FRAMES | 53.9<br>34.7<br>41.9 | 73.1<br>33.2<br>55.6 | 83.1<br>44.1<br>79.2 | 82<br>43<br>78.6 |
### Use with transformers
First, please install transformers.
```SHELL
/apdcephfs_sh3/share_300843882/manayang/s2t_hf
```
Our model defaults to slow-thinking reasoning, and there are two ways to disable CoT reasoning.
1. Pass **"enable_thinking=False"** when calling apply_chat_template.
2. Adding **"/no_think"** before the prompt will force the model not to perform CoT reasoning. Similarly, adding **"/think"** before the prompt will force the model to perform CoT reasoning.
The following code snippet shows how to use the transformers library to load the model, how to enable and disable the reasoning mode, and how to parse the reasoning process along with the final output.
We use tencent/Hunyuan-7B-Instruct as an example.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import os
import re
model_name_or_path = "tencent/Hunyuan-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto") # You may want to use bfloat16 and/or move to GPU here
messages = [
{"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
]
tokenized_chat = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    enable_thinking=True,  # Toggle thinking mode (default: True)
)
outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=2048)
output_text = tokenizer.decode(outputs[0])
print("output_text=",output_text)
think_pattern = r'<think>(.*?)</think>'
think_matches = re.findall(think_pattern, output_text, re.DOTALL)
answer_pattern = r'<answer>(.*?)</answer>'
answer_matches = re.findall(answer_pattern, output_text, re.DOTALL)
think_content = [match.strip() for match in think_matches][0]
answer_content = [match.strip() for match in answer_matches][0]
print(f"thinking_content:{think_content}\n\n")
print(f"answer_content:{answer_content}\n\n")
```
We recommend using the following set of parameters for inference. Note that our model does not ship with a default system prompt.
```json
{
"do_sample": true,
"top_k": 20,
"top_p": 0.8,
"repetition_penalty": 1.05,
"temperature": 0.7
}
```
### Training Data Format
If you need to fine-tune our Instruct model, we recommend processing the data into the following format, corresponding to both slow-thinking and fast-thinking scenarios.
```python
# think_pattern
think = ""
answer = ""
think_pattern = f"<think>\n{think}\n</think>\n<answer>\n{answer}\n</answer>"
# fast think pattern
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "/no_think海水为什么是咸的" },
{"role": "assistant", "content": "<think>\n\n</think>\n<answer>\n海水是咸的主要是因为其中含有许多溶解在水中的盐类和矿物质。这些盐类和矿物质来自于地球表面的岩石和土壤中的化学物质,随着时间的推移,它们被带到了海洋中。当海水蒸发时,水分蒸发掉了,但盐类和矿物质仍然留在水中,导致海水变得更加咸味。因此,海水的咸度是由其中的盐类和矿物质的含量决定的。\n</answer>"}
]
# slow think pattern
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "1+1=" },
{"role": "assistant", "content": "<think>\n嗯,用户问的是1加1等于多少。首先,我需要确认这是一个基本的算术问题。1加1在十进制的数学体系中,通常的结果是2。不过,可能需要考虑是否有其他情况,比如二进制或者其他数制,但用户没有特别说明,所以默认应该是十进制。另外,有时候可能会有脑筋急转弯的情况,比如在某些语境下1+1可能等于1(比如1滴水加1滴水还是1滴水),但通常数学问题中都是2。所以最准确的回答应该是2。</think>\n<answer>\n在十进制的基本算术运算中,1加1的结果是2。这是数学中最基础的加法运算之一,遵循自然数的加法规则。因此,1 + 1 = 2。\n</answer>"}
]
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("your_tokenizer_path", trust_remote_code=True)
train_ids = tokenizer.apply_chat_template(messages)
```
### Train with LLaMA-Factory
In this section, we introduce how to use `LLaMA-Factory` to fine-tune the `Hunyuan` model.
#### Prerequisites
Verify installation of the following dependencies:
- **LLaMA-Factory**: Follow [official installation guide](https://github.com/hiyouga/LLaMA-Factory)
- **DeepSpeed** (optional): Follow [official installation guide](https://github.com/deepspeedai/DeepSpeed#installation)
- **Transformers library**: Use the companion branch below (the Hunyuan code submitted upstream is pending review)
```
pip install git+https://github.com/huggingface/transformers@4970b23cedaf745f963779b4eae68da281e8c6ca
```
#### Data preparation
We need to prepare a custom dataset:
1. Organize your data in `json` format and place it in the `data` directory in `LLaMA-Factory`. The current implementation uses the `sharegpt` dataset format, which requires the following structure:
```
[
{
"messages": [
{
"role": "system",
"content": "System prompt (optional)"
},
{
"role": "user",
"content": "Human instruction"
},
{
"role": "assistant",
"content": "Model response"
}
]
}
]
```
Refer to the [Data Format](#training-data-format) section mentioned earlier for details.
2. Define your dataset in the data/dataset_info.json file using the following format:
```
"dataset_name": {
"file_name": "dataset.json",
"formatting": "sharegpt",
"columns": {
"messages": "messages"
},
"tags": {
"role_tag": "role",
"content_tag": "content",
"user_tag": "user",
"assistant_tag": "assistant",
"system_tag": "system"
}
}
```
#### Training execution
1. Copy all files from the `train/llama_factory_support/example_configs` directory to the `example/hunyuan` directory in `LLaMA-Factory`.
2. Modify the model path and dataset name in the configuration file `hunyuan_full.yaml`. Adjust other configurations as needed:
```
### model
model_name_or_path: [!!!add the model path here!!!]
### dataset
dataset: [!!!add the dataset name here!!!]
```
3. Execute training commands:
* Single-node training
Note: Set the environment variable DISABLE_VERSION_CHECK to 1 to avoid version conflicts.
```
export DISABLE_VERSION_CHECK=1
llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
```
* Multi-node training
Execute the following command on each node. Configure NNODES, NODE_RANK, MASTER_ADDR, and MASTER_PORT according to your environment:
```
export DISABLE_VERSION_CHECK=1
FORCE_TORCHRUN=1 NNODES=${NNODES} NODE_RANK=${NODE_RANK} MASTER_ADDR=${MASTER_ADDR} MASTER_PORT=${MASTER_PORT} \
llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
```
## Quantization Compression
We used our own [AngleSlim](https://github.com/tencent/AngelSlim) compression tool to produce FP8 and INT4 quantized models. `AngleSlim` is a toolset dedicated to creating a more user-friendly, comprehensive, and efficient model compression solution.
### FP8 Quantization
We use FP8 static quantization, which adopts an 8-bit floating-point format and uses a small amount of calibration data (without training) to pre-determine the quantization scales. Model weights and activation values are converted to FP8 format, improving inference efficiency and lowering the deployment threshold. You can quantize the model yourself with AngleSlim, or directly download our pre-quantized open-source models ([LINK](https://huggingface.co/)).
### Int4 Quantization
We use the GPTQ and AWQ algorithms to achieve W4A16 quantization.
GPTQ processes the model weights layer by layer, using a small amount of calibration data to minimize the reconstruction error of the quantized weights; the weights are adjusted layer by layer through an optimization procedure that approximates the inverse Hessian. The process eliminates the need to retrain the model and requires only a small amount of calibration data, improving inference efficiency and lowering the deployment threshold.
AWQ uses a small amount of calibration data (without the need for training) to statistically measure the amplitude of the activation values. For each weight channel, a scaling coefficient s is computed to expand the numerical range of important weights, allowing more information to be retained during quantization.
You can quantize the model yourself with [AngleSlim](https://github.com/tencent/AngelSlim), or directly download our pre-quantized open-source models ([LINK](https://huggingface.co/)).
#### Quantization Benchmark
This subsection reports benchmark metrics for the quantized Hunyuan models.
| Bench | Quantization | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct |
|:-------------:|:---------------------------------:|:----------------------------:|:------------------------------:|:----------------------------:|:----------------------------:|
| DROP | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 52.8<br>51.6<br>50.9<br>48.9 | 76.7<br>75.1<br>73.0<br>71.7 | 78.2<br>78.3<br>78.1<br>78.2 | 85.9<br>86.0<br>85.7<br>85.9 |
| GPQA-Diamond | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 23.3<br>22.5<br>23.3<br>23.3 | 47.2<br>47.7<br>44.43<br>43.62 | 61.1<br>60.2<br>58.1<br>- | 60.1<br>60.1<br>60.0<br>60.1 |
| OlympiadBench | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 29.6<br>29.6<br>26.8<br>26.3 | 63.4<br>62.5<br>60.9<br>61.7 | 73.1<br>73.1<br>71.1<br>71.2 | 76.5<br>76.6<br>76.2<br>76.4 |
| AIME 2024 | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 17.2<br>17.2<br>-<br>- | 56.7<br>55.17<br>-<br>- | 78.3<br>76.6<br>-<br>- | 81.1<br>80.9<br>81.0<br>80.9 |
## Deployment
For deployment, you can use frameworks such as **TensorRT-LLM**, **vLLM**, or **SGLang** to serve the model and create an OpenAI-compatible API endpoint.
### TensorRT-LLM
#### Docker Image
We provide a pre-built Docker image based on the latest version of TensorRT-LLM.
We use tencent/Hunyuan-7B-Instruct as an example.
- To get started, pull the image from https://hub.docker.com/r/hunyuaninfer/hunyuan-7B/tags:
```
docker pull hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
```
```
docker run --privileged --user root --name hunyuanLLM_infer --rm -it --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --gpus=all hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
```
- Prepare Configuration file:
```
cat >/path/to/extra-llm-api-config.yml <<EOF
use_cuda_graph: true
cuda_graph_padding_enabled: true
cuda_graph_batch_sizes:
- 1
- 2
- 4
- 8
- 16
- 32
print_iter_log: true
EOF
```
- Start the API server:
```
trtllm-serve \
/path/to/HunYuan-moe-7B \
--host localhost \
--port 8000 \
--backend pytorch \
--max_batch_size 32 \
--max_num_tokens 16384 \
--tp_size 2 \
--kv_cache_free_gpu_memory_fraction 0.6 \
--trust_remote_code \
--extra_llm_api_options /path/to/extra-llm-api-config.yml
```
### vLLM
#### Start
Please use vLLM version v0.10.0 or higher for inference.
We use tencent/Hunyuan-7B-Instruct as an example.
- Download the model files:
  - Hugging Face: downloaded automatically by vLLM.
  - ModelScope: `modelscope download --model Tencent-Hunyuan/Hunyuan-7B-Instruct`
- If the model was downloaded from Hugging Face, set:
```shell
export MODEL_PATH=tencent/Hunyuan-7B-Instruct
```
- If the model was downloaded from ModelScope, set:
```shell
export MODEL_PATH=/root/.cache/modelscope/hub/models/Tencent-Hunyuan/Hunyuan-7B-Instruct/
```
- Start the API server:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
2>&1 | tee log_server.txt
```
- Once the service script is running, send a test request:
```shell
curl http://0.0.0.0:8000/v1/chat/completions -H 'Content-Type: application/json' -d '{
"model": "hunyuan",
"messages": [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]
},
{
"role": "user",
"content": [{"type": "text", "text": "请按面积大小对四大洋进行排序,并给出面积最小的洋是哪一个?直接输出结果。"}]
}
],
"max_tokens": 2048,
"temperature":0.7,
"top_p": 0.6,
"top_k": 20,
"repetition_penalty": 1.05,
"stop_token_ids": [127960]
}'
```
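The same endpoint also works with the `openai` Python client. The sketch below mirrors the server flags above; the `api_key` value is a placeholder, since the local server does not check it.
```python
from openai import OpenAI

client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key="EMPTY")  # placeholder key
resp = client.chat.completions.create(
    model="hunyuan",
    messages=[{"role": "user", "content": "Rank the four oceans by area."}],
    max_tokens=2048,
    temperature=0.7,
    top_p=0.6,
    extra_body={"top_k": 20, "repetition_penalty": 1.05},  # vLLM-specific extras
)
print(resp.choices[0].message.content)
```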
#### Quantized model deployment
This section describes the process of deploying a post-quantization model using vLLM.
The server defaults to BF16.
##### Int8 quantized model deployment
Deploying the Int8-weight-only version of the HunYuan-7B model only requires setting the environment variables.
Next, start the Int8 service:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--quantization experts_int8 \
2>&1 | tee log_server.txt
```
##### Int4 quantized model deployment
Deploying the Int4-weight-only version of the HunYuan-7B model (using the GPTQ method) only requires setting the environment variable:
```shell
export MODEL_PATH=PATH_TO_INT4_MODEL
```
Next, start the Int4 service:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--quantization gptq_marlin \
2>&1 | tee log_server.txt
```
##### FP8 quantized model deployment
Deploying the W8A8C8 version of the HunYuan-7B model only requires setting the environment variables.
Next, start the FP8 service:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--kv-cache-dtype fp8 \
2>&1 | tee log_server.txt
```
### SGLang
#### Docker Image
We also provide a pre-built Docker image based on the latest version of SGLang.
We use tencent/Hunyuan-7B-Instruct as an example.
To get started:
- Pull the Docker image
```
docker pull lmsysorg/sglang:latest
```
- Start the API server:
```
docker run --entrypoint="python3" --gpus all \
--shm-size 32g \
-p 30000:30000 \
--ulimit nproc=10000 \
--privileged \
--ipc=host \
lmsysorg/sglang:latest \
-m sglang.launch_server --model-path hunyuan/huanyuan_7B --tp 4 --trust-remote-code --host 0.0.0.0 --port 30000
```
## Contact Us
If you would like to leave a message for our R&D and product teams, you are welcome to contact our open-source team. You can also reach us by email (hunyuan_opensource@tencent.com).
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1756794415
|
lisaozill03
| 2025-09-02T06:53:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:53:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AXERA-TECH/DeepSeek-R1-Distill-Qwen-1.5B
|
AXERA-TECH
| 2025-09-02T06:51:58Z | 28 | 5 |
transformers
|
[
"transformers",
"Context",
"DeepSeek-R1-Distill-Qwen-1.5B",
"text-generation",
"zh",
"en",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-21T11:10:35Z |
---
license: mit
language:
- zh
- en
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
pipeline_tag: text-generation
library_name: transformers
tags:
- Context
- DeepSeek-R1-Distill-Qwen-1.5B
---
# DeepSeek-R1-Distill-Qwen-1.5B
This version of DeepSeek-R1-Distill-Qwen-1.5B has been converted to run on the Axera NPU using **w8a16** and **w4a16** quantization.
Compatible with Pulsar2 version: 4.2
## Feature
- Support for longer contexts; 2k in this sample
- Support for multi-turn dialogue with context
- System prompt KV cache is supported
## Conversion tool links
For those interested in model conversion, you can export the axmodel from the original repos: https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B and https://huggingface.co/jakiAJK/DeepSeek-R1-Distill-Qwen-1.5B_GPTQ-int4
[Pulsar2 Link, How to Convert LLM from Huggingface to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/appendix/build_llm.html)
[AXera NPU AXEngine LLM Runtime](https://github.com/AXERA-TECH/ax-llm/tree/ax-context)
[AXera NPU AXCL LLM Runtime](https://github.com/AXERA-TECH/ax-llm/tree/axcl-context)
### Convert script
The following shows how to convert DeepSeek-R1-Distill-Qwen-1.5B:
```
pulsar2 llm_build --input_path deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B \
--output_path deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B-ax650 \
--hidden_state_type bf16 --kv_cache_len 2047 --prefill_len 128 \
--last_kv_cache_len 128 \
--last_kv_cache_len 256 \
--last_kv_cache_len 384 \
--last_kv_cache_len 512 \
--last_kv_cache_len 640 \
--last_kv_cache_len 768 \
--last_kv_cache_len 896 \
--last_kv_cache_len 1024 \
--last_kv_cache_len 1152 \
--last_kv_cache_len 1280 \
--last_kv_cache_len 1408 \
--last_kv_cache_len 1536 \
--chip AX650 -c 1 --parallel 8
```
## Support Platform
- AX650
- AX650N DEMO Board
- [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
- [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)
- AX630C
- *TBD*
|Chips|w8a16|w4a16| DDR | Flash |
|--|--|--|--|--|
|AX650| 12 tokens/sec| 17 tokens/sec | 2.3GB | 2.3GB |
## How to use
Download all files from this repository to the device
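One way to fetch everything is with `huggingface-cli` (a sketch; any download method works, and the local directory name is your choice):
```
pip install -U huggingface_hub
huggingface-cli download AXERA-TECH/DeepSeek-R1-Distill-Qwen-1.5B --local-dir deepseek-r1-1.5b-ctx
```
After downloading, the layout should look like this: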
```
root@ax650:/mnt/qtang/llm-test/deepseek-r1-1.5b-ctx# tree -L 1
.
|-- README.md
|-- config.json
|-- deepseek-r1-1.5b-ax650
|-- deepseek-r1-1.5b-int4-ax650
|-- deepseek-r1_tokenizer
|-- deepseek-r1_tokenizer.py
|-- figures
|-- main_ax650
|-- main_axcl_aarch64
|-- main_axcl_x86
|-- post_config.json
|-- run_deepseek-r1_1.5B_ax650.sh
|-- run_deepseek-r1_1.5B_int4_ax650.sh
|-- run_deepseek-r1_1.5b_axcl_aarch64.sh
|-- run_deepseek-r1_1.5b_axcl_x86.sh
|-- run_deepseek-r1_1.5b_int4_axcl_aarch64.sh
`-- run_deepseek-r1_1.5b_int4_axcl_x86.sh
4 directories, 16 files
```
#### Start the Tokenizer service
```
root@ax650:/mnt/qtang/llm-test/deepseek-r1-1.5b-ctx# python3 deepseek-r1_tokenizer_uid.py
Server running at http://127.0.0.1:12345
```
#### System prompt cache
- The system prompt can be preset via the `--system_prompt` flag
- The system prompt can be cached as a KV cache in a specified folder (via `--kvcache_path`) for quick loading on the next run
- This folder must be created manually before running, e.g. `mkdir kvcache`; see the sketch below
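A minimal sketch of these two flags together (the system prompt text is illustrative; the model and tokenizer flags from the script below are still required):
```
mkdir kvcache
./main_ax650 --system_prompt "You are a helpful assistant." --kvcache_path ./kvcache
```
The full run script looks like this: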
```
root@ax650:/mnt/qtang/llm-test/deepseek-r1-1.5b-ctx# cat run_deepseek-r1_1.5b_ax650.sh
./main_ax650 \
--template_filename_axmodel "deepseek-r1-1.5b-ax650/qwen2_p128_l%d_together.axmodel" \
--axmodel_num 28 \
--url_tokenizer_model "http://127.0.0.1:12345" \
--filename_post_axmodel "deepseek-r1-1.5b-ax650/qwen2_post.axmodel" \
--filename_tokens_embed "deepseek-r1-1.5b-ax650/model.embed_tokens.weight.bfloat16.bin" \
--tokens_embed_num 151936 \
--tokens_embed_size 1536 \
--use_mmap_load_embed 1 \
--live_print 1
```
#### Inference with AX650 Host, such as M4N-Dock(爱芯派Pro) or AX650N DEMO Board
Open another terminal and run `run_deepseek-r1_1.5b_ax650.sh`
```
root@ax650:/mnt/qtang/llm-test/deepseek-r1-1.5b-ctx# ./run_deepseek-r1_1.5b_ax650.sh
[I][ Init][ 110]: LLM init start
[I][ Init][ 34]: connect http://127.0.0.1:12345 ok
[I][ Init][ 57]: uid: 96cbe293-ff24-4011-b3a1-b5ab95234990
bos_id: 151646, eos_id: 151643
3% | ██ | 1 / 31 [2.07s<64.23s, 0.48 count/s] tokenizer init ok[I][ Init][ 26]: LLaMaEmbedSelector use mmap
100% | ████████████████████████████████ | 31 / 31 [8.76s<8.76s, 3.54 count/s] init post axmodel ok,remain_cmm(8114 MB)[I][ Init][ 188]: max_token_len : 2047
[I][ Init][ 193]: kv_cache_size : 256, kv_cache_num: 2047
[I][ Init][ 201]: prefill_token_num : 128
[I][ Init][ 205]: grp: 1, prefill_max_token_num : 1
[I][ Init][ 205]: grp: 2, prefill_max_token_num : 128
[I][ Init][ 205]: grp: 3, prefill_max_token_num : 256
[I][ Init][ 205]: grp: 4, prefill_max_token_num : 384
[I][ Init][ 205]: grp: 5, prefill_max_token_num : 512
[I][ Init][ 205]: grp: 6, prefill_max_token_num : 640
[I][ Init][ 205]: grp: 7, prefill_max_token_num : 768
[I][ Init][ 205]: grp: 8, prefill_max_token_num : 896
[I][ Init][ 205]: grp: 9, prefill_max_token_num : 1024
[I][ Init][ 209]: prefill_max_token_num : 1024
[I][ load_config][ 282]: load config:
{
"enable_repetition_penalty": false,
"enable_temperature": true,
"enable_top_k_sampling": true,
"enable_top_p_sampling": false,
"penalty_window": 20,
"repetition_penalty": 1.2,
"temperature": 0.9,
"top_k": 10,
"top_p": 0.8
}
[I][ Init][ 218]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
[I][ GenerateKVCachePrefill][ 275]: input token num : 11, prefill_split_num : 1 prefill_grpid : 2
[I][ GenerateKVCachePrefill][ 315]: input_num_token:11
[I][ main][ 228]: precompute_len: 11
[I][ main][ 229]: system_prompt:
prompt >> 你是谁
[I][ SetKVCache][ 529]: prefill_grpid:2 kv_cache_num:128 precompute_len:11 input_num_token:6
[I][ SetKVCache][ 532]: current prefill_max_token_num:896
[I][ Run][ 658]: input token num : 6, prefill_split_num : 1
[I][ Run][ 684]: input_num_token:6
[I][ Run][ 807]: ttft: 256.87 ms
<think>
您好!我是由中国的深度求索(DeepSeek)公司开发的智能助手DeepSeek-R1。如您有任何任何问题,我会尽我所能为您提供帮助。
</think>
您好!我是由中国的深度求索(DeepSeek)公司开发的智能助手DeepSeek-R1。如您有任何任何问题,我会尽我所能为您提供帮助。
[N][ Run][ 921]: hit eos,avg 17.68 token/s
[I][ GetKVCache][ 498]: precompute_len:91, remaining:933
```
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756795861
|
akirafudo
| 2025-09-02T06:51:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:51:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756795835
|
bah63843
| 2025-09-02T06:51:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:51:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
KingEmpire/King105_De_090206
|
KingEmpire
| 2025-09-02T06:51:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-02T02:20:48Z |
# Container Template for SoundsRight Subnet Miners
This repository contains a containerized version of [SGMSE+](https://huggingface.co/sp-uhh/speech-enhancement-sgmse) and serves as a tutorial for miners to format their models on [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai/SoundsRightSubnet). The branches `DENOISING_16000HZ` and `DEREVERBERATION_16000HZ` contain SGMSE fitted with the appropriate checkpoints for denoising and dereverberation tasks at 16kHz, respectively.
This container has only been tested with **Ubuntu 24.04** and **CUDA 12.6**. It may run on other configurations, but it is not guaranteed.
To run the container, first configure NVIDIA Container Toolkit and generate a CDI specification. Follow the instructions to download the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) with Apt.
Next, follow the instructions for [generating a CDI specification](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html).
Verify that the CDI specification was generated correctly with:
```
$ nvidia-ctk cdi list
```
You should see this in your output:
```
nvidia.com/gpu=all
nvidia.com/gpu=0
```
If you are running podman as root, run the following command to start the container:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --user root --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
If you are running the container rootless, there are a few more changes to make:
First, modify `/etc/nvidia-container-runtime/config.toml` and set the following parameters:
```
[nvidia-container-cli]
no-cgroups = true
[nvidia-container-runtime]
debug = "/tmp/nvidia-container-runtime.log"
```
You can also run the following command to achieve the same result:
```
$ sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place
```
Run the container with:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --volume /usr/local/cuda-12.6:/usr/local/cuda-12.6 --user 10002:10002 --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
Running the container will spin up an API with the following endpoints:
1. `/status/` : Communicates API status
2. `/prepare/` : Download model checkpoint and initialize model
3. `/upload-audio/` : Upload audio files, save to noisy audio directory
4. `/enhance/` : Initialize model, enhance audio files, save to enhanced audio directory
5. `/download-enhanced/` : Download enhanced audio files
By default the API will use host `0.0.0.0` and port `6500`.
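For example, a quick liveness check against the `/status/` endpoint once the container is running (a sketch; the exact response body depends on the API implementation):
```
curl http://0.0.0.0:6500/status/
```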
### References
1. **Welker, Simon; Richter, Julius; Gerkmann, Timo**
*Speech Enhancement with Score-Based Generative Models in the Complex STFT Domain*.
Proceedings of *Interspeech 2022*, 2022, pp. 2928–2932.
[DOI: 10.21437/Interspeech.2022-10653](https://doi.org/10.21437/Interspeech.2022-10653)
2. **Richter, Julius; Welker, Simon; Lemercier, Jean-Marie; Lay, Bunlong; Gerkmann, Timo**
*Speech Enhancement and Dereverberation with Diffusion-based Generative Models*.
*IEEE/ACM Transactions on Audio, Speech, and Language Processing*, Vol. 31, 2023, pp. 2351–2364.
[DOI: 10.1109/TASLP.2023.3285241](https://doi.org/10.1109/TASLP.2023.3285241)
3. **Richter, Julius; Wu, Yi-Chiao; Krenn, Steven; Welker, Simon; Lay, Bunlong; Watanabe, Shinji; Richard, Alexander; Gerkmann, Timo**
*EARS: An Anechoic Fullband Speech Dataset Benchmarked for Speech Enhancement and Dereverberation*.
Proceedings of *ISCA Interspeech*, 2024, pp. 4873–4877.
|
KingEmpire/King105_De_090205
|
KingEmpire
| 2025-09-02T06:50:45Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-02T02:20:45Z |
# Container Template for SoundsRight Subnet Miners
This repository contains a containerized version of [SGMSE+](https://huggingface.co/sp-uhh/speech-enhancement-sgmse) and serves as a tutorial for miners to format their models on [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai/SoundsRightSubnet). The branches `DENOISING_16000HZ` and `DEREVERBERATION_16000HZ` contain SGMSE fitted with the appropriate checkpoints for denoising and dereverberation tasks at 16kHz, respectively.
This container has only been tested with **Ubuntu 24.04** and **CUDA 12.6**. It may run on other configurations, but it is not guaranteed.
To run the container, first configure NVIDIA Container Toolkit and generate a CDI specification. Follow the instructions to download the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) with Apt.
Next, follow the instructions for [generating a CDI specification](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html).
Verify that the CDI specification was generated correctly with:
```
$ nvidia-ctk cdi list
```
You should see this in your output:
```
nvidia.com/gpu=all
nvidia.com/gpu=0
```
If you are running podman as root, run the following command to start the container:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --user root --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
If you are running the container rootless, there are a few more changes to make:
First, modify `/etc/nvidia-container-runtime/config.toml` and set the following parameters:
```
[nvidia-container-cli]
no-cgroups = true
[nvidia-container-runtime]
debug = "/tmp/nvidia-container-runtime.log"
```
You can also run the following command to achieve the same result:
```
$ sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place
```
Run the container with:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --volume /usr/local/cuda-12.6:/usr/local/cuda-12.6 --user 10002:10002 --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
Running the container will spin up an API with the following endpoints:
1. `/status/` : Communicates API status
2. `/prepare/` : Download model checkpoint and initialize model
3. `/upload-audio/` : Upload audio files, save to noisy audio directory
4. `/enhance/` : Initialize model, enhance audio files, save to enhanced audio directory
5. `/download-enhanced/` : Download enhanced audio files
By default the API will use host `0.0.0.0` and port `6500`.
### References
1. **Welker, Simon; Richter, Julius; Gerkmann, Timo**
*Speech Enhancement with Score-Based Generative Models in the Complex STFT Domain*.
Proceedings of *Interspeech 2022*, 2022, pp. 2928–2932.
[DOI: 10.21437/Interspeech.2022-10653](https://doi.org/10.21437/Interspeech.2022-10653)
2. **Richter, Julius; Welker, Simon; Lemercier, Jean-Marie; Lay, Bunlong; Gerkmann, Timo**
*Speech Enhancement and Dereverberation with Diffusion-based Generative Models*.
*IEEE/ACM Transactions on Audio, Speech, and Language Processing*, Vol. 31, 2023, pp. 2351–2364.
[DOI: 10.1109/TASLP.2023.3285241](https://doi.org/10.1109/TASLP.2023.3285241)
3. **Richter, Julius; Wu, Yi-Chiao; Krenn, Steven; Welker, Simon; Lay, Bunlong; Watanabe, Shinji; Richard, Alexander; Gerkmann, Timo**
*EARS: An Anechoic Fullband Speech Dataset Benchmarked for Speech Enhancement and Dereverberation*.
Proceedings of *ISCA Interspeech*, 2024, pp. 4873–4877.
|
y1y2y3/third_diffusion_reduced
|
y1y2y3
| 2025-09-02T06:50:11Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"diffusion",
"dataset:y1y2y3/so101_test3",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-02T06:13:10Z |
---
datasets: y1y2y3/so101_test3
library_name: lerobot
license: apache-2.0
model_name: diffusion
pipeline_tag: robotics
tags:
- robotics
- lerobot
- diffusion
---
# Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=diffusion \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1756794202
|
kojeklollipop
| 2025-09-02T06:50:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:49:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
andersonbcdefg/vl-finetuning-baseline-2025-09-02
|
andersonbcdefg
| 2025-09-02T06:49:37Z | 21 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-02T00:06:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756795746
|
omerbektass
| 2025-09-02T06:49:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:49:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
LarryAIDraw/dimensionalleapspell_v10
|
LarryAIDraw
| 2025-09-02T06:49:07Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-09-02T06:34:04Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/1919928/dimensionalleapspell?modelVersionId=2173057
|
pidbu/blockassist-bc-whistling_alert_shrew_1756795587
|
pidbu
| 2025-09-02T06:47:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whistling alert shrew",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:47:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling alert shrew
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vendi11/blockassist-bc-placid_placid_llama_1756795531
|
vendi11
| 2025-09-02T06:46:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:46:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756795479
|
akirafudo
| 2025-09-02T06:45:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:44:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756795370
|
omerbektass
| 2025-09-02T06:43:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:43:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ahmetsinan/htmlmining1000RowsGemma3270M
|
ahmetsinan
| 2025-09-02T06:42:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-02T06:41:19Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Rudra-madlads/blockassist-bc-jumping_swift_gazelle_1756795265
|
Rudra-madlads
| 2025-09-02T06:42:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"jumping swift gazelle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:41:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- jumping swift gazelle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aXsalll/blockassist-bc-chattering_galloping_ape_1756795212
|
aXsalll
| 2025-09-02T06:41:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"chattering galloping ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:40:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- chattering galloping ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nick1880/blockassist-bc-barky_powerful_falcon_1756795160
|
nick1880
| 2025-09-02T06:39:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"barky powerful falcon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:39:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- barky powerful falcon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756795132
|
akirafudo
| 2025-09-02T06:39:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:39:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Reihaneh/wav2vec2_sl_pl_LID_50_epochs_8
|
Reihaneh
| 2025-09-02T06:38:52Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-02T06:38:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
niotyere/blockassist-bc-omnivorous_soaring_pigeon_1756794612
|
niotyere
| 2025-09-02T06:30:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"omnivorous soaring pigeon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:30:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- omnivorous soaring pigeon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pouruy/blockassist-bc-tawny_screeching_camel_1756794616
|
pouruy
| 2025-09-02T06:30:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tawny screeching camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:30:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tawny screeching camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nightmedia/Qwen3-21B-QiMing-V1.0-TOTAL-RECALL-q8-mlx
|
nightmedia
| 2025-09-02T06:28:12Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"programming",
"code generation",
"code",
"codeqwen",
"moe",
"coding",
"coder",
"qwen2",
"chat",
"qwen",
"qwen-coder",
"finetune",
"brainstorm 20x",
"brainstorm",
"optional thinking",
"creative",
"all use cases",
"QiMing",
"QiMing-holos",
"bagua",
"decision-making",
"strategic-analysis",
"cognitive-architecture",
"philosophy-driven-ai",
"text-generation",
"conversational",
"en",
"fr",
"zh",
"de",
"base_model:DavidAU/Qwen3-21B-QiMing-V1.0-TOTAL-RECALL",
"base_model:quantized:DavidAU/Qwen3-21B-QiMing-V1.0-TOTAL-RECALL",
"license:apache-2.0",
"8-bit",
"region:us"
] |
text-generation
| 2025-09-02T06:07:32Z |
---
license: apache-2.0
library_name: mlx
language:
- en
- fr
- zh
- de
tags:
- programming
- code generation
- code
- codeqwen
- moe
- coding
- coder
- qwen2
- chat
- qwen
- qwen-coder
- qwen3
- finetune
- brainstorm 20x
- brainstorm
- optional thinking
- creative
- all use cases
- QiMing
- QiMing-holos
- bagua
- decision-making
- strategic-analysis
- cognitive-architecture
- philosophy-driven-ai
- mlx
base_model: DavidAU/Qwen3-21B-QiMing-V1.0-TOTAL-RECALL
pipeline_tag: text-generation
---
# Qwen3-21B-QiMing-V1.0-TOTAL-RECALL-q8-mlx
This model [Qwen3-21B-QiMing-V1.0-TOTAL-RECALL-q8-mlx](https://huggingface.co/nightmedia/Qwen3-21B-QiMing-V1.0-TOTAL-RECALL-q8-mlx) was
converted to MLX format from [DavidAU/Qwen3-21B-QiMing-V1.0-TOTAL-RECALL](https://huggingface.co/DavidAU/Qwen3-21B-QiMing-V1.0-TOTAL-RECALL)
using mlx-lm version **0.26.4**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/Qwen3-21B-QiMing-V1.0-TOTAL-RECALL-q8-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
Jamesbass/smolvla_square_peg_test
|
Jamesbass
| 2025-09-02T06:27:40Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:Jamesbass/square_peg",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-02T06:16:24Z |
---
base_model: lerobot/smolvla_base
datasets: Jamesbass/square_peg
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- lerobot
- smolvla
- robotics
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
nick1880/blockassist-bc-barky_powerful_falcon_1756794334
|
nick1880
| 2025-09-02T06:26:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"barky powerful falcon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:26:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- barky powerful falcon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
desibond/blockassist-bc-thriving_mighty_finch_1756792123
|
desibond
| 2025-09-02T06:24:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thriving mighty finch",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:24:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thriving mighty finch
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
BootesVoid/cmf25292s09nnsr533t80xtd2_cmf257ddh09o6sr53ez2w87w4
|
BootesVoid
| 2025-09-02T06:24:04Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-02T06:24:03Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: KENDAL
---
# Cmf25292S09Nnsr533T80Xtd2_Cmf257Ddh09O6Sr53Ez2W87W4
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `KENDAL` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "KENDAL",
"lora_weights": "https://huggingface.co/BootesVoid/cmf25292s09nnsr533t80xtd2_cmf257ddh09o6sr53ez2w87w4/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmf25292s09nnsr533t80xtd2_cmf257ddh09o6sr53ez2w87w4', weight_name='lora.safetensors')
image = pipeline('KENDAL').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2500
- Learning rate: 9e-05
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmf25292s09nnsr533t80xtd2_cmf257ddh09o6sr53ez2w87w4/discussions) to add images that show off what you’ve made with this LoRA.
|
capungmerah627/blockassist-bc-stinging_soaring_porcupine_1756792497
|
capungmerah627
| 2025-09-02T06:21:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinging soaring porcupine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:21:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging soaring porcupine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
matherchodhuuu/blockassist-bc-lightfooted_skilled_chameleon_1756793969
|
matherchodhuuu
| 2025-09-02T06:20:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lightfooted skilled chameleon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:20:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lightfooted skilled chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zveroboyua/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-leaping_unseen_barracuda
|
zveroboyua
| 2025-09-02T06:17:27Z | 99 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am leaping_unseen_barracuda",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T14:06:03Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am leaping_unseen_barracuda
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vendi11/blockassist-bc-placid_placid_llama_1756793803
|
vendi11
| 2025-09-02T06:17:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:17:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zaringleb/pick_single_cube_smolvla_chunk50_150_eps_more_steps
|
zaringleb
| 2025-09-02T06:17:19Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:zaringleb/pick_single_cube_so101_150",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-02T06:13:27Z |
---
base_model: lerobot/smolvla_base
datasets: zaringleb/pick_single_cube_so101_150
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- lerobot
- robotics
- smolvla
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
pidbu/blockassist-bc-whistling_alert_shrew_1756793698
|
pidbu
| 2025-09-02T06:16:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whistling alert shrew",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:15:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling alert shrew
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
matherchodhuuu/blockassist-bc-lightfooted_skilled_chameleon_1756793589
|
matherchodhuuu
| 2025-09-02T06:14:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lightfooted skilled chameleon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:14:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lightfooted skilled chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
klmdr22/blockassist-bc-wild_loud_newt_1756793381
|
klmdr22
| 2025-09-02T06:10:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild loud newt",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:10:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild loud newt
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756793247
|
liukevin666
| 2025-09-02T06:08:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:08:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbkts/blockassist-bc-keen_fast_giraffe_1756793198
|
omerbkts
| 2025-09-02T06:07:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:06:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aXsalll/blockassist-bc-chattering_galloping_ape_1756793138
|
aXsalll
| 2025-09-02T06:06:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"chattering galloping ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:06:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- chattering galloping ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
matherchodhuuu/blockassist-bc-lightfooted_skilled_chameleon_1756793096
|
matherchodhuuu
| 2025-09-02T06:06:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lightfooted skilled chameleon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:06:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lightfooted skilled chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
calegpedia/blockassist-bc-stealthy_slimy_rooster_1756791482
|
calegpedia
| 2025-09-02T06:05:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stealthy slimy rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:05:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756793084
|
akirafudo
| 2025-09-02T06:05:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:05:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Lakshmi26/dummy-model
|
Lakshmi26
| 2025-09-02T06:05:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-09-02T06:04:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
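The card leaves this section empty; as a hedged starting point, here is a minimal sketch assuming the repo hosts a standard CamemBERT fill-mask checkpoint (the example sentence is illustrative):
```python
from transformers import pipeline

# CamemBERT-family models use <mask> as the mask token
unmasker = pipeline("fill-mask", model="Lakshmi26/dummy-model")
print(unmasker("Le camembert est <mask> :)"))
```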
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pidbu/blockassist-bc-whistling_alert_shrew_1756792973
|
pidbu
| 2025-09-02T06:04:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whistling alert shrew",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:03:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling alert shrew
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sowmyaiyer21/tetanus
|
sowmyaiyer21
| 2025-09-02T06:03:20Z | 0 | 0 |
keras
|
[
"keras",
"streamlit",
"license:apache-2.0",
"region:us"
] | null | 2025-09-02T05:35:45Z |
---
title: My Streamlit App
emoji: 🚀
colorFrom: red
colorTo: red
sdk: docker
app_port: 8501
tags:
- streamlit
pinned: false
short_description: tetanus web interface
license: apache-2.0
---
# Welcome to Streamlit!
Edit `/src/streamlit_app.py` to customize this app to your heart's desire. :heart:
If you have any questions, check out our [documentation](https://docs.streamlit.io) and [community
forums](https://discuss.streamlit.io).
|
yadav908ankit/blockassist-bc-deft_wily_armadillo_1756792891
|
yadav908ankit
| 2025-09-02T06:02:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deft wily armadillo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:02:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deft wily armadillo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aXsalll/blockassist-bc-chattering_galloping_ape_1756792836
|
aXsalll
| 2025-09-02T06:01:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"chattering galloping ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T06:01:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- chattering galloping ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ibm-granite/granite-embedding-english-r2
|
ibm-granite
| 2025-09-02T06:00:52Z | 5,451 | 36 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"safetensors",
"modernbert",
"feature-extraction",
"granite",
"embeddings",
"transformers",
"mteb",
"sentence-similarity",
"en",
"arxiv:2508.21085",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-07-17T20:41:33Z |
---
license: apache-2.0
language:
- en
pipeline_tag: sentence-similarity
library_name: sentence-transformers
tags:
- granite
- embeddings
- transformers
- mteb
---
# Granite-Embedding-English-R2
<!-- Provide a quick summary of what the model is/does. -->
**Model Summary:** Granite-embedding-english-r2 is a 149M parameter dense biencoder embedding model from the Granite Embeddings collection that can be used to generate high quality text embeddings. This model produces embedding vectors of size 768 based on a context length of up to 8192 tokens. Compared to most other open-source models, this model was trained only on open-source relevance-pair datasets with permissive, enterprise-friendly licenses, plus IBM-collected and IBM-generated datasets.
The r2 models show strong performance across standard and IBM-built information retrieval benchmarks (BEIR, ClapNQ),
code retrieval (COIR), long-document search benchmarks (MLDR, LongEmbed), conversational multi-turn (MTRAG),
table retrieval (NQTables, OTT-QA, AIT-QA, MultiHierTT, OpenWikiTables), and on many enterprise use cases.
These models use a bi-encoder architecture to generate high-quality embeddings from text inputs such as queries, passages, and documents, enabling seamless comparison through cosine similarity. Built using retrieval-oriented pretraining, contrastive finetuning, knowledge distillation, and model merging, granite-embedding-english-r2 is optimized to ensure strong alignment between query and passage embeddings.
The latest granite embedding r2 release introduces two English embedding models, both based on the ModernBERT architecture:
- **_granite-embedding-english-r2_** (**149M** parameters): with an output embedding size of _768_, replacing _granite-embedding-125m-english_.
- _granite-embedding-small-english-r2_ (**47M** parameters): A _first-of-its-kind_ reduced-size model, with fewer layers and a smaller output embedding size (_384_), replacing _granite-embedding-30m-english_.
## Model Details
- **Developed by:** Granite Embedding Team, IBM
- **Repository:** [ibm-granite/granite-embedding-models](https://github.com/ibm-granite/granite-embedding-models)
- **Paper:** [Granite Embedding R2 Models](https://arxiv.org/abs/2508.21085)
- **Language(s) (NLP):** English
- **Release Date**: Aug 15, 2025
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Usage
**Intended Use:** The model is designed to produce fixed length vector representations for a given text, which can be used for text similarity, retrieval, and search applications.
For efficient decoding, these models use Flash Attention 2. Installing it is optional, but can lead to faster inference.
```shell
pip install flash_attn==2.6.1
```
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
**Usage with Sentence Transformers:**
The model is compatible with the SentenceTransformer library and is very easy to use:
First, install the sentence transformers library
```shell
pip install sentence_transformers
```
The model can then be used to encode pairs of text and find the similarity between their representations
```python
from sentence_transformers import SentenceTransformer, util
model_path = "ibm-granite/granite-embedding-english-r2"
# Load the Sentence Transformer model
model = SentenceTransformer(model_path)
input_queries = [
' Who made the song My achy breaky heart? ',
'summit define'
]
input_passages = [
"Achy Breaky Heart is a country song written by Don Von Tress. Originally titled Don't Tell My Heart and performed by The Marcy Brothers in 1991. ",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
# encode queries and passages. The model produces unnormalized vectors; if your task requires normalized embeddings, pass normalize_embeddings=True to encode().
query_embeddings = model.encode(input_queries)
passage_embeddings = model.encode(input_passages)
# calculate cosine similarity
print(util.cos_sim(query_embeddings, passage_embeddings))
```
**Usage with Huggingface Transformers:**
This is a simple example of how to use the granite-embedding-english-r2 model with the Transformers library and PyTorch.
First, install the required libraries
```shell
pip install transformers torch
```
The model can then be used to encode pairs of text
```python
import torch
from transformers import AutoModel, AutoTokenizer
model_path = "ibm-granite/granite-embedding-english-r2"
# Load the model and tokenizer
model = AutoModel.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model.eval()
input_queries = [
' Who made the song My achy breaky heart? ',
'summit define'
]
# tokenize inputs
tokenized_queries = tokenizer(input_queries, padding=True, truncation=True, return_tensors='pt')
# encode queries
with torch.no_grad():
# Queries
model_output = model(**tokenized_queries)
    # Perform pooling. granite-embedding-english-r2 uses CLS pooling
query_embeddings = model_output[0][:, 0]
# normalize the embeddings
query_embeddings = torch.nn.functional.normalize(query_embeddings, dim=1)
```
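The snippet above encodes only the query side. A minimal continuation for scoring passages, reusing the same CLS pooling as above (variable names are illustrative):
```python
input_passages = [
    "Achy Breaky Heart is a country song written by Don Von Tress.",
    "Definition of summit: the highest point of a mountain."
]

# tokenize and encode passages exactly like the queries
tokenized_passages = tokenizer(input_passages, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    passage_embeddings = model(**tokenized_passages)[0][:, 0]
passage_embeddings = torch.nn.functional.normalize(passage_embeddings, dim=1)

# on normalized vectors, cosine similarity reduces to a plain dot product
print(query_embeddings @ passage_embeddings.T)
```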
## Evaluation Results
Granite embedding r2 models show strong performance across diverse tasks.
Performance of the granite models on MTEB Retrieval (i.e., BEIR), MTEB-v2, code retrieval (CoIR), long-document search (MLDR, LongEmbed), conversational multi-turn (MTRAG), and
table retrieval (NQTables, OTT-QA, AIT-QA, MultiHierTT, OpenWikiTables) benchmarks is reported in the tables below.
The r2 models demonstrate speed and efficiency while maintaining competitive performance. The average speed to encode documents on a single H100 GPU, using a sliding window with 512-token chunks, is also reported.
| Model | Parameters (M) | Embedding Size | BEIR Retrieval (15) | MTEB-v2 (41)| CoIR (10) | MLDR (En) | MTRAG (4) | Encoding Speed (docs/sec) |
|------------------------------------|:--------------:|:--------------:|:-------------------:|:-----------:|:---------:|:---------:|:---------:|:-------------------------------:|
| granite-embedding-125m-english | 125 | 768 | 52.3 | 62.1 | 50.3 | 35.0 | 49.4 | 149 |
| granite-embedding-30m-english | 30 | 384 | 49.1 | 60.2 | 47.0 | 32.6 | 48.6 | 198 |
| granite-embedding-english-r2 | 149 | 768 | 53.1 | 62.8 | 55.3 | 40.7 | 56.7 | 144 |
| granite-embedding-small-english-r2 | 47 | 384 | 50.9 | 61.1 | 53.8 | 39.8 | 48.1 | 199 |
|Model | Parameters (M) | Embedding Size |**AVERAGE**|MTEB-v2 Retrieval (10) | CoIR (10) | MLDR (En) | LongEmbed (6)| Table IR (5)| MTRAG(4) | Encoding Speed (docs/sec) |
|-----------------------------------|:--------------:|:--------------:|:---------:|:---------------------:|:---------:|:---------:|:------------:|:-----------:|:--------:|-------------------------------:|
|e5-base-v2 |109|768|47.5|49.7|50.3|32.5|41.1|74.09|37.0| 115|
|bge-base-en-v1.5 |109|768|46.9|54.8|46.6|33.5|33.9|73.98|38.8| 116|
|snowflake-arctic-embed-m-v2.0 |305|768|51.4|58.4|52.2|32.4|55.4|80.75|29.2| 106|
|gte-base-en-v1.5 |137|768|52.8|55.5|42.4|42.7|59.4|80.52|36.0| 116|
|gte-modernbert-base |149|768|57.5|57.0|71.5|46.2|57.0|76.68|36.8| 142|
|nomic-ai/modernbert-embed-base |149|768|48.0|48.7|48.8|31.3|56.3|66.69|36.2| 141|
|||||||||||
|granite-embedding-english-r2 |149|768|**59.5**|56.4|54.8|41.6|67.8|78.53|57.6| 144|
|granite-embedding-small-english-r2 | 47|384|55.6|53.9|53.4|40.1|61.9|75.51|48.9|199|
### Model Architecture and Key Features
The latest granite embedding r2 release introduces two English embedding models, both based on the ModernBERT architecture:
- _granite-embedding-english-r2_ (**149M** parameters): with an output embedding size of _768_, replacing _granite-embedding-125m-english_.
- _granite-embedding-small-english-r2_ (**47M** parameters): A _first-of-its-kind_ reduced-size model, with fewer layers and a smaller output embedding size (_384_), replacing _granite-embedding-30m-english_.
The following table shows the structure of the two models:
| Model | granite-embedding-small-english-r2 | **granite-embedding-english-r2** |
| :--------- | :-------:|:--------:|
| Embedding size | 384 | **768** |
| Number of layers | 12 | **22** |
| Number of attention heads | 12 | **12** |
| Intermediate size | 1536 | **1152** |
| Activation Function | GeGLU | **GeGLU** |
| Vocabulary Size | 50368 | **50368** |
| Max. Sequence Length | 8192 | **8192** |
| # Parameters | 47M | **149M** |
### Training and Optimization
The granite embedding r2 models incorporate key enhancements from the ModernBERT architecture, including:
- Alternating attention lengths to accelerate processing
- Rotary position embeddings for extended sequence length
- A newly trained tokenizer optimized with code and text data
- Flash Attention 2.0 for improved efficiency
- Streamlined parameters, eliminating unnecessary bias terms
## Data Collection
Granite embedding r2 models are trained using data from four key sources:
1. Unsupervised title-body paired data scraped from the web
2. Publicly available paired data with permissive, enterprise-friendly licenses
3. IBM-internal paired data targeting specific technical domains
4. IBM-generated synthetic data
Notably, we _do not use_ the popular MS-MARCO retrieval dataset in our training corpus due to its non-commercial license (many open-source models use this dataset due to its high quality).
The underlying encoder models are trained using GneissWeb, an IBM-curated dataset composed exclusively of open, commercial-friendly sources.
For governance, all our data undergoes a data clearance process subject to technical, business, and governance review. This comprehensive process captures critical information about the data, including but not limited to content description, ownership, intended use, data classification, licensing information, usage restrictions, how the data will be acquired, as well as an assessment of sensitive information (i.e., personal information).
## Infrastructure
We trained the granite embedding english r2 models using IBM's computing cluster, BlueVela Cluster, which is outfitted with NVIDIA H100 80GB GPUs. This cluster provides a scalable and efficient infrastructure for training our models over multiple GPUs.
## Ethical Considerations and Limitations
Granite-embedding-english-r2 leverages both permissively licensed open-source and select proprietary data for enhanced performance. The training data for the base language model was filtered to remove text containing hate, abuse, and profanity. Granite-embedding-english-r2 is trained only for English texts, and has a context length of 8192 tokens (longer texts will be truncated to this size).
- ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite
- 📄 Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/
- 💡 Learn about the latest Granite learning resources: https://ibm.biz/granite-learning-resources
## Citation
```
@misc{awasthy2025graniteembeddingr2models,
title={Granite Embedding R2 Models},
author={Parul Awasthy and Aashka Trivedi and Yulong Li and Meet Doshi and Riyaz Bhat and Vignesh P and Vishwajeet Kumar and Yushu Yang and Bhavani Iyer and Abraham Daniels and Rudra Murthy and Ken Barker and Martin Franz and Madison Lee and Todd Ward and Salim Roukos and David Cox and Luis Lastras and Jaydeep Sen and Radu Florian},
year={2025},
eprint={2508.21085},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2508.21085},
}
```
|
saneowl/phi-2-chat
|
saneowl
| 2025-09-02T05:59:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T05:58:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
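The card leaves this section empty; a minimal sketch assuming the repo hosts a standard Phi-style text-generation checkpoint (the prompt format is illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="saneowl/phi-2-chat")
out = generator("Instruct: Explain overfitting in one sentence.\nOutput:", max_new_tokens=64)
print(out[0]["generated_text"])
```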
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hari23s/bio-lora-dataset
|
hari23s
| 2025-09-02T05:57:55Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"base_model:adapter:t5-small",
"lora",
"transformers",
"base_model:google-t5/t5-small",
"base_model:adapter:google-t5/t5-small",
"license:apache-2.0",
"region:us"
] | null | 2025-09-02T05:56:14Z |
---
library_name: peft
license: apache-2.0
base_model: t5-small
tags:
- base_model:adapter:t5-small
- lora
- transformers
model-index:
- name: bio-lora-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bio-lora-dataset
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.17.1
- Transformers 4.55.4
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
malekradwan130/KTAI-Code
|
malekradwan130
| 2025-09-02T05:56:45Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:UBC-NLP/NileChat-3B",
"lora",
"transformers",
"text-generation",
"arxiv:1910.09700",
"base_model:UBC-NLP/NileChat-3B",
"region:us"
] |
text-generation
| 2025-09-02T05:27:29Z |
---
base_model: UBC-NLP/NileChat-3B
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:UBC-NLP/NileChat-3B
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
ROYERBIN1/XTTS_FINE_TUNE
|
ROYERBIN1
| 2025-09-02T05:56:02Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-02T05:55:03Z |
---
license: apache-2.0
---
|
omerbkts/blockassist-bc-keen_fast_giraffe_1756792457
|
omerbkts
| 2025-09-02T05:54:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:54:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayan01/Phi3-TL-OH-DKD-5
|
Sayan01
| 2025-09-02T05:53:50Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-21T00:32:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dhanyabahadur/interior-design-clip-vision-ip-adapter
|
dhanyabahadur
| 2025-09-02T05:53:50Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-08-21T13:35:22Z |
---
license: apache-2.0
---
|
klmdr22/blockassist-bc-wild_loud_newt_1756792243
|
klmdr22
| 2025-09-02T05:51:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild loud newt",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:51:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild loud newt
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
billelkhr/deberta-v3-sentiment-review-movie
|
billelkhr
| 2025-09-02T05:51:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"dataset:ajaykarthick/imdb-movie-reviews",
"arxiv:1910.09700",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-02T05:22:03Z |
---
library_name: transformers
datasets:
- ajaykarthick/imdb-movie-reviews
metrics:
- accuracy
- f1
base_model:
- microsoft/deberta-v3-base
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [billel khiri]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
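The card leaves this section empty; a minimal sketch assuming the checkpoint works with the standard text-classification pipeline (the review text is illustrative):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="billelkhr/deberta-v3-sentiment-review-movie")
print(clf("A beautifully shot film with a script that never quite lands."))
```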
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
95% accuracy
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756792225
|
omerbektass
| 2025-09-02T05:50:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:50:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Rudra-madlads/blockassist-bc-jumping_swift_gazelle_1756792180
|
Rudra-madlads
| 2025-09-02T05:50:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"jumping swift gazelle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:50:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- jumping swift gazelle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koloni/blockassist-bc-deadly_graceful_stingray_1756790565
|
koloni
| 2025-09-02T05:49:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:49:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
stewy33/cond_query_ptonly_mixed_original_augmented_original_actq_honeypot_ignore_comment-a4ecaa2b
|
stewy33
| 2025-09-02T05:47:53Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-09-02T05:45:40Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
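The card leaves this section empty; a minimal sketch for attaching the adapter to its base model with PEFT (note that loading a 70B base requires substantial GPU memory; device placement below is illustrative):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference"
adapter = "stewy33/cond_query_ptonly_mixed_original_augmented_original_actq_honeypot_ignore_comment-a4ecaa2b"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter)  # applies the LoRA weights on top of the base
```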
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
poyu39/wav2vec2-conformer-base_librispeech
|
poyu39
| 2025-09-02T05:46:07Z | 0 | 0 | null |
[
"wav2vec2",
"conformer",
"speech",
"feature-extraction",
"en",
"dataset:openslr/librispeech_asr",
"license:mit",
"region:us"
] |
feature-extraction
| 2025-09-02T05:09:02Z |
---
license: mit
pipeline_tag: feature-extraction
tags:
- wav2vec2
- conformer
- speech
datasets:
- openslr/librispeech_asr
language:
- en
---
# wav2vec2-conformer-base
## Dataset
- [librispeech](https://www.openslr.org/12)
## Framework
- [fairseq](https://github.com/facebookresearch/fairseq)
## Model Info
```
model:
_name: wav2vec2
quantize_targets: true
final_dim: 256
encoder_layerdrop: 0.05
dropout_input: 0.1
dropout_features: 0.1
feature_grad_mult: 0.1
encoder_layers: 12
encoder_embed_dim: 768
encoder_ffn_embed_dim: 3072
encoder_attention_heads: 12
layer_type: conformer
attn_type: espnet
pos_enc_type: rel_pos
```
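The card doesn't include a loading snippet; a hedged feature-extraction sketch with fairseq, assuming a standard wav2vec 2.0 checkpoint layout (the checkpoint filename is illustrative):
```python
import torch
from fairseq import checkpoint_utils

ckpt = "wav2vec2_conformer_base_librispeech.pt"  # illustrative path

models, cfg, task = checkpoint_utils.load_model_ensemble_and_task([ckpt])
model = models[0].eval()

wav = torch.randn(1, 16000)  # one second of dummy 16 kHz audio
with torch.no_grad():
    # features-only forward pass; "x" holds the contextual representations
    features = model.extract_features(wav, padding_mask=None, mask=False)["x"]
print(features.shape)
```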
This model is used in [emotion-conformer](https://github.com/poyu39/emotion-conformer).
|
amandacute/blockassist-bc-amphibious_plump_ram_1756791813
|
amandacute
| 2025-09-02T05:45:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious plump ram",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:44:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious plump ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Rudra-madlads/blockassist-bc-jumping_swift_gazelle_1756791717
|
Rudra-madlads
| 2025-09-02T05:42:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"jumping swift gazelle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:42:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- jumping swift gazelle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
klmdr22/blockassist-bc-wild_loud_newt_1756791705
|
klmdr22
| 2025-09-02T05:42:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild loud newt",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:42:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild loud newt
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
allenai/MolmoAct-7B-D-Pretrain-RT-1-0812
|
allenai
| 2025-09-02T05:41:54Z | 492 | 4 |
transformers
|
[
"transformers",
"safetensors",
"molmoact",
"image-text-to-text",
"molmo",
"olmo",
"reasoning",
"vla",
"robotics",
"manipulation",
"custom_code",
"en",
"arxiv:2508.07917",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-11T03:53:16Z |
---
base_model:
- Qwen/Qwen2.5-7B
- google/siglip2-so400m-patch14-384
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: robotics
tags:
- molmoact
- molmo
- olmo
- reasoning
- vla
- robotics
- manipulation
paper: 2508.07917
---
<img src="molmoact_logo.svg" alt="MolmoAct Logo" style="width: auto; height: 50px;">
# MolmoAct 7B-D Pretrain RT-1
MolmoAct is a fully open-source action reasoning model for robotic manipulation developed by the Allen Institute for AI, as described in their paper [MolmoAct: Action Reasoning Models that can Reason in Space](https://huggingface.co/papers/2508.07917).
MolmoAct is trained on a subset of OXE and the MolmoAct Dataset, a dataset of 10k high-quality trajectories of a single-arm Franka robot performing 93 unique manipulation tasks in both home and tabletop environments. It achieves state-of-the-art performance among vision-language-action models on multiple benchmarks while being fully open-source. You can find all models in the MolmoAct family [here](https://huggingface.co/collections/allenai/molmoact-689697591a3936fba38174d7).
**Learn more about MolmoAct** in our announcement [blog post](https://allenai.org/blog/molmoact) or the [paper](https://arxiv.org/abs/2508.07917).
**MolmoAct 7B-D Pretrain RT-1** is based on [Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) and uses [SigLip2](https://huggingface.co/google/siglip2-so400m-patch14-384) as the vision backbone, which is initialized using Molmo's pre-training approach. It is first pre-trained on MolmoAct's [Pre-training Mixture](https://huggingface.co/datasets/allenai/MolmoAct-Pretraining-Mixture), and then fine-tuned on RT-1 data using the same configuration as mid-training. This model is intended to be used for replicating our fine-tuned results on SimplerEnv (Google Robot).
This checkpoint is a **preview** of the MolmoAct release. All artifacts used in creating MolmoAct (data, training code, evaluations, intermediate checkpoints) will be made available at a later date, furthering our commitment to open-source AI development and reproducibility.
**Update:** Checkpoints are now stored in FP32 (previously BF16). The model was trained in FP32, so publishing FP32 weights aligns with training and enables fine-tuning or continued training directly from this repo. For inference, you can still run BF16 by casting at load, which is what we did for evaluations. See more in the [instructions](#quick-start) below.
Quick links:
- 📂 [All Models](https://huggingface.co/collections/allenai/molmoact-689697591a3936fba38174d7)
- 📂 [All Data](https://huggingface.co/collections/allenai/molmoact-data-mixture-6897e583e13b6c2cf3ea2b80)
- 📃 [Paper](https://arxiv.org/abs/2508.07917)
- 💻 [Code](https://github.com/allenai/MolmoAct)
- 🎥 [Blog Post](https://allenai.org/blog/molmoact)
- 🎥 [Video](https://youtu.be/-_wag1X25OE?si=Xi_kUaJTmcQBx1f6)
## Quick Start
To run MolmoAct, first install dependencies:
```bash
pip install einops torchvision accelerate
pip install transformers==4.52
```
Then, follow these steps:
```python
from transformers import AutoProcessor, AutoModelForImageTextToText
import torch
from PIL import Image
import requests
from io import BytesIO
ckpt = "allenai/MolmoAct-7B-D-Pretrain-RT-1-0812"
# load the processor
processor = AutoProcessor.from_pretrained(
ckpt,
trust_remote_code=True,
torch_dtype="bfloat16",
device_map="auto",
padding_side="left",
)
# load the model
model = AutoModelForImageTextToText.from_pretrained(
ckpt,
trust_remote_code=True,
torch_dtype="bfloat16",
device_map="auto",
)
# task instruction
instruction = "pick orange can"
# strictly follow this reasoning prompt
prompt = (
f"The task is {instruction}. "
"What is the action that the robot should take. "
f"To figure out the action that the robot should take to {instruction}, "
"let's think through it step by step. "
"First, what is the depth map for this image? "
"Second, what is the trajectory of the end effector? "
"Based on the depth map of the image and the trajectory of the end effector, "
"what is the action that the robot should take?"
)
# apply chat template
text = processor.apply_chat_template(
[
{
"role": "user",
"content": [dict(type="text", text=prompt)]
}
],
tokenize=False,
add_generation_prompt=True,
)
# image observation
url = "https://huggingface.co/allenai/MolmoAct-7B-D-Pretrain-0812/resolve/main/example.png"
r = requests.get(url, headers={"User-Agent": "python-requests"}, timeout=30)
r.raise_for_status()
img = Image.open(BytesIO(r.content)).convert("RGB")
imgs = [img]
# process the image and text
inputs = processor(
images=[imgs],
text=text,
padding=True,
return_tensors="pt",
)
# move inputs to the correct device
inputs = {k: v.to(model.device) for k, v in inputs.items()}
# generate output
with torch.inference_mode():
with torch.autocast("cuda", enabled=True, dtype=torch.bfloat16):
generated_ids = model.generate(**inputs, max_new_tokens=256)
# only get generated tokens; decode them to text
generated_tokens = generated_ids[:, inputs['input_ids'].size(1):]
generated_text = processor.batch_decode(generated_tokens, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
# print the generated text
print(f"generated text: {generated_text}")
# >>> The depth map of the image is ... The trajectory of the end effector is ...
# Based on these information, the action that the robot should take is ...
# parse out all depth perception tokens
depth = model.parse_depth(generated_text)
print(f"generated depth perception tokens: {depth}")
# >>> [ "<DEPTH_START><DEPTH_1><DEPTH_2>...<DEPTH_END>" ]
# parse out all visual reasoning traces
trace = model.parse_trace(generated_text)
print(f"generated visual reasoning trace: {trace}")
# >>> [ [[242, 115], [140, 77], [94, 58], [140, 44], [153, 26]]] ]
# parse out all actions, unnormalizing with key of fractal20220817_data
action = model.parse_action(generated_text, unnorm_key="fractal20220817_data")
print(f"generated action: {action}")
# >>> [ [0.0732076061122558, 0.08228153779226191, -0.027760173818644346,
# 0.15932856272248652, -0.09686601126895233, 0.043916773912953344,
# 0.996078431372549] ]
```
## License and Use
This model is licensed under Apache 2.0. It is intended for research and educational use.
For more information, please see our [Responsible Use Guidelines](https://allenai.org/responsible-use).
## Model and Hardware Safety
MolmoAct offers the ability to inspect a visual trace of its intended actions in space before they occur, allowing users to ensure safe behavior by proactively auditing and adjusting the actions of any hardware acting under the model’s instructions. MolmoAct’s action space is bounded within the data provided, and compliance is built into the model to prevent excessive force when resistance is detected. Please follow the hardware manufacturer’s guidelines when using this model with a robot and perform all operations in a safely configured environment.
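A minimal sketch for auditing the predicted trace by drawing it over the observation before executing any action (matplotlib is our choice here, and a 1:1 mapping between trace coordinates and the original image resolution is an assumption):
```python
# `trace` and `img` are the objects produced in the Quick Start above.
import matplotlib.pyplot as plt

points = trace[0]        # list of [x, y] points for the first parsed trace
xs, ys = zip(*points)    # assuming (x, y) pixel coordinates
plt.imshow(img)
plt.plot(xs, ys, "r-o", linewidth=2)
plt.title("Predicted end-effector trace")
plt.axis("off")
plt.show()
```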
## Citation
```bibtex
@misc{molmoact2025,
title={MolmoAct: Action Reasoning Models that can Reason in Space},
author={Jason Lee and Jiafei Duan and Haoquan Fang and Yuquan Deng and Shuo Liu and Boyang Li and Bohan Fang and Jieyu Zhang and Yi Ru Wang and Sangho Lee and Winson Han and Wilbert Pumacay and Angelica Wu and Rose Hendrix and Karen Farley and Eli VanderBilt and Ali Farhadi and Dieter Fox and Ranjay Krishna},
year={2025},
eprint={2508.07917},
archivePrefix={arXiv},
primaryClass={cs.RO},
url={https://arxiv.org/abs/2508.07917}
}
```
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756791505
|
omerbektass
| 2025-09-02T05:38:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:38:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yanTemp/qwen2-7b-instruct-trl-sft-ChartQA
|
yanTemp
| 2025-09-02T05:37:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-09-02T05:33:10Z |
---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: qwen2-7b-instruct-trl-sft-ChartQA
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2-7b-instruct-trl-sft-ChartQA
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="yanTemp/qwen2-7b-instruct-trl-sft-ChartQA", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
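A skeletal reconstruction of such an SFT run with TRL is sketched below. It is illustrative only: the actual ChartQA preprocessing, the image-aware data collator a VLM needs, and the hyperparameters are not documented in this card, and the dataset name is an assumption.
```python
from datasets import load_dataset
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from trl import SFTConfig, SFTTrainer

# Hypothetical setup; in practice ChartQA needs a vision-aware collator
# that formats (image, question, answer) triples, which is omitted here.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct", torch_dtype="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
dataset = load_dataset("HuggingFaceM4/ChartQA", split="train")

trainer = SFTTrainer(
    model=model,
    args=SFTConfig(output_dir="qwen2-7b-instruct-trl-sft-ChartQA"),
    train_dataset=dataset,
    processing_class=processor.tokenizer,
)
trainer.train()
```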
### Framework versions
- TRL: 0.22.1
- Transformers: 4.56.0
- Pytorch: 2.6.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
lemonhat/Llama-3.2-3B-t1_25k_v2_tag5_filtered_1
|
lemonhat
| 2025-09-02T05:36:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T05:35:22Z |
---
library_name: transformers
license: other
base_model: meta-llama/Llama-3.2-3B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: t1_25k_v2_tag5_filtered_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t1_25k_v2_tag5_filtered_1
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) on the t1_25k_v2_tag5_filtered_1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2674
## Model description
More information needed
## Intended uses & limitations
More information needed
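A minimal inference sketch (our addition, not from the original card; the prompt format used in training is not documented, so a plain completion prompt is assumed):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lemonhat/Llama-3.2-3B-t1_25k_v2_tag5_filtered_1"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

prompt = "Explain the difference between SFT and RLHF in two sentences."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tok.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```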
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4718 | 0.0760 | 100 | 0.4099 |
| 0.4313 | 0.1520 | 200 | 0.3438 |
| 0.326 | 0.2280 | 300 | 0.3237 |
| 0.3323 | 0.3040 | 400 | 0.3196 |
| 0.2892 | 0.3799 | 500 | 0.3080 |
| 0.2654 | 0.4559 | 600 | 0.3032 |
| 0.3163 | 0.5319 | 700 | 0.2991 |
| 0.314 | 0.6079 | 800 | 0.2969 |
| 0.3139 | 0.6839 | 900 | 0.2925 |
| 0.2463 | 0.7599 | 1000 | 0.2860 |
| 0.3159 | 0.8359 | 1100 | 0.2845 |
| 0.3072 | 0.9119 | 1200 | 0.2788 |
| 0.2791 | 0.9878 | 1300 | 0.2808 |
| 0.2539 | 1.0638 | 1400 | 0.2811 |
| 0.2283 | 1.1398 | 1500 | 0.2781 |
| 0.2519 | 1.2158 | 1600 | 0.2749 |
| 0.2474 | 1.2918 | 1700 | 0.2754 |
| 0.2683 | 1.3678 | 1800 | 0.2760 |
| 0.295 | 1.4438 | 1900 | 0.2723 |
| 0.2855 | 1.5198 | 2000 | 0.2707 |
| 0.2202 | 1.5957 | 2100 | 0.2695 |
| 0.2338 | 1.6717 | 2200 | 0.2695 |
| 0.2325 | 1.7477 | 2300 | 0.2693 |
| 0.2312 | 1.8237 | 2400 | 0.2671 |
| 0.2122 | 1.8997 | 2500 | 0.2677 |
| 0.2196 | 1.9757 | 2600 | 0.2675 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756791311
|
liukevin666
| 2025-09-02T05:36:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:36:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
maxibillion1975/blockassist-bc-iridescent_squeaky_sandpiper_1756789807
|
maxibillion1975
| 2025-09-02T05:35:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"iridescent squeaky sandpiper",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:35:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- iridescent squeaky sandpiper
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Rudra-madlads/blockassist-bc-jumping_swift_gazelle_1756791254
|
Rudra-madlads
| 2025-09-02T05:35:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"jumping swift gazelle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:34:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- jumping swift gazelle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
imikeliu/Hunyuan-MT-7B-Q8_0-GGUF
|
imikeliu
| 2025-09-02T05:29:49Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"translation",
"llama-cpp",
"gguf-my-repo",
"base_model:tencent/Hunyuan-MT-7B",
"base_model:quantized:tencent/Hunyuan-MT-7B",
"endpoints_compatible",
"region:us",
"conversational"
] |
translation
| 2025-09-02T05:29:10Z |
---
library_name: transformers
tags:
- translation
- llama-cpp
- gguf-my-repo
base_model: tencent/Hunyuan-MT-7B
---
# imikeliu/Hunyuan-MT-7B-Q8_0-GGUF
This model was converted to GGUF format from [`tencent/Hunyuan-MT-7B`](https://huggingface.co/tencent/Hunyuan-MT-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/tencent/Hunyuan-MT-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo imikeliu/Hunyuan-MT-7B-Q8_0-GGUF --hf-file hunyuan-mt-7b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo imikeliu/Hunyuan-MT-7B-Q8_0-GGUF --hf-file hunyuan-mt-7b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo imikeliu/Hunyuan-MT-7B-Q8_0-GGUF --hf-file hunyuan-mt-7b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo imikeliu/Hunyuan-MT-7B-Q8_0-GGUF --hf-file hunyuan-mt-7b-q8_0.gguf -c 2048
```
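You can also load the same GGUF file from Python with llama-cpp-python (a sketch; this package is our assumption and is not covered by the original card):
```python
# pip install llama-cpp-python huggingface-hub
from llama_cpp import Llama

# Downloads the GGUF file from the Hub and loads it locally.
llm = Llama.from_pretrained(
    repo_id="imikeliu/Hunyuan-MT-7B-Q8_0-GGUF",
    filename="hunyuan-mt-7b-q8_0.gguf",
    n_ctx=2048,
)
out = llm("Translate to French: The weather is nice today.", max_tokens=128)
print(out["choices"][0]["text"])
```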
|
aXsalll/blockassist-bc-chattering_galloping_ape_1756790732
|
aXsalll
| 2025-09-02T05:26:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"chattering galloping ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:25:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- chattering galloping ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
deckdima/Qwen3-0.6B-Gensyn-Swarm-shiny_trotting_pheasant
|
deckdima
| 2025-09-02T05:26:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am shiny_trotting_pheasant",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T05:25:57Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am shiny_trotting_pheasant
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Surbhit/mistral_7b_medical_t2
|
Surbhit
| 2025-09-02T05:25:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-02T05:25:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
matherchodhuuu/blockassist-bc-lightfooted_skilled_chameleon_1756790591
|
matherchodhuuu
| 2025-09-02T05:24:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lightfooted skilled chameleon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:24:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lightfooted skilled chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
klmdr22/blockassist-bc-wild_loud_newt_1756790627
|
klmdr22
| 2025-09-02T05:24:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild loud newt",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:24:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild loud newt
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbkts/blockassist-bc-keen_fast_giraffe_1756790603
|
omerbkts
| 2025-09-02T05:23:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T05:23:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|