modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC]: 2020-02-15 11:33:14 – 2025-09-02 12:32:32) | downloads (int64: 0–223M) | likes (int64: 0–11.7k) | library_name (534 classes) | tags (list, length 1–4.05k) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC]: 2022-03-02 23:29:04 – 2025-09-02 12:31:20) | card (string, length 11–1.01M)
---|---|---|---|---|---|---|---|---|---|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756809724
|
Ferdi3425
| 2025-09-02T10:43:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:42:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pidbu/blockassist-bc-whistling_alert_shrew_1756809653
|
pidbu
| 2025-09-02T10:42:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whistling alert shrew",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:41:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling alert shrew
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbkts/blockassist-bc-keen_fast_giraffe_1756809678
|
omerbkts
| 2025-09-02T10:41:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:41:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IRRI-SAH/Rice
|
IRRI-SAH
| 2025-09-02T10:40:10Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-02T10:40:10Z |
---
license: apache-2.0
---
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1756807989
|
lisaozill03
| 2025-09-02T10:38:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:38:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sonic-man/blockassist-bc-poisonous_graceful_cow_1756806907
|
Sonic-man
| 2025-09-02T10:37:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"poisonous graceful cow",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:37:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- poisonous graceful cow
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756809391
|
liukevin666
| 2025-09-02T10:37:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:37:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Exqrch/IndoDiscourse-ToxicityClassifier
|
Exqrch
| 2025-09-02T10:36:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-09-02T10:30:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
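Since the card leaves this section blank, here is a minimal, assumed sketch using the standard 🤗 transformers feature-extraction API; only the repo id and the BERT architecture come from this model's tags, and everything else is a generic default:
```python
from transformers import AutoTokenizer, AutoModel
import torch

model_id = "Exqrch/IndoDiscourse-ToxicityClassifier"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# Encode a sentence and mean-pool the last hidden state into one embedding
inputs = tokenizer("contoh kalimat", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # torch.Size([1, hidden_size])
```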
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
y1y2y3/so101_test4_act
|
y1y2y3
| 2025-09-02T10:36:07Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:y1y2y3/so101_test4",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-02T09:04:24Z |
---
datasets: y1y2y3/so101_test4
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- robotics
- act
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is a short version of how to train and run inference/evaluation:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
EmilRyd/gpt-oss-20b-aquarat-ground-truth-actually-on-policy-reasoning-1e5-stylized-1
|
EmilRyd
| 2025-09-02T10:35:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T10:33:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
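The card leaves this section blank; below is a minimal, assumed sketch using the generic transformers chat-template API (the repo id comes from this page; all other choices are illustrative defaults):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "EmilRyd/gpt-oss-20b-aquarat-ground-truth-actually-on-policy-reasoning-1e5-stylized-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Solve: if 3x + 5 = 20, what is x?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```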
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AnerYubo/blockassist-bc-reptilian_bellowing_cockroach_1756809317
|
AnerYubo
| 2025-09-02T10:35:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"reptilian bellowing cockroach",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:35:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- reptilian bellowing cockroach
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Maheentouqeer1/translation-model
|
Maheentouqeer1
| 2025-09-02T10:35:08Z | 29 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-ur",
"base_model:finetune:Helsinki-NLP/opus-mt-en-ur",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-01T14:58:55Z |
---
library_name: transformers
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-ur
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: translation-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# translation-model
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ur](https://huggingface.co/Helsinki-NLP/opus-mt-en-ur) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9117
- Bleu: 19.4975
## Model description
More information needed
## Intended uses & limitations
More information needed
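Although the card includes no usage code, a minimal sketch with the standard transformers pipeline API should apply, since this is a MarianMT fine-tune of Helsinki-NLP/opus-mt-en-ur (the example sentence is illustrative):
```python
from transformers import pipeline

# The pipeline picks up the Marian translation config from the checkpoint
translator = pipeline("translation", model="Maheentouqeer1/translation-model")
print(translator("How are you today?")[0]["translation_text"])
```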
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: ADAMW_TORCH_FUSED with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2285 | 1.0 | 2500 | 0.9034 | 21.0278 |
| 0.1742 | 2.0 | 5000 | 0.9117 | 19.4975 |
### Framework versions
- Transformers 4.56.0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
loopaz11/jchat-Llama-3.1-8B-Lexi-Uncensored-V2
|
loopaz11
| 2025-09-02T10:34:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2",
"base_model:finetune:Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T10:05:39Z |
---
base_model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** loopaz11
- **License:** apache-2.0
- **Finetuned from model:** Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF
|
mradermacher
| 2025-09-02T10:33:18Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s",
"base_model:quantized:EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-09-02T09:29:29Z |
---
base_model: EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
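As an illustrative sketch not found in the original card, a single-file quant from the table below can be loaded in Python with the llama-cpp-python bindings (the file name matches one of the listed quants; the local path is assumed):
```python
from llama_cpp import Llama

# Load a downloaded quant file; n_ctx sets the context window
llm = Llama(
    model_path="SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.i1-Q4_K_M.gguf",
    n_ctx=2048,
)
out = llm("Question: What is 12 * 7?\nAnswer:", max_tokens=32)
print(out["choices"][0]["text"])
```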
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.i1-IQ1_M.gguf) | i1-IQ1_M | 0.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.i1-IQ2_S.gguf) | i1-IQ2_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.i1-IQ2_M.gguf) | i1-IQ2_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.i1-Q2_K.gguf) | i1-Q2_K | 0.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.i1-IQ3_S.gguf) | i1-IQ3_S | 0.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.i1-IQ3_M.gguf) | i1-IQ3_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.1 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.i1-Q4_0.gguf) | i1-Q4_0 | 1.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.i1-Q4_1.gguf) | i1-Q4_1 | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.i1-Q6_K.gguf) | i1-Q6_K | 1.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ericson333/real_miss_satana
|
ericson333
| 2025-09-02T10:32:51Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-02T10:17:40Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: real_miss_satana
---
# Real_Miss_Satana
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `real_miss_satana` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "real_miss_satana",
"lora_weights": "https://huggingface.co/ericson333/real_miss_satana/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ericson333/real_miss_satana', weight_name='lora.safetensors')
image = pipeline('real_miss_satana').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/ericson333/real_miss_satana/discussions) to add images that show off what you’ve made with this LoRA.
|
IzzulGod/Sorachio-1B
|
IzzulGod
| 2025-09-02T10:32:09Z | 0 | 0 | null |
[
"safetensors",
"gemma3_text",
"conversational",
"multilingual",
"en",
"id",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"license:gemma",
"region:us"
] | null | 2025-09-02T10:31:03Z |
---
license: gemma
base_model:
- google/gemma-3-1b-it
tags:
- conversational
- multilingual
language:
- en
- id
---
# Sorachio-1B: Conversational AI Assistant
## Overview
Sorachio-1B is a fine-tuned conversational AI model built on Google's Gemma 3, optimized for multilingual dialogue and assistant-style tasks. This fine-tuning enhances the model's conversational tone and develops a distinctive persona for more engaging and natural interactions.
The model uses QLoRA (Quantized Low-Rank Adaptation) for efficient training with limited computational resources while preserving strong conversational abilities across multiple languages.
## Model Details
- **Base Model**: `google/gemma-3-1b-it`
- **Fine-tuning Method**: QLoRA (4-bit quantization + LoRA)
- **Model Size**: 1B parameters
- **Training Infrastructure**: Google Colab (T4 GPU)
- **Languages Supported**: Multilingual (leveraging Gemma's native multilingual capabilities)
## Conversational Enhancement
The fine-tuning process develops a distinctive conversational personality with several key characteristics:
**Persona Development**:
- Friendly and approachable tone that makes users comfortable
- Culturally adaptive responses, especially in Indonesian contexts
- Professional yet casual balance between helpfulness and relaxed interaction
- Emotionally aware understanding of conversational nuances
**Communication Style**:
- Natural speech patterns with colloquial expressions
- Contextually appropriate formality adjustment
- Empathetic responses with genuine interest in helping
- Consistent personality maintained across topics and languages
## Training Configuration
### Dataset
- **Size**: ~500,000 tokens of high-quality multi-turn conversational data
- **Content**: Several thousand conversation examples covering various topics and interaction patterns
- **Focus**: Multilingual conversations curated to reinforce consistent tone and personality traits
### QLoRA Setup
QLoRA combines 4-bit quantization with Low-Rank Adaptation, reducing memory requirements from ~18GB to ~9GB; a configuration sketch follows this list:
- **Precision**: 4-bit quantization (NF4 type) with double quantization
- **Compute Type**: Float16 for optimal performance
- **LoRA Rank**: 8 with Alpha 16
- **Target Modules**: All attention and MLP layers (`q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`)
- **Trainable Parameters**: 6,522,880 (0.65% of total)
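A minimal configuration sketch of the setup above, assuming the standard bitsandbytes and PEFT APIs; the values mirror the bullets, but this is illustrative rather than the exact training script:
```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization with double quantization, float16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA rank 8, alpha 16, applied to all attention and MLP projections
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```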
### Training Parameters
- **Epochs**: 3
- **Batch Size**: 1 per device with 8-step gradient accumulation (effective: 8)
- **Learning Rate**: 2e-4 with cosine scheduler and 0.1 warmup ratio
- **Optimizer**: Paged AdamW 8-bit with 0.01 weight decay
- **Dropout**: 0.05
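Under the same caveat, the training parameters above expressed with transformers' TrainingArguments (the output directory is hypothetical):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sorachio-1b-qlora",  # hypothetical path
    num_train_epochs=3,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,   # effective batch size 8
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="paged_adamw_8bit",
    weight_decay=0.01,
    fp16=True,
)
```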
## Training Results
The model showed consistent improvement with final training loss of 1.8821 after 492 steps across 3 epochs:

| Step | Training Loss | Step | Training Loss |
|------|---------------|------|---------------|
| 40 | 3.5990 | 320 | 2.0566 |
| 80 | 2.4357 | 360 | 1.9351 |
| 120 | 2.3329 | 400 | 1.9133 |
| 160 | 2.2877 | 440 | 1.8608 |
| 200 | 2.1108 | 480 | 1.8821 |
| 240 | 2.0195 | - | - |
| 280 | 2.0735 | - | - |
**Training Efficiency**:
- **Total Time**: 43 minutes 36 seconds (2,616.2 seconds)
- **Training Speed**: 1.499 samples/second, 0.188 steps/second
- **Final Training Loss**: 2.199 (average across all training)
- **Total FLOPs**: 6.66 × 10^15
The model achieved strong convergence with the loss stabilizing around 1.88 in the final steps, indicating successful adaptation to the conversational dataset.
## Usage
### Quick Start
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Load model and tokenizer
model_id = "IzzulGod/Sorachio-1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
torch_dtype=torch.float16,
attn_implementation="eager"
).eval()
# Prepare conversation
messages = [{"role": "user", "content": "Perkenalkan dirimu"}]
# Generate response
input_ids = tokenizer.apply_chat_template(
messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
with torch.no_grad():
outputs = model.generate(
input_ids=input_ids,
attention_mask=(input_ids != tokenizer.pad_token_id).long(),
max_new_tokens=256,
do_sample=True,
top_p=0.95,
temperature=0.7,
pad_token_id=tokenizer.eos_token_id
)
response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```
### Sample Output
> Halo! Aku Sorachio, asisten AI yang diciptakan oleh Idle Labs.
> Aku senang bisa bertemu denganmu, dan aku siap membantumu dengan apa pun yang kamu butuhkan — mulai dari menjawab pertanyaan, membuat cerita, sampai sekadar ngobrol santai.
>
> Aku bukan manusia, tapi aku berusaha hadir dengan cara yang ramah, akrab, dan mudah dipahami.
> Kalau kamu punya pertanyaan atau ingin ngobrol bareng, aku siap selalu! 😄
>
> *(English: Hello! I'm Sorachio, an AI assistant created by Idle Labs. I'm glad to meet you, and I'm ready to help with whatever you need, from answering questions and writing stories to just chatting casually. I'm not human, but I try to come across as friendly, warm, and easy to understand. If you have a question or just want to chat, I'm always here! 😄)*
## Model Capabilities
### Core Features
- **Multilingual Support**: English, Indonesian, and other Gemma-supported languages with cross-lingual understanding
- **Multi-turn Dialogue**: Context retention across extended conversations with natural dialogue flow
- **Persona Consistency**: Maintains friendly, culturally-aware character across all interactions
- **Safety**: Inherits safety features from base Gemma model
### Enhanced Characteristics
- **Emotional Intelligence**: Appropriate responses to different emotional contexts
- **Cultural Adaptation**: Communication style adapts to cultural expectations
- **Conversational Memory**: References earlier conversation parts effectively
- **Professional Boundaries**: Helpful assistant role while remaining personable
## Technical Requirements
### Hardware
- **GPU**: NVIDIA T4 (Google Colab free tier sufficient)
- **Memory**: ~9GB GPU memory with 4-bit quantization
- **Storage**: ~3GB for model checkpoints
### Dependencies
```bash
transformers>=4.40.0
peft>=0.10.0
bitsandbytes>=0.43.0
torch>=2.0.0
```
## Limitations
- **Context Window**: Limited to base model's context length
- **Domain Focus**: Optimized primarily for conversational tasks
- **Performance Variation**: May vary across different languages
- **Resource Requirements**: GPU recommended for optimal inference speed
## License
This model follows the licensing terms of the base Gemma model. Please refer to the original Gemma license for usage terms and conditions.
|
mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-GGUF
|
mradermacher
| 2025-09-02T10:32:02Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s",
"base_model:quantized:EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s",
"endpoints_compatible",
"region:us"
] | null | 2025-09-02T05:14:42Z |
---
base_model: EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.Q4_K_S.gguf) | Q4_K_S | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.Q5_K_S.gguf) | Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.Q5_K_M.gguf) | Q5_K_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.Q6_K.gguf) | Q6_K | 1.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s.f16.gguf) | f16 | 3.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
bah63843/blockassist-bc-plump_fast_antelope_1756809071
|
bah63843
| 2025-09-02T10:31:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:31:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
xinnn32/blockassist-bc-meek_winged_caterpillar_1756809034
|
xinnn32
| 2025-09-02T10:31:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek winged caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:31:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
duppbuy/blockassist-bc-pesty_scavenging_hare_1756809082
|
duppbuy
| 2025-09-02T10:31:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pesty scavenging hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:31:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty scavenging hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
karthickhere/blockassist-bc-voracious_quiet_bear_1756809010
|
karthickhere
| 2025-09-02T10:31:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"voracious quiet bear",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:31:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- voracious quiet bear
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756809055
|
omerbektass
| 2025-09-02T10:31:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:31:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
floraliuya/recft_unsloth-Meta-Llama-3.1-8B-2
|
floraliuya
| 2025-09-02T10:30:50Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-02T10:29:28Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** floraliuya
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
wasabuko/blockassist-bc-noisy_zealous_macaw_1756805111
|
wasabuko
| 2025-09-02T10:28:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"noisy zealous macaw",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:25:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- noisy zealous macaw
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
CobraEzek/ja-en-translation
|
CobraEzek
| 2025-09-02T10:28:13Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-02T10:08:44Z |
This model is an INT8-quantised version of the corresponding translation model by Helsinki-NLP. All credit for the original model goes to Helsinki-NLP.
|
CobraEzek/es-en-translation
|
CobraEzek
| 2025-09-02T10:28:00Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-02T10:08:49Z |
This model is an INT8-quantised version of the corresponding translation model by Helsinki-NLP. All credit for the original model goes to Helsinki-NLP.
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756808733
|
liukevin666
| 2025-09-02T10:26:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:26:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
CobraEzek/en-es-translation
|
CobraEzek
| 2025-09-02T10:26:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-02T10:09:05Z |
This model is an INT8-quantised version of the corresponding translation model by Helsinki-NLP. All credit for the original model goes to Helsinki-NLP.
|
bah63843/blockassist-bc-plump_fast_antelope_1756808713
|
bah63843
| 2025-09-02T10:26:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:26:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RoadToNowhere/Hunyuan-MT-7B-GGUF-F16
|
RoadToNowhere
| 2025-09-02T10:25:53Z | 0 | 0 | null |
[
"gguf",
"base_model:tencent/Hunyuan-MT-7B",
"base_model:quantized:tencent/Hunyuan-MT-7B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-02T10:15:33Z |
---
base_model:
- tencent/Hunyuan-MT-7B
---
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756808722
|
akirafudo
| 2025-09-02T10:25:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:25:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Mactavish1996/qwen-large-skills-finetuned
|
Mactavish1996
| 2025-09-02T10:25:40Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"qwen3",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:1396",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:Qwen/Qwen3-Embedding-0.6B",
"base_model:finetune:Qwen/Qwen3-Embedding-0.6B",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-02T10:23:33Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:1396
- loss:CosineSimilarityLoss
base_model: Qwen/Qwen3-Embedding-0.6B
widget:
- source_sentence: scikit-learn
sentences:
- backend development
- sap commerce
- python
- source_sentence: kubernetes
sentences:
- c++
- amazon
- Cryptography
- source_sentence: Object-Oriented Programming (OOP)
sentences:
- react
- vue.js
- Amazon EC2
- source_sentence: springboot
sentences:
- oracle db
- mysql
- salesforce commerce cloud
- source_sentence: nlp
sentences:
- google
- tableau
- transformers
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on Qwen/Qwen3-Embedding-0.6B
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) <!-- at revision c54f2e6e80b2d7b7de06f51cec4959f6b3e03418 -->
- **Maximum Sequence Length:** 32768 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 32768, 'do_lower_case': False, 'architecture': 'Qwen3Model'})
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Mactavish1996/qwen-large-skills-finetuned")
# Run inference
queries = [
"nlp",
]
documents = [
'tableau',
'google',
'transformers',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 1024] [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[0.1988, 0.2031, 0.6112]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 1,396 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 2 tokens</li><li>mean: 2.97 tokens</li><li>max: 10 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 2.99 tokens</li><li>max: 10 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.24</li><li>max: 0.98</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:-----------------------|:---------------------------|:------------------|
| <code>git</code> | <code>gitlab</code> | <code>0.7</code> |
| <code>Amazon S3</code> | <code>Agile</code> | <code>0.07</code> |
| <code>oracle db</code> | <code>elasticsearch</code> | <code>0.38</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
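As an illustrative sketch, not the original training script, a dataset with these columns can be fine-tuned with the sentence-transformers trainer roughly as follows (the sample rows come from the table above):
```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")
train_dataset = Dataset.from_dict({
    "sentence_0": ["git", "Amazon S3", "oracle db"],
    "sentence_1": ["gitlab", "Agile", "elasticsearch"],
    "label": [0.7, 0.07, 0.38],
})
loss = CosineSimilarityLoss(model)  # uses an MSE objective by default

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```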
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 8
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 8
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 5.6818 | 500 | 0.0187 |
### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.0
- Transformers: 4.55.4
- PyTorch: 2.8.0+cu126
- Accelerate: 1.10.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
saraparoji/trainedpolicy11smolvla
|
saraparoji
| 2025-09-02T10:25:32Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:saraparoji/dataset7",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-02T10:21:48Z |
---
base_model: lerobot/smolvla_base
datasets: saraparoji/dataset7
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- robotics
- lerobot
- smolvla
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python lerobot/scripts/train.py \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
*Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.*
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
* **License:** apache-2.0
|
Ace6868/brain-tumor-classifier
|
Ace6868
| 2025-09-02T10:24:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-02T10:24:19Z |
# Brain Tumor Classifier
A simple CNN model to classify brain MRI images as having a tumor or not.
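No usage code is provided, so below is a hypothetical loading sketch. The checkpoint filename (`model.pt`), the 224x224 input size, and the single tumor/no-tumor logit are all assumptions; adjust them to the repository's actual file layout.

```python
# Hypothetical inference sketch -- filenames and shapes are assumptions, not documented.
import torch
from huggingface_hub import hf_hub_download
from PIL import Image
from torchvision import transforms

ckpt = hf_hub_download(repo_id="Ace6868/brain-tumor-classifier", filename="model.pt")
model = torch.load(ckpt, map_location="cpu", weights_only=False)  # assumes a pickled nn.Module
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                # assumed input resolution
    transforms.Grayscale(num_output_channels=3),  # MRI scans are often single-channel
    transforms.ToTensor(),
])
img = preprocess(Image.open("mri_scan.png")).unsqueeze(0)  # add batch dimension
with torch.no_grad():
    prob = torch.sigmoid(model(img)).item()  # assumes one tumor/no-tumor logit
print("tumor" if prob > 0.5 else "no tumor", round(prob, 3))
```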
|
kittygirlhere/blockassist-bc-twitchy_beaked_coral_1756808533
|
kittygirlhere
| 2025-09-02T10:22:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"twitchy beaked coral",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:22:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- twitchy beaked coral
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akhil0238/MyGemmaNPC
|
akhil0238
| 2025-09-02T10:22:45Z | 14 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-27T09:35:07Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="akhil0238/MyGemmaNPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.1
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
pidbu/blockassist-bc-whistling_alert_shrew_1756808346
|
pidbu
| 2025-09-02T10:20:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whistling alert shrew",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:19:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling alert shrew
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Mitchins/t5-base-artgen-multi-instruct
|
Mitchins
| 2025-09-02T10:20:23Z | 0 | 0 | null |
[
"safetensors",
"t5",
"text2text-generation",
"prompt-enhancement",
"ai-art",
"image-generation",
"prompt-engineering",
"stable-diffusion",
"midjourney",
"dall-e",
"text-generation",
"en",
"dataset:custom",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"model-index",
"region:us"
] |
text-generation
| 2025-09-02T10:09:29Z |
---
license: apache-2.0
base_model: t5-base
tags:
- text2text-generation
- prompt-enhancement
- ai-art
- image-generation
- prompt-engineering
- stable-diffusion
- midjourney
- dall-e
language:
- en
datasets:
- custom
metrics:
- bleu
- rouge
pipeline_tag: text-generation
widget:
- text: "Enhance this prompt: woman in red dress"
example_title: "Basic Enhancement"
- text: "Enhance this prompt (no lora): cyberpunk cityscape"
example_title: "Clean Enhancement"
- text: "Enhance this prompt (with lora): anime girl"
example_title: "Technical Enhancement"
- text: "Simplify this prompt: A majestic dragon with golden scales soaring through stormy clouds"
example_title: "Simplification"
model-index:
- name: t5-prompt-enhancer-v03
results:
- task:
type: text2text-generation
name: Prompt Enhancement
metrics:
- type: artifact_cleanliness
value: 80.0
name: Clean Output Rate
- type: instruction_coverage
value: 4
name: Instruction Types
---
# 🎨 T5 Prompt Enhancer V0.3
**The most advanced AI art prompt enhancement model with quad-instruction capability and LoRA control.**
Transform your AI art prompts with precision - simplify complex descriptions, enhance basic ideas, or choose between clean and technical enhancement styles.
## 🚀 Quick Start
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
import torch
# Load model
model = T5ForConditionalGeneration.from_pretrained("Mitchins/t5-base-artgen-multi-instruct")
tokenizer = T5Tokenizer.from_pretrained("Mitchins/t5-base-artgen-multi-instruct")
def enhance_prompt(text, style="clean"):
    """Enhanced prompt generation with style control"""
    if style == "clean":
        prompt = f"Enhance this prompt (no lora): {text}"
    elif style == "technical":
        prompt = f"Enhance this prompt (with lora): {text}"
    elif style == "simplify":
        prompt = f"Simplify this prompt: {text}"
    else:
        prompt = f"Enhance this prompt: {text}"

    inputs = tokenizer(prompt, return_tensors="pt", max_length=256, truncation=True)
    with torch.no_grad():
        outputs = model.generate(
            inputs.input_ids,
            max_length=80,
            num_beams=2,
            repetition_penalty=2.0,
            no_repeat_ngram_size=3
        )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
# Examples
print(enhance_prompt("woman in red dress", "clean"))
# Output: "a beautiful woman in a red dress with flowing hair, elegant pose, soft lighting"
print(enhance_prompt("anime girl", "technical"))
# Output: "masterpiece, best quality, 1girl, solo, anime style, detailed background"
print(enhance_prompt("A majestic dragon with golden scales soaring through stormy clouds", "simplify"))
# Output: "dragon flying through clouds"
```
## ✨ Key Features
### 🔄 **Quad-Instruction Capability**
- **Simplify:** Reduce complex prompts to essential elements
- **Enhance:** Standard prompt improvement with balanced detail
- **Enhance (no lora):** Clean enhancement without technical artifacts
- **Enhance (with lora):** Technical enhancement with LoRA tags and quality descriptors
### 🎯 **Precision Control**
- Choose exactly the enhancement style you need
- Clean outputs for general use
- Technical outputs for advanced AI art workflows
- Bidirectional transformation (complex ↔ simple)
### 📊 **Training Excellence**
- **297K training samples** from 6 major AI art platforms
- **Subject diversity protection** prevents AI art bias
- **Platform-balanced training** across Lexica, CGDream, Civitai, NightCafe, Kling, OpenArt
- **Smart data utilization** - uses both original and cleaned versions of prompts
## 🎭 Model Capabilities
### Enhancement Examples
| Input | Output Style | Result |
|-------|-------------|---------|
| "woman in red dress" | **Clean** | "a beautiful woman in a red dress with flowing hair, elegant pose, soft lighting" |
| "woman in red dress" | **Technical** | "masterpiece, best quality, 1girl, solo, red dress, detailed background, high resolution" |
| "Complex Victorian description..." | **Simplify** | "woman in red dress in ballroom" |
| "cat" | **Standard** | "cat sitting peacefully, photorealistic, detailed fur texture" |
### Instruction Format
```python
# Four supported instruction types:
"Enhance this prompt: {basic_prompt}" # Balanced enhancement
"Enhance this prompt (no lora): {basic_prompt}" # Clean, artifact-free
"Enhance this prompt (with lora): {basic_prompt}" # Technical with LoRA tags
"Simplify this prompt: {complex_prompt}" # Complexity reduction
```
## 📈 Performance Metrics
### Training Statistics
- **Training Samples:** 297,282 (filtered from 316K)
- **Training Time:** 131 hours on RTX 3060
- **Final Loss:** 3.66
- **Model Size:** 222M parameters
- **Vocabulary:** 32,104 tokens
### Instruction Distribution
- **Enhance (no lora):** 32.6% (96,934 samples)
- **Enhance (standard):** 32.6% (96,907 samples)
- **Simplify:** 29.5% (87,553 samples)
- **Enhance (with lora):** 5.3% (15,888 samples)
### Platform Coverage
- **CGDream:** 94,362 samples (31.7%)
- **Lexica:** 75,142 samples (25.3%)
- **Civitai:** 66,880 samples (22.5%)
- **NightCafe:** 49,881 samples (16.8%)
- **Kling:** 10,179 samples (3.4%)
- **OpenArt:** 838 samples (0.3%)
## 🎯 Use Cases
### For Content Creators
```python
# Simplify complex prompts for broader audiences
enhance_prompt("masterpiece, ultra-detailed render of cyberpunk scene...", "simplify")
# → "cyberpunk city street at night"
```
### For AI Artists
```python
# Clean enhancement for professional work
enhance_prompt("sunset landscape", "clean")
# → "breathtaking sunset over rolling hills with golden light and dramatic clouds"
# Technical enhancement for specific workflows
enhance_prompt("anime character", "technical")
# → "masterpiece, best quality, 1girl, solo, anime style, detailed background"
```
### For Prompt Engineers
```python
# Bidirectional optimization
basic = "cat on chair"
enhanced = enhance_prompt(basic, "clean")
simplified = enhance_prompt(enhanced, "simplify")
# Optimize prompt complexity iteratively
```
## 🔧 Advanced Usage
### Custom Generation Parameters
```python
def generate_with_control(text, style="clean", creativity=0.7):
    """Advanced generation with creativity control"""
    style_prompts = {
        "clean": f"Enhance this prompt (no lora): {text}",
        "technical": f"Enhance this prompt (with lora): {text}",
        "simplify": f"Simplify this prompt: {text}",
        "standard": f"Enhance this prompt: {text}"
    }
    inputs = tokenizer(style_prompts[style], return_tensors="pt")

    if creativity > 0.5:
        # Creative mode: sample with temperature/top-p
        outputs = model.generate(
            inputs.input_ids,
            max_length=100,
            do_sample=True,
            temperature=creativity,
            top_p=0.9,
            repetition_penalty=1.5
        )
    else:
        # Deterministic mode: beam search
        outputs = model.generate(
            inputs.input_ids,
            max_length=80,
            num_beams=2,
            repetition_penalty=2.0,
            no_repeat_ngram_size=3
        )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```
### Batch Processing
```python
def batch_enhance(prompts, style="clean"):
    """Process multiple prompts efficiently"""
    style_prefixes = {
        "clean": "Enhance this prompt (no lora): ",
        "technical": "Enhance this prompt (with lora): ",
        "simplify": "Simplify this prompt: ",
        "standard": "Enhance this prompt: ",
    }
    prefix = style_prefixes.get(style, style_prefixes["standard"])
    prefixed_prompts = [prefix + prompt for prompt in prompts]
    inputs = tokenizer(prefixed_prompts, return_tensors="pt", padding=True, truncation=True)
    outputs = model.generate(
        inputs.input_ids,
        attention_mask=inputs.attention_mask,  # required with padded batches
        max_length=80,
        num_beams=2,
        repetition_penalty=2.0,
        pad_token_id=tokenizer.pad_token_id
    )
    return [tokenizer.decode(output, skip_special_tokens=True) for output in outputs]
```
## 🔍 Model Comparison
| Feature | V0.1 | V0.2 | **V0.3** |
|---------|------|------|----------|
| **Training Data** | 48K | 174K | **297K** |
| **Instructions** | Enhancement only | Simplify + Enhance | **Quad-instruction** |
| **LoRA Handling** | Contaminated | Contaminated | **Controlled** |
| **Artifact Control** | None | None | **Explicit** |
| **Platform Coverage** | Limited | Good | **Comprehensive** |
| **User Control** | Basic | Moderate | **Complete** |
## 🛠️ Technical Details
### Architecture
- **Base Model:** T5-base (Google)
- **Parameters:** 222,885,120
- **Special Tokens:** `<simplify>`, `<enhance>`, `<no_lora>`, `<with_lora>`
- **Max Input Length:** 256 tokens
- **Max Output Length:** 512 tokens
### Training Configuration
- **Epochs:** 3
- **Batch Size:** 8 per device (effective: 16 with gradient accumulation)
- **Learning Rate:** 3e-4 with cosine scheduling
- **Optimization:** FP16 mixed precision, gradient checkpointing
- **Hardware:** Trained on RTX 3060 (131 hours)
### Data Sources
Training data collected from:
- **Lexica** - Stable Diffusion prompt database
- **CGDream** - AI art community platform
- **Civitai** - Model sharing and prompt community
- **NightCafe** - AI art creation platform
- **Kling AI** - Text-to-image generation service
- **OpenArt** - AI art discovery platform
## ⚙️ Recommended Parameters
### For Consistent Results
```python
generation_config = {
"max_length": 80,
"num_beams": 2,
"repetition_penalty": 2.0,
"no_repeat_ngram_size": 3
}
```
### For Creative Variation
```python
creative_config = {
"max_length": 100,
"do_sample": True,
"temperature": 0.7,
"top_p": 0.9,
"repetition_penalty": 1.3
}
```
## 🚨 Limitations
- **English Only:** Trained exclusively on English prompts
- **AI Art Domain:** Specialized for AI art prompts, may not generalize to other domains
- **LoRA Artifacts:** Technical enhancement mode may include platform-specific tags
- **Context Length:** Limited to 256 input tokens
- **Platform Bias:** Training data reflects current AI art platform distributions
## 📊 Evaluation Results
### Artifact Cleanliness
- **V0.1:** 100% clean (limited capability)
- **V0.2:** 80% clean (uncontrolled artifacts)
- **V0.3:** 80% clean + **user control** over artifact inclusion
### Instruction Coverage
- **Simplification:** ✅ Excellent (V0.2 level performance)
- **Standard Enhancement:** ✅ Good balance of detail and clarity
- **Clean Enhancement:** ✅ No technical artifacts when requested
- **Technical Enhancement:** ✅ Proper LoRA tags when requested
## 🎨 Example Workflows
### Content Creator Workflow
```python
# Start with basic idea
idea = "fantasy castle"
# Create clean version for general audience
clean_version = enhance_prompt(idea, "clean")
# → "A majestic fantasy castle with towering spires and magical aura"
# Create detailed version for AI art generation
detailed_version = enhance_prompt(idea, "technical")
# → "masterpiece, fantasy castle, detailed architecture, magical atmosphere, high quality"
```
### Prompt Engineering Workflow
```python
# Iterative refinement
original = "A complex, detailed description of a beautiful woman..."
simplified = enhance_prompt(original, "simplify")
# → "beautiful woman portrait"
refined = enhance_prompt(simplified, "clean")
# → "elegant woman portrait with soft lighting and natural beauty"
```
## 📚 Training Data Details
### Subject Diversity Protection
Applied during training to prevent AI art bias (a minimal sampling sketch follows the list):
- Female subjects: 20% max (reduced from typical 35%+ in raw data)
- "Beautiful" descriptor: 6% max
- Anime style: 10% max
- Dress/clothing focus: 8% max
- LoRA contaminated samples: 15% max
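As an illustration only (not the actual pipeline; `attr_fn` is a hypothetical attribute detector), such caps could be enforced during sampling like this:

```python
# Illustrative sketch of diversity-capped sampling; caps mirror the percentages above.
import random

CAPS = {"female_subject": 0.20, "beautiful": 0.06,
        "anime": 0.10, "dress_focus": 0.08, "lora_contaminated": 0.15}

def diversity_capped_sample(samples, target_size, attr_fn):
    """Keep at most CAPS[attr] * target_size samples per capped attribute."""
    pool = list(samples)
    random.shuffle(pool)  # avoid order bias
    counts = {attr: 0 for attr in CAPS}
    kept = []
    for s in pool:
        attrs = attr_fn(s) & CAPS.keys()  # capped attributes present in this sample
        if any(counts[a] + 1 > CAPS[a] * target_size for a in attrs):
            continue  # adding s would breach a cap
        kept.append(s)
        for a in attrs:
            counts[a] += 1
        if len(kept) == target_size:
            break
    return kept
```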
### Data Processing Pipeline
1. **Collection:** Multi-platform scraping with quality filtering
2. **Cleaning:** LoRA artifact detection and removal
3. **Enhancement:** BLIP2 visual captioning for training pairs
4. **Protection:** Subject diversity sampling to prevent bias
5. **Balancing:** Equal distribution across instruction types
## 🔬 Research Applications
### Prompt Engineering Research
- Systematic prompt transformation studies
- Enhancement vs simplification trade-offs
- Cross-platform prompt adaptation
### AI Art Bias Studies
- Diversity-protected training methodologies
- Platform-specific prompt pattern analysis
- Controlled artifact generation studies
### Multi-Modal AI Research
- Text-to-image prompt optimization
- Cross-modal content adaptation
- User preference modeling for prompt styles
## 📄 Citation
```bibtex
@misc{t5_prompt_enhancer_v03,
  title={T5 Prompt Enhancer V0.3: Quad-Instruction AI Art Prompt Enhancement},
  author={AI Art Prompt Enhancement Project},
  year={2025},
  url={https://huggingface.co/Mitchins/t5-base-artgen-multi-instruct},
  note={T5-base model fine-tuned for quad-instruction AI art prompt enhancement with LoRA control; trained on 297K samples from 6 AI art platforms}
}
```
## 🤝 Community
### Contributing
- **Data Quality:** Help improve training data quality
- **Evaluation:** Contribute evaluation prompts and test cases
- **Multi-language:** Expand to non-English prompts
- **Platform Coverage:** Add new AI art platforms
### Support
- **Issues:** Report bugs and feature requests
- **Discussions:** Share use cases and improvements
- **Examples:** Contribute workflow examples
## 🎯 Version History
### V0.3 (Current) - September 2025
- ✅ Quad-instruction capability (4 instruction types)
- ✅ LoRA artifact control
- ✅ 297K training samples with diversity protection
- ✅ Enhanced platform coverage
- ✅ Smart data utilization (original + cleaned versions)
### V0.2 - August 2025
- ✅ Bidirectional capability (simplify + enhance)
- ✅ 174K training samples
- ⚠️ Uncontrolled LoRA artifacts
### V0.1 - July 2025
- ✅ Basic enhancement capability
- ✅ 48K training samples
- ❌ Enhancement only, no simplification
## 🔮 Future Roadmap
### V0.4 (Planned)
- [ ] Multi-language support (Spanish, French, German)
- [ ] Style-specific enhancement (realistic, anime, artistic)
- [ ] Platform-aware generation
- [ ] Quality scoring integration
### V0.5 (Future)
- [ ] Multi-modal input support
- [ ] Real-time prompt optimization
- [ ] User preference learning
- [ ] Cross-platform prompt translation
## 📊 Performance Benchmarks
### Speed
- **Inference Time:** ~0.5-2.0 seconds per prompt (RTX 3060)
- **Memory Usage:** ~2GB VRAM for inference
- **Throughput:** ~30-60 prompts/minute depending on complexity
### Quality Metrics
- **Simplification Accuracy:** 95%+ core element preservation
- **Enhancement Quality:** Rich detail addition without over-complication
- **Artifact Control:** 80%+ clean outputs when requested
- **Instruction Following:** 98%+ correct instruction interpretation
## 🏷️ Tags
`text2text-generation` `prompt-enhancement` `ai-art` `stable-diffusion` `midjourney` `dall-e` `prompt-engineering` `lora-control` `bidirectional` `artifact-cleaning`
---
**🎨 Built for the AI art community - Transform your prompts with precision and control!**
*Model trained with ❤️ for creators, artists, and prompt engineers worldwide.*
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756808363
|
akirafudo
| 2025-09-02T10:19:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:19:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
GroomerG/blockassist-bc-vicious_pawing_badger_1756807150
|
GroomerG
| 2025-09-02T10:19:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vicious pawing badger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:19:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vicious pawing badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
klmdr22/blockassist-bc-wild_loud_newt_1756808296
|
klmdr22
| 2025-09-02T10:18:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild loud newt",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:18:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild loud newt
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cwayneconnor/blockassist-bc-mute_loud_lynx_1756807964
|
cwayneconnor
| 2025-09-02T10:18:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute loud lynx",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:15:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute loud lynx
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
arturkakraft/blockassist-bc-arctic_purring_camel_1756807083
|
arturkakraft
| 2025-09-02T10:18:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"arctic purring camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:18:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- arctic purring camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756808232
|
omerbektass
| 2025-09-02T10:17:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:17:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yaelahnal/blockassist-bc-mute_clawed_crab_1756808092
|
yaelahnal
| 2025-09-02T10:17:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:15:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Wunderlife/urctest
|
Wunderlife
| 2025-09-02T10:17:32Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-02T09:32:47Z |
---
base_model: black-forest-labs/FLUX.1-dev
library_name: diffusers
license: other
instance_prompt: urc
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux DreamBooth LoRA - Wunderlife/urctest
<Gallery />
## Model description
These are Wunderlife/urctest DreamBooth LoRA weights for black-forest-labs/FLUX.1-dev.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Flux diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md).
LoRA for the text encoder was not enabled.
Pivotal tuning was enabled.
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Download model
[Download the *.safetensors LoRA](https://huggingface.co/Wunderlife/urctest/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('Wunderlife/urctest', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='Wunderlife/urctest', filename='urctest_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
image = pipeline('urc').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
Nerva1228/miding
|
Nerva1228
| 2025-09-02T10:17:13Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-02T10:17:12Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: miding
---
# Miding
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `miding` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "miding",
"lora_weights": "https://huggingface.co/Nerva1228/miding/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/miding', weight_name='lora.safetensors')
image = pipeline('miding').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Nerva1228/miding/discussions) to add images that show off what you’ve made with this LoRA.
|
pidbu/blockassist-bc-whistling_alert_shrew_1756807957
|
pidbu
| 2025-09-02T10:13:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whistling alert shrew",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:13:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling alert shrew
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Egor-N/blockassist-bc-vicious_stubby_bear_1756806742
|
Egor-N
| 2025-09-02T10:13:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vicious stubby bear",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:13:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vicious stubby bear
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
chrisrutherford/Qwen3-14B-PumlGenV3
|
chrisrutherford
| 2025-09-02T10:13:40Z | 0 | 0 | null |
[
"safetensors",
"qwen3",
"license:apache-2.0",
"region:us"
] | null | 2025-09-02T10:01:02Z |
---
license: apache-2.0
---
|
xinnn32/blockassist-bc-meek_winged_caterpillar_1756807925
|
xinnn32
| 2025-09-02T10:13:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek winged caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:13:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
giovannidemuri/llama8b-er-v540-seed2-hx_lora
|
giovannidemuri
| 2025-09-02T10:11:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T08:08:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
happyensworld/blockassist-bc-sleek_scavenging_ram_1756807798
|
happyensworld
| 2025-09-02T10:11:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sleek scavenging ram",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:11:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sleek scavenging ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
david3621/blockassist-bc-gentle_meek_cat_1756806804
|
david3621
| 2025-09-02T10:10:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle meek cat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:09:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle meek cat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tencent/Hunyuan-MT-7B
|
tencent
| 2025-09-02T10:09:41Z | 487 | 344 |
transformers
|
[
"transformers",
"safetensors",
"hunyuan_v1_dense",
"text-generation",
"translation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2025-08-28T09:51:39Z |
---
library_name: transformers
tags:
- translation
---
<p align="center">
<img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
</p><p></p>
<p align="center">
🤗 <a href="https://huggingface.co/collections/tencent/hunyuan-mt-68b42f76d473f82798882597"><b>Hugging Face</b></a> |
🤖 <a href="https://modelscope.cn/collections/Hunyuan-MT-2ca6b8e1b4934f"><b>ModelScope</b></a> |
</p>
<p align="center">
🖥️ <a href="https://hunyuan.tencent.com"><b>Official Website</b></a> |
🕹️ <a href="https://hunyuan.tencent.com/modelSquare/home/list"><b>Demo</b></a>
</p>
<p align="center">
<a href="https://github.com/Tencent-Hunyuan/Hunyuan-MT"><b>GITHUB</b></a>
</p>
## Model Introduction
The Hunyuan Translation Model comprises a translation model, Hunyuan-MT-7B, and an ensemble model, Hunyuan-MT-Chimera. The translation model is used to translate source text into the target language, while the ensemble model integrates multiple translation outputs to produce a higher-quality result. It primarily supports mutual translation among 33 languages, including five ethnic minority languages in China.
### Key Features and Advantages
- In the WMT25 competition, the model achieved first place in 30 out of the 31 language categories it participated in.
- Hunyuan-MT-7B achieves industry-leading performance among models of comparable scale
- Hunyuan-MT-Chimera-7B is the industry’s first open-source translation ensemble model, elevating translation quality to a new level
- A comprehensive training framework for translation models has been proposed, spanning pretraining → cross-lingual pretraining (CPT) → supervised fine-tuning (SFT) → translation enhancement → ensemble refinement, and achieving state-of-the-art (SOTA) results for models of similar size
## Related News
* 2025.9.1 We have open-sourced **Hunyuan-MT-7B** and **Hunyuan-MT-Chimera-7B** on Hugging Face.
<br>
## Model Links
| Model Name | Description | Download |
| ----------- | ----------- | ----------- |
| Hunyuan-MT-7B | Hunyuan 7B translation model | 🤗 [Model](https://huggingface.co/tencent/Hunyuan-MT-7B) |
| Hunyuan-MT-7B-fp8 | Hunyuan 7B translation model, fp8 quant | 🤗 [Model](https://huggingface.co/tencent/Hunyuan-MT-7B-fp8) |
| Hunyuan-MT-Chimera | Hunyuan 7B translation ensemble model | 🤗 [Model](https://huggingface.co/tencent/Hunyuan-MT-Chimera-7B) |
| Hunyuan-MT-Chimera-fp8 | Hunyuan 7B translation ensemble model, fp8 quant | 🤗 [Model](https://huggingface.co/tencent/Hunyuan-MT-Chimera-7B-fp8) |
## Prompts
### Prompt Template for ZH<=>XX Translation.
```
把下面的文本翻译成<target_language>,不要额外解释。
<source_text>
```
### Prompt Template for XX<=>XX Translation, excluding ZH<=>XX.
```
Translate the following segment into <target_language>, without additional explanation.
<source_text>
```
### Prompt Template for Hunyuan-MT-Chimera-7B
````
Analyze the following multiple <target_language> translations of the <source_language> segment surrounded in triple backticks and generate a single refined <target_language> translation. Only output the refined translation, do not explain.
The <source_language> segment:
```<source_text>```
The multiple <target_language> translations:
1. ```<translated_text1>```
2. ```<translated_text2>```
3. ```<translated_text3>```
4. ```<translated_text4>```
5. ```<translated_text5>```
6. ```<translated_text6>```
````
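For convenience, the template can be filled programmatically. A minimal sketch (the helper name is ours, not part of the release):

```python
def build_chimera_prompt(src_lang, tgt_lang, source_text, candidates):
    """Fill the Hunyuan-MT-Chimera-7B ensemble template with candidate translations."""
    lines = [
        f"Analyze the following multiple {tgt_lang} translations of the {src_lang} "
        f"segment surrounded in triple backticks and generate a single refined "
        f"{tgt_lang} translation. Only output the refined translation, do not explain.",
        "",
        f"The {src_lang} segment:",
        f"```{source_text}```",
        "",
        f"The multiple {tgt_lang} translations:",
    ]
    lines += [f"{i}. ```{c}```" for i, c in enumerate(candidates, start=1)]
    return "\n".join(lines)
```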
### Use with transformers
First, install transformers (v4.56.0 is recommended):
```SHELL
pip install transformers==4.56.0
```
The following code snippet shows how to use the transformers library to load and apply the model.
*!!! If you want to load the fp8 model with transformers, you need to rename the `ignored_layers` key in `config.json` to `ignore` and upgrade compressed-tensors to v0.11.0.*
We use `tencent/Hunyuan-MT-7B` in the example below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import os
model_name_or_path = "tencent/Hunyuan-MT-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto") # You may want to use bfloat16 and/or move to GPU here
messages = [
{"role": "user", "content": "Translate the following segment into Chinese, without additional explanation.\n\nIt’s on the house."},
]
tokenized_chat = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=False,
    return_tensors="pt"
)
outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=2048)
output_text = tokenizer.decode(outputs[0])
```
We recommend the following parameters for inference. Note that the model does not use a default system prompt.
```json
{
"top_k": 20,
"top_p": 0.6,
"repetition_penalty": 1.05,
"temperature": 0.7
}
```
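Continuing the snippet above, these values can be passed straight to `generate`; note that `do_sample=True` (our addition, not from the table) is required for the sampling parameters to take effect:

```python
outputs = model.generate(
    tokenized_chat.to(model.device),
    max_new_tokens=2048,
    do_sample=True,            # enables sampling so temperature/top_p/top_k apply
    top_k=20,
    top_p=0.6,
    repetition_penalty=1.05,
    temperature=0.7,
)
output_text = tokenizer.decode(outputs[0])
```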
Supported languages:
| Languages | Abbr. | Chinese Names |
|-------------------|---------|-----------------|
| Chinese | zh | 中文 |
| English | en | 英语 |
| French | fr | 法语 |
| Portuguese | pt | 葡萄牙语 |
| Spanish | es | 西班牙语 |
| Japanese | ja | 日语 |
| Turkish | tr | 土耳其语 |
| Russian | ru | 俄语 |
| Arabic | ar | 阿拉伯语 |
| Korean | ko | 韩语 |
| Thai | th | 泰语 |
| Italian | it | 意大利语 |
| German | de | 德语 |
| Vietnamese | vi | 越南语 |
| Malay | ms | 马来语 |
| Indonesian | id | 印尼语 |
| Filipino | tl | 菲律宾语 |
| Hindi | hi | 印地语 |
| Traditional Chinese | zh-Hant| 繁体中文 |
| Polish | pl | 波兰语 |
| Czech | cs | 捷克语 |
| Dutch | nl | 荷兰语 |
| Khmer | km | 高棉语 |
| Burmese | my | 缅甸语 |
| Persian | fa | 波斯语 |
| Gujarati | gu | 古吉拉特语 |
| Urdu | ur | 乌尔都语 |
| Telugu | te | 泰卢固语 |
| Marathi | mr | 马拉地语 |
| Hebrew | he | 希伯来语 |
| Bengali | bn | 孟加拉语 |
| Tamil | ta | 泰米尔语 |
| Ukrainian | uk | 乌克兰语 |
| Tibetan | bo | 藏语 |
| Kazakh | kk | 哈萨克语 |
| Mongolian | mn | 蒙古语 |
| Uyghur | ug | 维吾尔语 |
| Cantonese | yue | 粤语 |
Citing Hunyuan-MT:
```bibtex
@misc{hunyuanmt2025,
title={Hunyuan-MT Technical Report},
author={Mao Zheng, Zheng Li, Bingxin Qu, Mingyang Song, Yang Du, Mingrui Sun, Di Wang, Tao Chen, Jiaqi Zhu, Xingwu Sun, Yufei Wang, Can Xu, Chen Li, Kai Wang, Decheng Wu},
howpublished={\url{https://github.com/Tencent-Hunyuan/Hunyuan-MT}},
year={2025}
}
```
|
omerbkts/blockassist-bc-keen_fast_giraffe_1756807743
|
omerbkts
| 2025-09-02T10:09:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:09:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pidbu/blockassist-bc-whistling_alert_shrew_1756807650
|
pidbu
| 2025-09-02T10:08:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whistling alert shrew",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:08:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling alert shrew
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/EVA-x-EVA-105b-GGUF
|
mradermacher
| 2025-09-02T10:07:59Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bruhzair/EVA-x-EVA-105b",
"base_model:quantized:bruhzair/EVA-x-EVA-105b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-02T08:20:48Z |
---
base_model: bruhzair/EVA-x-EVA-105b
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/bruhzair/EVA-x-EVA-105b
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#EVA-x-EVA-105b-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
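The multi-part files below are plain byte-splits; after downloading all parts, they can be rejoined locally. A minimal sketch (filenames shown for the Q4_K_M quant):

```python
# Rejoin split GGUF parts into a single file (download all parts first)
import shutil

parts = [
    "EVA-x-EVA-105b.Q4_K_M.gguf.part1of2",
    "EVA-x-EVA-105b.Q4_K_M.gguf.part2of2",
]
with open("EVA-x-EVA-105b.Q4_K_M.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)  # stream bytes without loading into RAM
```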
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/EVA-x-EVA-105b-GGUF/resolve/main/EVA-x-EVA-105b.Q2_K.gguf) | Q2_K | 38.9 | |
| [GGUF](https://huggingface.co/mradermacher/EVA-x-EVA-105b-GGUF/resolve/main/EVA-x-EVA-105b.Q3_K_S.gguf) | Q3_K_S | 45.5 | |
| [PART 1](https://huggingface.co/mradermacher/EVA-x-EVA-105b-GGUF/resolve/main/EVA-x-EVA-105b.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/EVA-x-EVA-105b-GGUF/resolve/main/EVA-x-EVA-105b.Q3_K_M.gguf.part2of2) | Q3_K_M | 50.7 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/EVA-x-EVA-105b-GGUF/resolve/main/EVA-x-EVA-105b.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/EVA-x-EVA-105b-GGUF/resolve/main/EVA-x-EVA-105b.Q3_K_L.gguf.part2of2) | Q3_K_L | 55.2 | |
| [PART 1](https://huggingface.co/mradermacher/EVA-x-EVA-105b-GGUF/resolve/main/EVA-x-EVA-105b.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/EVA-x-EVA-105b-GGUF/resolve/main/EVA-x-EVA-105b.IQ4_XS.gguf.part2of2) | IQ4_XS | 56.8 | |
| [PART 1](https://huggingface.co/mradermacher/EVA-x-EVA-105b-GGUF/resolve/main/EVA-x-EVA-105b.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/EVA-x-EVA-105b-GGUF/resolve/main/EVA-x-EVA-105b.Q4_K_S.gguf.part2of2) | Q4_K_S | 59.8 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/EVA-x-EVA-105b-GGUF/resolve/main/EVA-x-EVA-105b.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/EVA-x-EVA-105b-GGUF/resolve/main/EVA-x-EVA-105b.Q4_K_M.gguf.part2of2) | Q4_K_M | 63.1 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/EVA-x-EVA-105b-GGUF/resolve/main/EVA-x-EVA-105b.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/EVA-x-EVA-105b-GGUF/resolve/main/EVA-x-EVA-105b.Q5_K_S.gguf.part2of2) | Q5_K_S | 72.3 | |
| [PART 1](https://huggingface.co/mradermacher/EVA-x-EVA-105b-GGUF/resolve/main/EVA-x-EVA-105b.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/EVA-x-EVA-105b-GGUF/resolve/main/EVA-x-EVA-105b.Q5_K_M.gguf.part2of2) | Q5_K_M | 74.2 | |
| [PART 1](https://huggingface.co/mradermacher/EVA-x-EVA-105b-GGUF/resolve/main/EVA-x-EVA-105b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/EVA-x-EVA-105b-GGUF/resolve/main/EVA-x-EVA-105b.Q6_K.gguf.part2of2) | Q6_K | 86.1 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/EVA-x-EVA-105b-GGUF/resolve/main/EVA-x-EVA-105b.Q8_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/EVA-x-EVA-105b-GGUF/resolve/main/EVA-x-EVA-105b.Q8_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/EVA-x-EVA-105b-GGUF/resolve/main/EVA-x-EVA-105b.Q8_0.gguf.part3of3) | Q8_0 | 111.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
hariharanv04/OSS-20B-Finetuned
|
hariharanv04
| 2025-09-02T10:07:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gpt_oss",
"trl",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-02T10:07:25Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hariharanv04
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ROBOTIS/ffw_bg2_rev4_PickMultiCoffee_Env3_Task1_1_edited
|
ROBOTIS
| 2025-09-02T10:07:27Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:ROBOTIS/ffw_bg2_rev4_PickMultiCoffee_Env3_Task1_1_edited",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-02T10:07:13Z |
---
datasets: ROBOTIS/ffw_bg2_rev4_PickMultiCoffee_Env3_Task1_1_edited
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- robotics
- act
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
JaebeomShin/medgemma-4b-it-hemorrhage-2
|
JaebeomShin
| 2025-09-02T10:07:26Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-09-02T06:58:39Z |
---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-4b-it-hemorrhage-2
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for medgemma-4b-it-hemorrhage-2
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JaebeomShin/medgemma-4b-it-hemorrhage-2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.6.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Loder-S/blockassist-bc-sprightly_knobby_tiger_1756806138
|
Loder-S
| 2025-09-02T10:07:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sprightly knobby tiger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:07:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sprightly knobby tiger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Xtoun/blockassist-bc-bristly_scaly_koala_1756806666
|
Xtoun
| 2025-09-02T10:06:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bristly scaly koala",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:06:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bristly scaly koala
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
miladalsh/new-qwen-trained-journalist-on-deepseek-3epochs
|
miladalsh
| 2025-09-02T10:06:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-07-18T07:02:39Z |
---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: transformers
model_name: new-qwen-trained-journalist-on-deepseek-3epochs
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for new-qwen-trained-journalist-on-deepseek-3epochs
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="miladalsh/new-qwen-trained-journalist-on-deepseek-3epochs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/milad-it/training-llama-on-conversations/runs/9kdyf2h5)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
malammal/Qwen3-Reranker-8B-Q8_0-GGUF
|
malammal
| 2025-09-02T10:04:05Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-ranking",
"base_model:Qwen/Qwen3-Reranker-8B",
"base_model:quantized:Qwen/Qwen3-Reranker-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-ranking
| 2025-09-02T10:03:29Z |
---
license: apache-2.0
base_model: Qwen/Qwen3-Reranker-8B
library_name: transformers
pipeline_tag: text-ranking
tags:
- llama-cpp
- gguf-my-repo
---
# malammal/Qwen3-Reranker-8B-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-Reranker-8B`](https://huggingface.co/Qwen/Qwen3-Reranker-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-Reranker-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo malammal/Qwen3-Reranker-8B-Q8_0-GGUF --hf-file qwen3-reranker-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo malammal/Qwen3-Reranker-8B-Q8_0-GGUF --hf-file qwen3-reranker-8b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo malammal/Qwen3-Reranker-8B-Q8_0-GGUF --hf-file qwen3-reranker-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo malammal/Qwen3-Reranker-8B-Q8_0-GGUF --hf-file qwen3-reranker-8b-q8_0.gguf -c 2048
```
|
GUIAgent/MagicGUI_CPT
|
GUIAgent
| 2025-09-02T10:03:48Z | 0 | 0 | null |
[
"safetensors",
"qwen2_vl",
"en",
"dataset:GUIAgent/Magic-RICH",
"arxiv:2508.03700",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-09-01T06:49:26Z |
---
license: apache-2.0
datasets:
- GUIAgent/Magic-RICH
language:
- en
base_model:
- Qwen/Qwen2-VL-7B-Instruct
---
## News
* [2025-07-20] 📄📄📄 We have released the **technical report** of MagicGUI! Check it out [here](https://arxiv.org/abs/2508.03700).
* [2025-07-20] 🚀🚀🚀 We have open-sourced **MagicGUI**, an on-device GUI agent capable of operating Chinese & English apps and equipped with RFT-enhanced reasoning abilities.
## Overview
MagicGUI is an open-source GUI agent model developed by Honor, built on Qwen2-VL with 7 billion parameters. It demonstrates outstanding capabilities in visual grounding, screen question answering, and action sequence planning and execution. MagicGUI enables multimodal perception, understanding, and automated execution of user tasks on mobile devices.
- **Data Collection Framework**: Propose a scalable and modular framework for GUI data collection that efficiently gathers high-quality data on mobile devices.
- **Powerful Perception and Grounding Capabilities**: Enhance the perception and grounding abilities on mobile device screens by integrating large-scale knowledge through tasks such as element referring, element grounding, and screen captioning.
- **Unified Action Space**: Develop a comprehensive and unified action space for various mobile platforms, encompassing fundamental operations like Tap, Text Input, and Scroll, while also supporting more complex actions such as Wait, Drag, and Takeover.
- **Planning-Oriented Reasoning**: Implement a planning-oriented reasoning mechanism to improve the stability of task execution and enhance the accuracy of action decisions in dynamic environments.
- **Two-Stage Training Paradigm**: Strengthen core perception, localization, and navigation capabilities through Continued Pre-training (CPT), while enhancing model robustness and generalization via Reinforcement Fine-tuning (RFT).
## Framework
The overall training framework of our MagicGUI contains two stages:
**Stage I**: Continued Pre-training (CPT), which trains a foundational model on a large and diverse dataset, followed by an annealing phase on a balanced, high-quality dataset.
**Stage II**: Reinforcement Fine-tuning (RFT), aimed at further enhancing the model's robustness and generalization capabilities.
## Quick Start
### Install dependencies
```bash
git clone https://github.com/MagicAgent-GUI
cd MagicGUI
conda create -n gui_agent python=3.11
conda activate gui_agent
pip install -r requirements.txt
```
### Download the model
Download [MagicGUI-RFT](https://huggingface.co/GUIAgent/MagicGUI_RFT) and [MagicGUI-CPT](https://huggingface.co/GUIAgent/MagicGUI_CPT).
#### Huggingface Inference
```python
import torch
from utils.model import Qwen2VLChat
# 1. Load the model and tokenizer
model_path = "./models/RFT" # model path
model = Qwen2VLChat.from_pretrained(model_path, min_pixels=4*28*28, max_pixels=768*28*28)
model = model.to("cuda:0")
# 2. Build the input
instruction = """你是一个训练有素的手机智能体,能够帮助用户进行单步导航任务。已知当前智能手机的截图<image>,和用户指令"查看会员信息"请输出正确的函数调用以实现用户指令。除了函数调用之外,你不能输出任何其他内容。你可以调用以下函数来控制智能手机:- UI基础操作:1. tap(x: float,y: float) 该函数用于在智能手机屏幕上点击特定点。坐标 x 和 y 表示待点击控件的中心位置。2. scroll(x: float,y: float,direction: str) 该函数用于从起始坐标 (x,y) 开始在智能手机屏幕上滑动操作,方向为手指滑动的方向。坐标 x 和 y 表示屏幕上待滑动控件的中心位置。方向可以是 "up"、"down"、"left" 或 "right"。3. text(x: float,y: float,text_input: str) 该函数用于在智能手机屏幕上输入指定的text。坐标 x 和 y 表示待点击控件的中心位置。- 手机按键操作:4. navigate_back() 该函数用于返回智能手机的上一个屏幕。5. navigate_home() 该函数用于返回手机的home screen或关闭当前应用。- 其他操作:6. long_press(x: float,y: float) 该函数用于在智能手机屏幕上的特定点执行长按操作。坐标 x 和 y 表示待点击控件的中心位置。7. wait() 该函数表示在当前页面等候。8. enter() 该函数表示按下enter键。9. take_over(text_input: str) 该函数用于提示用户接管智能手机,其中 text_input 是提示用户接管手机的原因。如果原因不确定,请填写“请您接管当前界面”。10. drag(x1: float,y1: float,x2: float,y2: float) 该函数执行一个对起始和终点敏感的拖动操作,表示手指从点1拖到点2。常见的场景包括滑块拖动、滚动选择器拖动和图片裁剪。11. screen_shot() 该函数用于截图。12. long_screen_shot() 该函数执行长截图。13. call_api(api_name: str,params: str) 调用指定的API并传入给定的参数。api_name是API的名称。params包含API所需的输入参数。例如,call_api(Amazon, open)意味着打开亚马逊APP。如果你发现当前指令无法在当前页面上执行,你需要输出no_answer。如果你发现当前指令已完成,你需要输出action_completed。"""
image_path = "./assets/test_action.png"
# 3. Build the message format
messages = [{"type": "image", "value": f"{image_path}"},
            {"type": "text", "value": f"{instruction}"}]
# 4. Inference
response = model.generate(
message = messages,
)
print(response)
```
Expected output:
```JSON
{"tap(700,964)"}
```
### Action Space
At each step, the agent outputs a single JSON object that contains:
- One (and only one) primitive action, chosen from the list below;
- Optional modifiers (`duration`, `thought`) and/or a task-level flag (`STATUS`).
Note that all keywords are **case-sensitive**, and we use **compact JSON** (i.e., no extra whitespace), which affects the tokenizer’s behavior.
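For instance, the compact form can be produced in Python with explicit separators (a minimal sketch; the `"action"` key name and the `"continue"` value are illustrative, while `duration` and `STATUS` are the modifier and flag described above):
```python
import json

# Compact JSON: no whitespace after ',' or ':', matching the
# tokenizer-sensitive format required by the agent.
step = {"action": "tap(700,964)", "duration": 200, "STATUS": "continue"}
print(json.dumps(step, separators=(",", ":"), ensure_ascii=False))
# -> {"action":"tap(700,964)","duration":200,"STATUS":"continue"}
```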
<table>
<thead>
<tr>
<th>Action</th>
<th>Description</th>
<th>Conditions for R<sub>acc</sub> = +2</th>
<th>Example</th>
</tr>
</thead>
<tbody>
<tr>
<td><b>Tap</b></td>
<td>Click at coordinate (x, y)</td>
<td>dist([x, y], [x<sub>c</sub>, y<sub>c</sub>]) ≤ 14%</td>
<td><code>tap(x,y)</code></td>
</tr>
<tr>
<td><b>Scroll</b></td>
<td>Scroll at coordinate (x, y) with<br>direction up / down / left / right</td>
<td>dist([x, y], [x<sub>c</sub>, y<sub>c</sub>]) ≤ 14%<br>and direction = gt[direction]</td>
<td><code>scroll(x,y,direction)</code></td>
</tr>
<tr>
<td><b>Text Input</b></td>
<td>Type <i>text</i> at coordinate (x, y)</td>
<td>dist([x, y], [x<sub>c</sub>, y<sub>c</sub>]) ≤ 14%<br>and F1(text, gt[text]) > 0.5</td>
<td><code>text(x,y,text_input)</code></td>
</tr>
<tr>
<td><b>Navigation Back</b></td>
<td>Adb command to go back to the previous page</td>
<td>–</td>
<td><code>navigate_back()</code></td>
</tr>
<tr>
<td><b>Navigation Home</b></td>
<td>Adb command to go to the home screen of the mobile</td>
<td>–</td>
<td><code>navigate_home()</code></td>
</tr>
<tr>
<td><b>Long Press</b></td>
<td>Long press at coordinate (x, y)</td>
<td>dist([x, y], [x<sub>c</sub>, y<sub>c</sub>]) ≤ 14%</td>
<td><code>long_press(x,y)</code></td>
</tr>
<tr>
<td><b>Finish</b></td>
<td>Indicate that navigation task has been completed</td>
<td>–</td>
<td><code>finish()</code></td>
</tr>
<tr>
<td><b>Wait</b></td>
<td>Wait for several seconds</td>
<td>–</td>
<td><code>wait()</code></td>
</tr>
<tr>
<td><b>Enter</b></td>
<td>Adb command to press enter</td>
<td>–</td>
<td><code>enter()</code></td>
</tr>
<tr>
<td><b>Takeover</b></td>
<td>Request user takeover</td>
<td>–</td>
<td><code>take_over(message)</code></td>
</tr>
<tr>
<td><b>Drag</b></td>
<td>Drag from coordinate (x₁, y₁) to (x₂, y₂)</td>
<td>
dist([x₁, y₁], [x<sub>1c</sub>, y<sub>1c</sub>]) ≤ 7.5%<br>
and dist([x₂, y₂], [x<sub>2c</sub>, y<sub>2c</sub>]) ≤ 7.5%
</td>
<td><code>drag(x1,y1,x2,y2)</code></td>
</tr>
<tr>
<td><b>Call API</b></td>
<td>Adb command to <i>open</i> or <i>kill</i> app</td>
<td>app = gt[app]<br>and open/kill = gt[operation]</td>
<td><code>call_api(api_name,operation)</code></td>
</tr>
<tr>
<td><b>Screenshot</b></td>
<td>Adb command to take a screenshot</td>
<td>–</td>
<td><code>screen_shot()</code></td>
</tr>
<tr>
<td><b>Long Screenshot</b></td>
<td>Adb command to take a long screenshot</td>
<td>–</td>
<td><code>long_screen_shot()</code></td>
</tr>
</tbody>
</table>
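The distance conditions in the table compare predicted and ground-truth points as a fraction of the screen; a minimal sketch of such a check (assuming coordinates normalized to [0, 1] — outputs on a 0–1000 scale, as in the example above, would be divided by 1000 first):
```python
import math

def within_tolerance(pred, gt, tol=0.14):
    # pred and gt are (x, y) points in normalized [0, 1] screen coordinates.
    return math.dist(pred, gt) <= tol

print(within_tolerance((0.70, 0.96), (0.72, 0.95)))  # True: offset ~2.2% < 14%
```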
## Evaluation
### 1.Data preparation
Please download the four compressed files from the [Magic-RICH dataset](https://huggingface.co/datasets/GUIAgent/Magic-RICH) and extract them into the `./datasets/` directory.
- `assets/`
- `datasets/`
- `Routine`
- `Instruction`
- `Complex`
- `Handing_Exception`
- `utils/`
For the preparation of other open-source datasets, please refer to [Other datasets preparation](datasets/eval_data_process/readme.md).
### 2. Parameters
We use run_eval.py for evaluation.
- `--data`: Name of an eval dataset
- `--model`: Path to the model
- `--work-dir (str, default to '.')`: Directory to save evaluation results
- `--mode (str, default: 'all', choices: ['all', 'infer'])`: If set to "all", the script performs both inference and evaluation; if set to "infer", it performs inference only.
- `--eval_model_path (str, default: 'None')`: Path to the eval model (required if mode is 'all' and data is 'ScreenQA-short')
### 3. Run
```bash
# Referring Benchmark
python run_eval.py --data ScreenQA-short --model MagicGUI_Path --mode all --eval_model_path Eval_Model_Path
python run_eval.py --data ScreenSpot_v2_mobile --model MagicGUI_Path --mode all
python run_eval.py --data Os-Atlas-mobile --model MagicGUI_Path --mode all
# Magic-RICH dataset
python run_eval.py --data Routine --model MagicGUI_Path --mode all
python run_eval.py --data Complex --model MagicGUI_Path --mode all
python run_eval.py --data Instruction --model MagicGUI_Path --mode all
python run_eval.py --data Handling_Exception --model MagicGUI_Path --mode all
# Open-source AndroidControl and GUI-Odyssey
python run_eval.py --data AC-Low --model MagicGUI_Path --mode all
python run_eval.py --data AC-High --model MagicGUI_Path --mode all
python run_eval.py --data GUI-Odyssey --model MagicGUI_Path --mode all
```
## Performance Evaluation
### Performance comparison on the Referring Benchmark
<table>
<thead>
<tr>
<th rowspan="1">Agent Models</th>
<th colspan="1">ScreenQA-short</th>
<th colspan="1">ScreenSpot v2 mobile</th>
<th colspan="1">Os-Atlas-mobile</th>
</tr>
</thead>
<tbody>
<!-- Closed-source Models -->
<tr><td colspan="4"><em>Closed-source Models</em></td></tr>
<tr>
<td>GPT-4o (Hurst et al., 2024)</td>
<td>90.3</td><td>10.6</td><td>4.6</td>
</tr>
<tr>
<td>Gemini 2.0 (Pichai et al., 2024)</td>
<td>90.4</td><td>10.6</td><td>5.8</td>
</tr>
<!-- Open-source Models -->
<tr><td colspan="4"><em>Open-source Models</em></td></tr>
<tr>
<td>InternVL-2-8B (Chen et al., 2024)</td>
<td>88.4</td><td>4.2</td><td>2.4</td>
</tr>
<tr>
<td>Qwen2-VL-7B (Wang et al., 2024)</td>
<td>92.6</td><td>70.7</td><td>27.2</td>
</tr>
<tr>
<td>Qwen2.5-VL-7B (Bai et al., 2025)</td>
<td>92.1</td><td>56.1</td><td>26.6</td>
</tr>
<tr>
<td>UI-TARS-7B (Qin et al., 2025)</td>
<td><b>95.4</b></td><td>88.6</td><td>82.5</td>
</tr>
<tr>
<td>UI-TARS-1.5-7B (Seed, 2025)</td>
<td>93.0</td><td>85.8</td><td>79.3</td>
</tr>
<!-- MagicGUI -->
<tr style="background-color:#e8eafc;">
<td>MagicGUI-CPT</td>
<td>94.6</td><td><b>90.2</b></td><td><b>95.2</b></td>
</tr>
</tbody>
</table>
### Performance comparison on the Magic-RICH dataset
<table>
<thead>
<tr>
<th rowspan="2">Agent Models</th>
<th colspan="3">Routine</th>
<th colspan="3">Instruction</th>
<th colspan="3">Complex</th>
<th rowspan="2">Handling Exception</th>
</tr>
<tr>
<th>Type</th><th>Grd</th><th>SR</th>
<th>Type</th><th>Grd</th><th>SR</th>
<th>Type</th><th>Grd</th><th>SR</th>
</tr>
</thead>
<tbody>
<!-- Closed-source Models -->
<tr><td colspan="11"><em>Closed-source Models</em></td></tr>
<tr>
<td>GPT-4o (Hurst et al., 2024)</td>
<td>49.3</td><td>16.7</td><td>4.6</td>
<td>56.6</td><td>13.5</td><td>19.8</td>
<td>49.0</td><td>14.6</td><td>7.4</td>
<td>85.1</td>
</tr>
<tr>
<td>Gemini 2.0 (Pichai et al., 2024)</td>
<td>89.2</td><td>49.4</td><td>34.7</td>
<td>84.1</td><td>54.2</td><td>51.4</td>
<td>83.3</td><td>50.3</td><td>42.0</td>
<td>73.7</td>
</tr>
<!-- Open-source Models -->
<tr><td colspan="11"><em>Open-source Models</em></td></tr>
<tr>
<td>InternVL-2-8B (Chen et al., 2024)</td>
<td>30.1</td><td>2.8</td><td>1.3</td>
<td>37.1</td><td>4.0</td><td>15.8</td>
<td>17.1</td><td>6.0</td><td>1.3</td>
<td>70.8</td>
</tr>
<tr>
<td>Qwen2-VL-7B (Wang et al., 2024)</td>
<td>71.7</td><td>41.0</td><td>28.1</td>
<td>73.6</td><td>43.9</td><td>41.5</td>
<td>65.6</td><td>28.7</td><td>21.2</td>
<td>68.3</td>
</tr>
<tr>
<td>Qwen2.5-VL-7B (Bai et al., 2025)</td>
<td>94.3</td><td>92.6</td><td>76.3</td>
<td>89.3</td><td><u>95.7</u></td><td>83.6</td>
<td>86.6</td><td>69.6</td><td>60.0</td>
<td>67.0</td>
</tr>
<tr>
<td>UI-TARS-7B (Qin et al., 2025)</td>
<td>83.5</td><td>84.9</td><td>73.3</td>
<td>76.6</td><td>85.6</td><td>69.8</td>
<td>91.4</td><td>69.1</td><td>67.0</td>
<td>3.6</td>
</tr>
<tr>
<td>UI-TARS-1.5-7B (Seed, 2025)</td>
<td>85.6</td><td>96.2</td><td>81.5</td>
<td>78.6</td><td>92.1</td><td>72.2</td>
<td><b>94.7</b></td><td>74.3</td><td>71.1</td>
<td>1.0</td>
</tr>
<tr>
<td>MiMo-VL-7B-SFT (Xiaomi, 2025)</td>
<td>93.0</td><td>77.9</td><td>65.3</td>
<td>89.7</td><td>85.7</td><td>75.4</td>
<td>89.1</td><td>80.1</td><td>71.0</td>
<td>57.0</td>
</tr>
<tr>
<td>AgentCPM-GUI (Zhang et al., 2025)</td>
<td>84.3</td><td>92.2</td><td>75.1</td>
<td>70.4</td><td>80.7</td><td>56.0</td>
<td>72.3</td><td>54.6</td><td>39.4</td>
<td>2.4</td>
</tr>
<!-- MagicGUI -->
<tr style="background-color:#e8eafc;">
<td>MagicGUI-CPT</td>
<td><b>98.5</b></td><td><b>98.5</b></td><td><b>97.2</b></td>
<td><b>95.5</b></td><td><b>96.3</b></td><td><b>92.9</b></td>
<td>88.5</td><td><b>82.3</b></td><td><b>72.9</b></td>
<td><b>93.2</b></td>
</tr>
<tr style="background-color:#e8eafc;">
<td>MagicGUI-RFT</td>
<td><b>99.7</b></td><td>97.5</td><td><b>97.5</b></td>
<td><b>97.2</b></td><td>95.6</td><td><b>94.0</b></td>
<td>92.1</td><td>80.4</td><td><b>74.1</b></td>
<td>92.1</td>
</tr>
</tbody>
</table>
### Performance comparison on open-source AndroidControl and GUI-Odyssey datasets
<table>
<thead>
<tr>
<th rowspan="2">Agent Models</th>
<th colspan="2">AC-Low</th>
<th colspan="2">AC-High</th>
<th colspan="2">GUI-Odyssey</th>
</tr>
<tr>
<th>Type</th><th>SR</th>
<th>Type</th><th>SR</th>
<th>Type</th><th>SR</th>
</tr>
</thead>
<tbody>
<!-- Closed-source Models -->
<tr><td colspan="7"><em>Closed-source Models</em></td></tr>
<tr>
<td>GPT-4o (Hurst et al., 2024)</td>
<td>-</td><td>19.5</td>
<td>-</td><td>20.8</td>
<td>-</td><td>20.4</td>
</tr>
<tr>
<td>Gemini 2.0 (Pichai et al., 2024)</td>
<td>-</td><td>28.5</td>
<td>-</td><td>60.2</td>
<td>-</td><td>3.3</td>
</tr>
<tr>
<td>Claude 2.0 (Anthropic, 2024)</td>
<td>-</td><td>28.5</td>
<td>-</td><td>12.5</td>
<td>60.9</td><td>-</td>
</tr>
<!-- Open-source Models -->
<tr><td colspan="7"><em>Open-source Models</em></td></tr>
<tr>
<td>Qwen2-VL-7B (Wang et al., 2024)</td>
<td>55.7</td><td>36.2</td>
<td>45.8</td><td>21.2</td>
<td>58.6</td><td>13.3</td>
</tr>
<tr>
<td>Qwen2.5-VL-7B (Bai et al., 2025)</td>
<td>94.1</td><td>85.0</td>
<td>75.1</td><td>62.9</td>
<td>59.5</td><td>46.3</td>
</tr>
<tr>
<td>Aguvis-7B (Xu et al., 2024)</td>
<td>93.9</td><td>89.4</td>
<td>65.6</td><td>54.2</td>
<td>26.7</td><td>13.5</td>
</tr>
<tr>
<td>OS-Atlas-7B (Wu et al., 2024)</td>
<td>73.0</td><td>67.3</td>
<td>70.4</td><td>56.5</td>
<td>91.8*</td><td>76.8*</td>
</tr>
<tr>
<td>UI-TARS-7B (Qin et al., 2025)</td>
<td>95.2</td><td>91.8</td>
<td>81.6</td><td>74.4</td>
<td>86.1</td><td>67.9</td>
</tr>
<tr>
<td>AgentCPM-GUI (Zhang et al., 2025)</td>
<td>94.4</td><td>90.2</td>
<td>77.7</td><td>69.2</td>
<td><b>90.9</b></td><td><b>75.0</b></td>
</tr>
<!-- MagicGUI -->
<tr style="background-color:#e8eafc;">
<td>MagicGUI-CPT</td>
<td>94.5</td><td>86.7</td>
<td>84.6</td><td>73.1</td>
<td><b>90.4</b></td><td>73.5</td>
</tr>
<tr style="background-color:#e8eafc;">
<td>MagicGUI-RFT</td>
<td><b>97.2</b></td><td><b>93.5</b></td>
<td><b>84.7</b></td><td><b>76.3</b></td>
<td>89.7</td><td><b>74.3</b></td>
</tr>
</tbody>
</table>
## License
* This project is licensed under the [Apache-2.0](./LICENSE) license. The model weights are fully open for academic research, and commercial use licenses can be applied for by contacting magicgui@honor.com. This project uses the pre-trained Qwen2VL-7B-Instruct for initialization, which is also licensed under the Apache-2.0 License.
## Citation
If **MagicGUI** is useful for your research, please cite:
```bibtex
@misc{tang2025magicguifoundationalmobilegui,
title={MagicGUI: A Foundational Mobile GUI Agent with Scalable Data Pipeline and Reinforcement Fine-tuning},
author={Liujian Tang and Shaokang Dong and Yijia Huang and Minqi Xiang and Hongtao Ruan and Bin Wang and Shuo Li and Zhiheng Xi and Zhihui Cao and Hailiang Pang and Heng Kong and He Yang and Mingxu Chai and Zhilin Gao and Xingyu Liu and Yingnan Fu and Jiaming Liu and Xuanjing Huang and Yu-Gang Jiang and Tao Gui and Qi Zhang and Kang Wang and Yunke Zhang and Yuran Wang},
year={2025},
eprint={2508.03700},
archivePrefix={arXiv},
primaryClass={cs.HC},
url={https://arxiv.org/abs/2508.03700},
}
```
|
pidbu/blockassist-bc-whistling_alert_shrew_1756807275
|
pidbu
| 2025-09-02T10:02:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whistling alert shrew",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:01:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling alert shrew
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
xinnn32/blockassist-bc-meek_winged_caterpillar_1756807271
|
xinnn32
| 2025-09-02T10:02:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek winged caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:02:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
scvi-tools/test-scvi-no-anndata
|
scvi-tools
| 2025-09-02T10:02:24Z | 0 | 0 |
scvi-tools
|
[
"scvi-tools",
"biology",
"genomics",
"single-cell",
"model_cls_name:SCVI",
"scvi_version:1.3.3",
"anndata_version:0.12.2",
"modality:rna",
"annotated:False",
"license:cc-by-4.0",
"region:us"
] | null | 2024-01-22T22:57:05Z |
---
library_name: scvi-tools
license: cc-by-4.0
tags:
- biology
- genomics
- single-cell
- model_cls_name:SCVI
- scvi_version:1.3.3
- anndata_version:0.12.2
- modality:rna
- annotated:False
---
ScVI is a variational inference model for single-cell RNA-seq data that can learn an underlying
latent space, integrate technical batches and impute dropouts.
The learned low-dimensional latent representation of the data can be used for visualization and
clustering.
scVI takes as input a scRNA-seq gene expression matrix with cells and genes.
We provide an extensive [user guide](https://docs.scvi-tools.org/en/stable/user_guide/models/scvi.html).
- See our original manuscript for further details of the model:
[scVI manuscript](https://www.nature.com/articles/s41592-018-0229-2).
- See our manuscript on [scvi-hub](https://www.biorxiv.org/content/10.1101/2024.03.01.582887v2) for how
to leverage pre-trained models.
This model can be used for fine-tuning on new data using our Arches framework:
[Arches tutorial](https://docs.scvi-tools.org/en/stable/tutorials/notebooks/scrna/scarches_scvi_tools.html).
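A minimal end-to-end sketch (synthetic data sized to match the summary statistics below; parameter values mirror the model parameters listed further down):
```python
import anndata as ad
import numpy as np
import scvi

# Synthetic counts matching this model's summary stats: 400 cells x 100 genes.
rng = np.random.default_rng(0)
adata = ad.AnnData(rng.poisson(2.0, size=(400, 100)).astype(np.float32))

scvi.model.SCVI.setup_anndata(adata)                      # register the data
model = scvi.model.SCVI(adata, n_hidden=128, n_latent=10, n_layers=1)
model.train(max_epochs=10)
latent = model.get_latent_representation()                # 400 x 10 embedding
```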
# Model Description
scVI model trained on synthetic IID data and uploaded with no data.
# Metrics
We provide here key performance metrics for the uploaded model, if provided by the data uploader.
<details>
<summary><strong>Coefficient of variation</strong></summary>
The cell-wise coefficient of variation summarizes how well variation between different cells is
preserved by the generated model expression. Below a squared Pearson correlation coefficient of 0.4,
we would recommend not using generated data for downstream analysis, while the generated latent
space might still be useful for analysis.
**Cell-wise Coefficient of Variation**:
Not provided by uploader
The gene-wise coefficient of variation summarizes how well variation between different genes is
preserved by the generated model expression. This value is usually quite high.
**Gene-wise Coefficient of Variation**:
Not provided by uploader
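As an illustration of how such a metric can be computed (a sketch on synthetic arrays, not the uploader's evaluation code):
```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.poisson(5.0, size=(400, 100)).astype(float)      # observed counts
generated = real + rng.normal(0.0, 1.0, size=real.shape)    # stand-in for model output

def cellwise_cv(x):
    # Coefficient of variation per cell: std over genes / mean over genes.
    return x.std(axis=1) / x.mean(axis=1)

r = np.corrcoef(cellwise_cv(real), cellwise_cv(generated))[0, 1]
print(r ** 2)  # squared Pearson correlation; ~0.4 is the suggested floor
```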
</details>
<details>
<summary><strong>Differential expression metric</strong></summary>
The differential expression metric provides a summary of the differential expression analysis
between cell types or input clusters. We provide here the F1-score, Pearson Correlation
Coefficient of Log-Foldchanges, Spearman Correlation Coefficient, and Area Under the Precision
Recall Curve (AUPRC) for the differential expression analysis using Wilcoxon Rank Sum test for each
cell-type.
**Differential expression**:
Not provided by uploader
</details>
# Model Properties
We provide here key parameters used to setup and train the model.
<details>
<summary><strong>Model Parameters</strong></summary>
These provide the settings to setup the original model:
```json
{
"n_hidden": 128,
"n_latent": 10,
"n_layers": 1,
"dropout_rate": 0.1,
"dispersion": "gene",
"gene_likelihood": "zinb",
"use_observed_lib_size": true,
"latent_distribution": "normal"
}
```
</details>
<details>
<summary><strong>Setup Data Arguments</strong></summary>
Arguments passed to setup_anndata of the original model:
```json
{
"layer": null,
"batch_key": null,
"labels_key": null,
"size_factor_key": null,
"categorical_covariate_keys": null,
"continuous_covariate_keys": null
}
```
</details>
<details>
<summary><strong>Data Registry</strong></summary>
Registry elements for AnnData manager:
| Registry Key | scvi-tools Location |
|--------------------------|--------------------------------------|
| X | adata.X |
| batch | adata.obs['_scvi_batch'] |
| labels | adata.obs['_scvi_labels'] |
- **Data is Minified**: To be added...
</details>
<details>
<summary><strong>Summary Statistics</strong></summary>
| Summary Stat Key | Value |
|--------------------------|-------|
| n_batch | 1 |
| n_cells | 400 |
| n_extra_categorical_covs | 0 |
| n_extra_continuous_covs | 0 |
| n_labels | 1 |
| n_vars | 100 |
</details>
<details>
<summary><strong>Training</strong></summary>
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the
scvi-tools documentation for details. -->
**Training data url**: Not provided by uploader
If provided by the original uploader, for those interested in understanding or replicating the
training process, the code is available at the link below.
**Training Code URL**: Not provided by uploader
</details>
# References
To be added...
|
WangChongan/rl_course_vizdoom_health_gathering_supreme
|
WangChongan
| 2025-09-02T10:02:09Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-02T09:52:40Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 3.95 +/- 0.57
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r WangChongan/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
kittygirlhere/blockassist-bc-twitchy_beaked_coral_1756807279
|
kittygirlhere
| 2025-09-02T10:01:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"twitchy beaked coral",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:01:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- twitchy beaked coral
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
happyensworld/blockassist-bc-sleek_scavenging_ram_1756807177
|
happyensworld
| 2025-09-02T10:00:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sleek scavenging ram",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T10:00:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sleek scavenging ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756807160
|
akirafudo
| 2025-09-02T09:59:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T09:59:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756807070
|
Ferdi3425
| 2025-09-02T09:59:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T09:58:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ymatari/act_so101_cleanup_table_5
|
ymatari
| 2025-09-02T09:57:59Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:ymatari/cleanup-table-2",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-02T09:57:31Z |
---
datasets: ymatari/cleanup-table-2
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- lerobot
- robotics
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756807052
|
omerbektass
| 2025-09-02T09:57:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T09:57:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yaelahnal/blockassist-bc-mute_clawed_crab_1756806895
|
yaelahnal
| 2025-09-02T09:57:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T09:55:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
toupyoui/blockassist-bc-rangy_mighty_hare_1756806961
|
toupyoui
| 2025-09-02T09:56:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rangy mighty hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T09:56:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rangy mighty hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbkts/blockassist-bc-keen_fast_giraffe_1756806934
|
omerbkts
| 2025-09-02T09:56:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T09:55:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
GroomerG/blockassist-bc-vicious_pawing_badger_1756805523
|
GroomerG
| 2025-09-02T09:55:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vicious pawing badger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T09:55:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vicious pawing badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756806815
|
bah63843
| 2025-09-02T09:54:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T09:54:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pidbu/blockassist-bc-whistling_alert_shrew_1756806756
|
pidbu
| 2025-09-02T09:54:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whistling alert shrew",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T09:53:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling alert shrew
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756806797
|
liukevin666
| 2025-09-02T09:54:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T09:54:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
klmdr22/blockassist-bc-wild_loud_newt_1756806801
|
klmdr22
| 2025-09-02T09:54:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild loud newt",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T09:54:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild loud newt
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756806814
|
akirafudo
| 2025-09-02T09:53:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T09:53:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
erik-svensson-cm/whisper-large-v3-turbo-ct2
|
erik-svensson-cm
| 2025-09-02T09:53:50Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-02T09:50:52Z |
---
license: apache-2.0
---
|
chidiokoene/mistral-7b-med-rationales-finetuned
|
chidiokoene
| 2025-09-02T09:53:43Z | 21 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"medical",
"text-generation-inference",
"instruction-tuning",
"rationale-generation",
"conversational",
"en",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T17:09:00Z |
---
library_name: transformers
tags:
- medical
- text-generation-inference
- instruction-tuning
- rationale-generation
license: mit
language:
- en
base_model:
- mistralai/Mistral-7B-Instruct-v0.3
pipeline_tag: text-generation
metrics:
- perplexity
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
A fine-tuned Mistral-7B-Instruct-v0.3 model specifically trained for generating medical rationales and explanations.
The model was trained using QLoRA on a custom dataset of medical rationales.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This model is a fine-tuned version of Mistral-7B-Instruct-v0.3, specifically optimized for generating detailed medical rationales and explanations.
It was trained using Low-Rank Adaptation (LoRA) on a dataset of medical reasoning tasks, resulting in an 80%+ improvement in performance metrics compared to the base model.
- **Developed by:** Chidiebere Okoene
- **Model type:** Causal Language Model (Decoder-only)
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** mistralai/Mistral-7B-Instruct-v0.3
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
This model is intended for generating medical rationales, explanations, and reasoning for healthcare-related queries. It can be used by:
- Medical educators creating teaching materials
- Healthcare professionals seeking second opinions or explanations
- Medical students learning diagnostic reasoning
- Researchers exploring medical AI applications
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
This model can be integrated into:
- METEORA Reranker for Medical RAG systems
- Clinical decision support systems
- Healthcare chatbots for patient education
- Medical documentation assistants
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
This model should not be used for:
- Direct patient diagnosis without human supervision
- Making treatment decisions without clinical validation
- Replacing licensed medical professionals
- Generating medical advice for serious conditions
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
- **Training Data Bias:** The model was trained on a specific dataset of medical rationales and may not cover all medical specialties or rare conditions
- **Accuracy Limitations:** While performance improved significantly, the model may still generate incorrect or incomplete information
- **Temporal Limitations:** Medical knowledge evolves rapidly, and the model may not reflect the latest guidelines or research
- **Demographic Biases:** The training data may not adequately represent all patient populations
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
- Always verify model outputs with current medical literature and guidelines
- Use this model as an educational tool rather than a diagnostic tool
- Implement human oversight for any clinical applications
- Regularly update the model with new medical knowledge
- Disclose the AI-assisted nature of generated content to end users
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "chidiokoene/mistral-7b-med-rationales-finetuned"
# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
model_name,
device_map="auto",
torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Generate rationales
def generate_rationale(prompt):
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
inputs = {k: v.to(model.device) for k, v in inputs.items()}
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=256,
temperature=0.7,
do_sample=True,
pad_token_id=tokenizer.eos_token_id
)
return tokenizer.decode(outputs[0], skip_special_tokens=True)
# Example usage
prompt = "Explain the mechanism of action of metformin in type 2 diabetes."
rationale = generate_rationale(prompt)
print(rationale)
```
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The model was fine-tuned on a proprietary dataset of medical rationales containing approximately 11,362 training examples and 3,246 validation examples.
The data consisted of medical questions paired with detailed explanatory rationales.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
Text was tokenized using the Mistral tokenizer
Sequences were truncated or padded to 1024 tokens
Special tokens were added for instruction following
#### Training Hyperparameters
- **Training regime:** bf16 mixed precision with QLoRA
- Learning rate: 2e-4
- Batch size: 2 (with gradient accumulation steps: 4)
- Epochs: 3
- LoRA rank: 16
- LoRA alpha: 32
- LoRA dropout: 0.05
<!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
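For reference, these LoRA settings map onto a PEFT configuration roughly as follows (a sketch; `target_modules` is an assumption, since the card does not list which projections were adapted):
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
)
```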
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
- Training time: ~13 hours on a single GPU with 15GB VRAM
- Model size: ~15GB (4-bit quantized)
- Inference speed: ~2.9 samples/second
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
The model was evaluated on a held-out validation set of 1,624 medical rationale examples.
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
- Perplexity (lower is better)
- Average cross-entropy loss (lower is better)
- Inference speed (samples per second)
### Results
| Metric | Baseline Model | Fine-tuned Model | Improvement |
|---|---|---|---|
| Perplexity | 7.78 | 1.51 | 80.6% |
| Average Loss | 2.05 | 0.41 | 79.9% |
| Inference Speed | 5.17 samples/sec | 2.91 samples/sec | -43.7% |
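As a quick sanity check, perplexity is the exponential of the average cross-entropy loss, which matches the reported numbers:
```python
import math

# Perplexity = exp(average cross-entropy loss)
print(math.exp(2.0521))  # ≈ 7.78 (baseline)
print(math.exp(0.4121))  # ≈ 1.51 (fine-tuned)
```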
The fine-tuned model shows exceptional improvement in understanding and generating medical rationales,
with over 80% improvement in both perplexity and loss metrics. The reduction in inference speed is expected due to the added LoRA parameters.
```json
{
"baseline_model": {
"perplexity": 7.784124134664591,
"average_loss": 2.0520862921697764,
"loss_std": 0.2737355939406239,
"evaluation_time_seconds": 313.9927325248718,
"samples_per_second": 5.1720942295101064
},
"fine_tuned_model": {
"perplexity": 1.5100232168650496,
"average_loss": 0.4121250261159502,
"loss_std": 0.147794492117157,
"evaluation_time_seconds": 557.3957495689392,
"samples_per_second": 2.9135493072129037
},
"comparison": {
"perplexity_improvement_percent": 80.60124439510734,
"loss_improvement_percent": 79.9167789537647,
"relative_speed": 0.5633210026586989
},
"evaluation_parameters": {
"max_length": 1024,
"batch_size": 1,
"num_samples_evaluated": 1624
    }
}
```
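Since the speed reduction stems from the adapter indirection, merging the LoRA weights into the base model restores dense inference speed. A generic PEFT sketch, not specific to this repository (the adapter path is hypothetical):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # hypothetical path
model = model.merge_and_unload()  # folds LoRA deltas into the base weights
```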
#### Summary
The fine-tuning process was highly successful, resulting in a model that significantly outperforms the base Mistral-7B model on medical rationale generation tasks while maintaining reasonable inference speed.
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- Hardware Type: NVIDIA GPU with 15GB VRAM
- Hours used: ~13 hours for training
- Carbon Emitted: Estimated based on Machine Learning Impact calculator
## Technical Specifications [optional]
### Model Architecture and Objective
Architecture: Transformer-based decoder-only model
Objective: Causal language modeling with instruction tuning
Parameters: 7 billion
Context length: 4096 tokens
### Compute Infrastructure
[More Information Needed]
#### Hardware
Single GPU training
#### Software
PyTorch, Transformers, PEFT, Accelerate
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
milliarderdol/blockassist-bc-roaring_rough_scorpion_1756804666
|
milliarderdol
| 2025-09-02T09:53:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring rough scorpion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T09:53:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring rough scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Nienke5821/poca-SoccerTwos
|
Nienke5821
| 2025-09-02T09:52:14Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2025-09-02T09:51:45Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Nienke5821/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1756805204
|
helmutsukocok
| 2025-09-02T09:51:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T09:51:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ThankHugFace/distilbert-rotten-tomatoes
|
ThankHugFace
| 2025-09-02T09:51:03Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-02T09:39:19Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-rotten-tomatoes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-rotten-tomatoes
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
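For reference, these settings map onto `TrainingArguments` roughly as follows (a sketch; `output_dir` is illustrative):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="distilbert-rotten-tomatoes",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch_fused",
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```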
### Training results
### Framework versions
- Transformers 4.55.4
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
ultramit19/blockassist-bc-whiskered_thick_porpoise_1756806615
|
ultramit19
| 2025-09-02T09:51:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whiskered thick porpoise",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T09:50:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whiskered thick porpoise
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DimaSK1/Qwen2-0.5B-bnb-4bit-sft-1
|
DimaSK1
| 2025-09-02T09:50:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"base_model:unsloth/Qwen2-0.5B-bnb-4bit",
"base_model:finetune:unsloth/Qwen2-0.5B-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-09-02T09:50:31Z |
---
base_model: unsloth/Qwen2-0.5B-bnb-4bit
library_name: transformers
model_name: Qwen2-0.5B-bnb-4bit-sft-1
tags:
- generated_from_trainer
- sft
- trl
- unsloth
licence: license
---
# Model Card for Qwen2-0.5B-bnb-4bit-sft-1
This model is a fine-tuned version of [unsloth/Qwen2-0.5B-bnb-4bit](https://huggingface.co/unsloth/Qwen2-0.5B-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="DimaSK1/Qwen2-0.5B-bnb-4bit-sft-1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.1
- Transformers: 4.56.0
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
giovannidemuri/llama8b-er-v542-seed2-hx_lora
|
giovannidemuri
| 2025-09-02T09:50:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T08:08:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
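Pending details from the authors, a minimal sketch using the standard `transformers` pipeline API (the prompt and generation settings are illustrative assumptions):
```python
from transformers import pipeline

# Assumes the checkpoint loads as a standard causal LM; settings are illustrative.
generator = pipeline(
    "text-generation",
    model="giovannidemuri/llama8b-er-v542-seed2-hx_lora",
)
print(generator("Hello, world!", max_new_tokens=64)[0]["generated_text"])
```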
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bah63843/blockassist-bc-plump_fast_antelope_1756806534
|
bah63843
| 2025-09-02T09:49:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T09:49:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ahmedelmidany/llama32_3b_projects_lora
|
ahmedelmidany
| 2025-09-02T09:48:50Z | 0 | 1 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:meta-llama/Llama-3.2-3B-Instruct",
"lora",
"transformers",
"text-generation",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"region:us"
] |
text-generation
| 2025-09-02T09:48:45Z |
---
base_model: meta-llama/Llama-3.2-3B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:meta-llama/Llama-3.2-3B-Instruct
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
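Pending official instructions, a minimal sketch for loading this LoRA adapter with PEFT (device placement and generation settings are illustrative assumptions):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model declared in this card and applies the adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained("ahmedelmidany/llama32_3b_projects_lora")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)  # illustrative settings
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```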
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
ChenWu98/numina_qwen_2.5_sft_combine_v2_source_anneal_split_1
|
ChenWu98
| 2025-09-02T09:48:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:ChenWu98/numina_qwen_2.5_sft_combine_v2_identical_split_0",
"base_model:finetune:ChenWu98/numina_qwen_2.5_sft_combine_v2_identical_split_0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-02T09:48:01Z |
---
base_model: ChenWu98/numina_qwen_2.5_sft_combine_v2_identical_split_0
library_name: transformers
model_name: numina_qwen_2.5_sft_combine_v2_source_anneal_split_1
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for numina_qwen_2.5_sft_combine_v2_source_anneal_split_1
This model is a fine-tuned version of [ChenWu98/numina_qwen_2.5_sft_combine_v2_identical_split_0](https://huggingface.co/ChenWu98/numina_qwen_2.5_sft_combine_v2_identical_split_0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_sft_combine_v2_source_anneal_split_1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/gbo1glg4)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
arturkakraft/blockassist-bc-arctic_purring_camel_1756805235
|
arturkakraft
| 2025-09-02T09:47:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"arctic purring camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T09:47:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- arctic purring camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
wotihe/Affine-5CX6NEyrYJvgeTrCvEJoa77YPrSTLLh9VgfQ5tQUNcA9t4oh
|
wotihe
| 2025-09-02T09:46:34Z | 0 | 0 | null |
[
"safetensors",
"gpt_oss",
"8-bit",
"mxfp4",
"region:us"
] | null | 2025-09-02T09:44:46Z |
# Affine
Mine open reasoning.
[Affine Discord](https://discord.com/invite/3T9X4Yn23e)
## Introduction
Affine is an incentivized RL environment that pays miners who make incremental improvements on a set of tasks (for instance, program abduction or coding). The mechanism is sybil-proof (you can't cheat by deploying multiple miners), decoy-proof (you can't cheat by packing models into certain environments), copy-proof (you can't cheat by stealing models), and overfitting-proof (you can't cheat by overfitting to a single env).
How does Affine work? Affine validators incentivize miners to submit models to Subnet 64 on Bittensor (a.k.a. Chutes), where they are inference-load-balanced and publicly available. These models are evaluated on a set of RL environments, with validators looking for the model that dominates the Pareto frontier -- namely, the model that outcompetes all other models on all envs (see `af validator`). The network is winners-take-all: miners are forced to copy, download, and improve the Pareto-frontier model.
Why Affine? Directed incentives for RL have never been achieved. The ability to direct intelligence and aggregate the work effort of a large, permissionless group of individuals on RL tasks will unlock fast advancement in intelligence. We intend to commoditize reasoning (intelligence's highest form) and break the intelligence sound barrier.
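As a hypothetical illustration of the winners-take-all rule (not Affine's actual validator code), Pareto dominance over per-env scores can be checked like this:
```python
# Illustrative only: model A dominates model B if A scores at least as well
# on every env and strictly better on at least one.
def dominates(a: dict, b: dict) -> bool:
    return all(a[e] >= b[e] for e in a) and any(a[e] > b[e] for e in a)

scores = {
    "miner-1": {"SAT": 0.71, "ABDUCTION": 0.64, "DEDUCTION": 0.58},
    "miner-2": {"SAT": 0.69, "ABDUCTION": 0.61, "DEDUCTION": 0.55},
}
frontier = [m for m, s in scores.items()
            if not any(dominates(o, s) for n, o in scores.items() if n != m)]
print(frontier)  # models currently on the Pareto frontier
```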
## Installation
```bash
# Install Astral's uv
curl -LsSf https://astral.sh/uv/install.sh | sh
# Clone and install Affine
git clone https://github.com/AffineFoundation/affine.git
cd affine
uv venv && source .venv/bin/activate && uv pip install -e .
# Verify installation
af
```
## Validating
Set env vars, including your Chutes API key.
```bash
# Copy .env and fill out validator items
cp .env.example .env
```
(Recommended) Run the validator with Docker and Watchtower auto-update.
```bash
# Run the validator with watchtower.
docker-compose down && docker-compose pull && docker-compose up -d && docker-compose logs -f
```
Run the validator using the local override (builds the local image) plus the base compose file:
```bash
docker compose -f docker-compose.yml -f docker-compose.local.yml down --remove-orphans
docker compose -f docker-compose.yml -f docker-compose.local.yml up -d --build --remove-orphans
docker compose -f docker-compose.yml -f docker-compose.local.yml logs -f
```
Run the validator locally
```bash
# Start the validator with debug.
af -vv validate
```
## Mining
IMPORTANT: you need a ***developer-enabled account*** on Chutes to mine. Normal API keys cannot currently deploy chutes.
1. Set env vars.
```bash
# Copy .env and fill out validator items
cp .env.example .env
```
2. Miners need a Chutes developer account (`chutes.ai`)
```bash
chutes register
```
3. Register your miner to Affine (S120).
```bash
btcli subnet register --wallet.name <your cold> --wallet.hotkey <your hot>
```
4. Pull a model off the network.
```bash
af -vvv pull <uid to pull> --model_path <e.g. ./my_model>
```
5. Improve the model
```bash
... magic RL stuff ...
```
6. Push the model to your miner.
```bash
af -vvv push --coldkey <your cold> --hotkey <your hot> --model_path <e.g. ./my_model>
```
## SDK
Affine is also an SDK you can use to generate challenges and evaluate models on envs.
```python
import affine as af
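# NOTE: the SDK is async; the `await` calls below assume an async context
# (e.g. Jupyter top-level await, or wrap the calls in `asyncio.run(main())`).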
# Optionally turn on logging
af.trace(); af.debug(); af.info()
# Get all miner info, or only for UID = 5
miners = await af.get_miners()
miner = await af.get_miners( 5 )
# Generate a SAT challenge
chal = await af.SAT.generate()
# Generate a bunch.
chals = await af.ABDUCTION().many( 10 )
chals = await af.DEDUCTION().many( 10 )
# Query the model directly.
# NOTE: A CHUTES_API_KEY .env value is required for this command.
response = await af.query( chal.prompt, model = miner.model )
# Evaluate the response
evaluation = chal.evaluate( response )
print( evaluation.score )
# Async generator of results from last 100 blocks.
async for res in af.rollouts(100):
print (res) # Result objects
```
|
Dimmotoro/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-twitchy_quiet_peacock
|
Dimmotoro
| 2025-09-02T09:46:24Z | 143 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am twitchy_quiet_peacock",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-16T06:49:03Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am twitchy_quiet_peacock
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
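Pending details from the authors, a minimal sketch using the standard `transformers` chat pipeline (the prompt and generation settings are illustrative assumptions):
```python
from transformers import pipeline

# Illustrative usage; the authors have not published official instructions.
generator = pipeline(
    "text-generation",
    model="Dimmotoro/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-twitchy_quiet_peacock",
)
messages = [{"role": "user", "content": "Hello!"}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```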
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|