| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
srwmilerwhitchurchvtak/blockassist-bc-endangered_knobby_jellyfish_1757450728
|
srwmilerwhitchurchvtak
| 2025-09-09T20:45:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"endangered knobby jellyfish",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:45:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- endangered knobby jellyfish
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
costiganreanna/blockassist-bc-marine_muscular_puma_1757450693
|
costiganreanna
| 2025-09-09T20:45:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"marine muscular puma",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:45:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- marine muscular puma
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
raniero/ares56-test-chat
|
raniero
| 2025-09-09T20:44:39Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"lora",
"bittensor",
"subnet-56",
"gradients",
"it",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2025-09-09T09:52:15Z |
---
language:
- it
license: apache-2.0
library_name: peft
tags: [lora, bittensor, subnet-56, gradients]
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
---
# ARES56 — LoRA adapter
Upload ID: test-rows-short_1757450648
upload_id: unknown_1757404904
Files included:
- `adapter_model.safetensors` — SHA256: `23b92fcb87624c25260ead0c6b56d094705872712333e2eba69e2d1253f349ba`
- `adapter_config.json` — SHA256: `2820da1b7c4d78156662af4cb019fe87c637c027435442b522144a3ff0f78d26`
- `tokenizer_config.json` — SHA256: `27c5ddd03dd5e605959d3a0f6d4dcfc238e5475bbde941e8c358f3776ac1221b`
- `special_tokens_map.json` — SHA256: `82d96d7a9e6ced037f12394b7ea6a5b02e6ca87e0d11edaa8d60d9be857ce7db`
Output generated via Axolotl (CPU / smoke test). No full checkpoint included.
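A minimal sketch for verifying the downloaded files against the SHA-256 checksums listed above (the local file path is an assumption — point it at wherever the adapter was saved):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file in chunks and return its hex SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Expected digest for adapter_model.safetensors (from the list above)
expected = "23b92fcb87624c25260ead0c6b56d094705872712333e2eba69e2d1253f349ba"
# Hypothetical local path; compare after downloading:
# assert sha256_of("adapter_model.safetensors") == expected
```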
|
KonradBRG/bert-lora-for-author-profiling
|
KonradBRG
| 2025-09-09T20:44:38Z | 60 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:google-bert/bert-base-uncased",
"lora",
"transformers",
"base_model:google-bert/bert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2025-08-28T12:58:25Z |
---
library_name: peft
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- base_model:adapter:google-bert/bert-base-uncased
- lora
- transformers
model-index:
- name: bert-lora-for-author-profiling
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-lora-for-author-profiling
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7851
- Age Acc: 0.5879
- Age Precision: 0.5488
- Age Recall: 0.5879
- Age F1: 0.5327
- Age Precision Macro: 0.4821
- Age Recall Macro: 0.2772
- Age F1 Macro: 0.2900
- Gender Acc: 0.7031
- Gender Precision: 0.7033
- Gender Recall: 0.7031
- Gender F1: 0.7031
- Gender Precision Macro: 0.7031
- Gender Recall Macro: 0.7032
- Gender F1 Macro: 0.7031
- Joint Acc: 0.4211
- Avg Acc: 0.6455
- Avg Precision: 0.6260
- Avg Recall: 0.6455
- Avg F1: 0.6179
- Avg Precision Macro: 0.5926
- Avg Recall Macro: 0.4902
- Avg F1 Macro: 0.4965
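The gap between the plain (support-weighted) and macro-averaged age scores above comes from class imbalance: macro averaging weights every class equally, so rare, poorly-predicted classes drag it down. A small illustration with hypothetical labels (not the model's actual predictions):

```python
# Toy 3-class "age group" task: one frequent class, two rare ones.
y_true = [0, 0, 0, 0, 0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 0, 0, 0, 1, 0, 1, 0, 0]

def recall_per_class(y_true, y_pred):
    """Recall (tp / support) for each class present in y_true."""
    out = {}
    for c in sorted(set(y_true)):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        support = sum(1 for t in y_true if t == c)
        out[c] = tp / support
    return out

rec = recall_per_class(y_true, y_pred)
# Micro-averaged recall equals accuracy; dominated by the frequent class.
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)   # 0.6
# Macro averaging weights the two weak rare classes equally with class 0.
macro_recall = sum(rec.values()) / len(rec)                            # ~0.44
```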
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.7145e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
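The effective batch size above follows from accumulation: 32 per device step × 2 accumulation steps = 64. For equal-sized micro-batches, averaging the per-micro-batch gradients reproduces the full-batch gradient, as this toy mean-squared-error sketch (hypothetical data, not the training setup) shows:

```python
def grad_mse(w, xs, ys):
    """Gradient of mean((w*x - y)^2) w.r.t. scalar weight w."""
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

xs = [float(i) for i in range(64)]
ys = [2.0 * x + 1.0 for x in xs]
w = 0.5

# One full batch of 64 ...
full = grad_mse(w, xs, ys)
# ... versus two accumulated micro-batches of 32, averaged.
micro1 = grad_mse(w, xs[:32], ys[:32])
micro2 = grad_mse(w, xs[32:], ys[32:])
accumulated = (micro1 + micro2) / 2
```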
### Training results
| Training Loss | Epoch | Step | Validation Loss | Age Acc | Age Precision | Age Recall | Age F1 | Age Precision Macro | Age Recall Macro | Age F1 Macro | Gender Acc | Gender Precision | Gender Recall | Gender F1 | Gender Precision Macro | Gender Recall Macro | Gender F1 Macro | Joint Acc | Avg Acc | Avg Precision | Avg Recall | Avg F1 | Avg Precision Macro | Avg Recall Macro | Avg F1 Macro |
|:-------------:|:------:|:-----:|:---------------:|:-------:|:-------------:|:----------:|:------:|:-------------------:|:----------------:|:------------:|:----------:|:----------------:|:-------------:|:---------:|:----------------------:|:-------------------:|:---------------:|:---------:|:-------:|:-------------:|:----------:|:------:|:-------------------:|:----------------:|:------------:|
| 0.8322 | 0.5155 | 5000 | 0.8194 | 0.5681 | 0.5363 | 0.5681 | 0.5116 | 0.5057 | 0.2536 | 0.2604 | 0.6872 | 0.6874 | 0.6872 | 0.6873 | 0.6873 | 0.6873 | 0.6872 | 0.3950 | 0.6276 | 0.6119 | 0.6276 | 0.5994 | 0.5965 | 0.4704 | 0.4738 |
| 0.8081 | 1.0309 | 10000 | 0.8050 | 0.5788 | 0.5417 | 0.5788 | 0.5211 | 0.4631 | 0.2644 | 0.2736 | 0.6916 | 0.6936 | 0.6916 | 0.6911 | 0.6933 | 0.6922 | 0.6913 | 0.4047 | 0.6352 | 0.6177 | 0.6352 | 0.6061 | 0.5782 | 0.4783 | 0.4825 |
| 0.7988 | 1.5464 | 15000 | 0.7940 | 0.5838 | 0.5415 | 0.5838 | 0.5291 | 0.4497 | 0.2736 | 0.2844 | 0.6990 | 0.6995 | 0.6990 | 0.6989 | 0.6993 | 0.6992 | 0.6989 | 0.4150 | 0.6414 | 0.6205 | 0.6414 | 0.6140 | 0.5745 | 0.4864 | 0.4916 |
| 0.7966 | 2.0619 | 20000 | 0.7887 | 0.5857 | 0.5425 | 0.5857 | 0.5291 | 0.4576 | 0.2732 | 0.2850 | 0.7010 | 0.7018 | 0.7010 | 0.7009 | 0.7016 | 0.7013 | 0.7009 | 0.4178 | 0.6433 | 0.6222 | 0.6433 | 0.6150 | 0.5796 | 0.4873 | 0.4930 |
| 0.7914 | 2.5773 | 25000 | 0.7851 | 0.5879 | 0.5488 | 0.5879 | 0.5327 | 0.4821 | 0.2772 | 0.2900 | 0.7031 | 0.7033 | 0.7031 | 0.7031 | 0.7031 | 0.7032 | 0.7031 | 0.4211 | 0.6455 | 0.6260 | 0.6455 | 0.6179 | 0.5926 | 0.4902 | 0.4965 |
### Framework versions
- PEFT 0.17.1
- Transformers 4.56.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.22.0
|
acidjp/blockassist-bc-pesty_extinct_prawn_1757448336
|
acidjp
| 2025-09-09T20:44:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pesty extinct prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:44:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty extinct prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hartsellbrian/blockassist-bc-pawing_wiry_bee_1757450631
|
hartsellbrian
| 2025-09-09T20:44:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pawing wiry bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:44:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pawing wiry bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Joaocarlos123/Game1
|
Joaocarlos123
| 2025-09-09T20:43:36Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-09T20:43:36Z |
---
license: apache-2.0
---
|
sandinozack/blockassist-bc-spotted_sniffing_mandrill_1757450541
|
sandinozack
| 2025-09-09T20:42:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted sniffing mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:42:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted sniffing mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sshan95/bioclinical-MediCoder-PROD
|
sshan95
| 2025-09-09T20:42:30Z | 0 | 0 | null |
[
"pytorch",
"bioclinical_medical_coder",
"region:us"
] | null | 2025-09-09T19:50:55Z |
# BioClinical Medical Coding Model
## Model Description
This is a BioClinicalModernBERT-based model for automated medical coding. The model predicts ICD-10-CM diagnosis codes and HCPCS/CPT procedure codes from clinical notes.
## Model Architecture
- **Base Model**: microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext
- **Training**: 3-phase fine-tuning approach
- Phase 1: Dense retrieval training
- Phase 2: Hard negative re-ranking
- Phase 3: Multi-label classification
- **Code Vocabulary**: 31794 modern medical codes
- **Performance**: F1-score: 0.80-0.88 on frequent codes
## Usage
```python
from inference import MedicalCodingPredictor
# Initialize predictor
predictor = MedicalCodingPredictor()
# Predict codes from clinical note
clinical_note = "Patient presents with chest pain and elevated cardiac enzymes..."
predictions = predictor.predict(clinical_note, threshold=0.5)
for pred in predictions:
print(f"Code: {pred['code']}")
print(f"Type: {pred['type']}")
print(f"Description: {pred['description']}")
print(f"Confidence: {pred['confidence']:.3f}")
```
## API Response Format
```json
{
"code": "I25.111",
"type": "ICD-10-CM",
"description": "CODE DESCRIPTION",
"confidence": 0.85,
"f1_score": 0.82
}
```
## Files Included
- `pytorch_model.bin`: Model weights
- `config.json`: Model configuration
- `code_to_idx.json`: Code to index mapping
- `idx_to_code.json`: Index to code mapping
- `code_descriptions.json`: Code descriptions
- `code_f1_scores.json`: Per-code F1 scores
- `inference.py`: Inference script
- `requirements.txt`: Dependencies
## Training Data
Trained on MIMIC-IV clinical notes with temporal matching for accurate code assignment.
## Limitations
- Generic code descriptions (update with medical terminology database)
- Performance varies by code frequency
- Requires clinical validation for production use
## Citation
If you use this model, please cite the MIMIC-IV dataset and acknowledge the multi-stage training approach.
|
bah63843/blockassist-bc-plump_fast_antelope_1757450487
|
bah63843
| 2025-09-09T20:42:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:42:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ankurkul86/tinyllama-finder-poc
|
ankurkul86
| 2025-09-09T20:41:21Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-09-09T20:37:24Z |
---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0
- lora
- transformers
pipeline_tag: text-generation
model-index:
- name: tinyllama-finder-poc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-finder-poc
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.17.1
- Transformers 4.56.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.22.0
|
mar5-a/gptoss20b-sft
|
mar5-a
| 2025-09-09T20:40:35Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-09-09T20:16:10Z |
# GPT-OSS-20B CIF-LITE Fine-Tuned (LoRA Adapters)
This repo contains LoRA adapters fine-tuned with [Unsloth](https://github.com/unslothai/unsloth) for structured CIF-LITE block generation.
---
## Quick Start
Install dependencies:
```bash
pip install unsloth transformers peft accelerate bitsandbytes
```

### Direct Use

```python
# Straight plug-in to a Jupyter notebook
# install deps (if not already in your venv)
!pip install unsloth transformers peft accelerate bitsandbytes

import torch
from unsloth import FastLanguageModel
from peft import PeftModel

# 1) load the base model in 4-bit (same as training)
base, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/gpt-oss-20b",
    load_in_4bit=True,
    max_seq_length=896,  # match what you trained with
    dtype=None,
    full_finetuning=False,
)

# 2) attach the fine-tuned adapters from Hugging Face
model = PeftModel.from_pretrained(base, "mar5-a/gptoss20b-sft")
model.eval()

# 3) a quick test of the model's abilities
messages = [
    {"role": "system", "content": "You are a materials design assistant. Return only the required CIF-LITE block."},
    {"role": "user", "content": "Constraints: Allowed A-site ions: MA, Allowed B-site ions: Pb, Allowed X-site ions: I, Band gap window (eV): 1.5 - 1.7, Minimum stability (T80): 75 hours, Preferred dimension: 3D"},
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False, reasoning_effort="low")
inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=256, temperature=0.2, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```
|
sattari/phi-4-finetunned-event-arg
|
sattari
| 2025-09-09T20:39:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-09T20:16:59Z |
---
base_model: unsloth/Phi-4-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sattari
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Phi-4-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
WakandaAI/stt_rw_conformer_transducer_large
|
WakandaAI
| 2025-09-09T20:39:30Z | 0 | 0 |
nemo
|
[
"nemo",
"pytorch",
"NeMo",
"license:cc-by-4.0",
"region:us"
] | null | 2025-09-09T19:27:41Z |
---
library_name: nemo
license: cc-by-4.0
tags:
- pytorch
- NeMo
---
# Stt Rw Conformer Transducer Large
<style>
img {
display: inline;
}
</style>
[](#model-architecture)
| [](#model-architecture)
| [](#datasets)
**Put a short model description here.**
See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/index.html) for complete architecture details.
## NVIDIA NeMo: Training
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
## How to Use this Model
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
**NOTE**: Please update the model class below to match the class of the model being uploaded.
```python
from nemo.core import ModelPT
model = ModelPT.from_pretrained("WakandaAI/stt_rw_conformer_transducer_large")
```
### NOTE
Add some information about how to use the model here. An example is provided for ASR inference below.
### Transcribing using Python
First, let's get a sample
```
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```
model.transcribe(['2086-149220-0033.wav'])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py pretrained_name="WakandaAI/stt_rw_conformer_transducer_large" audio_dir=""
```
### Input
**Add some information about what are the inputs to this model**
### Output
**Add some information about what are the outputs of this model**
## Model Architecture
**Add information here discussing architectural details of the model or any comments to users about the model.**
## Training
**Add information here about how the model was trained. It should be as detailed as possible, potentially including the link to the script used to train as well as the base config used to train the model. If extraneous scripts are used to prepare the components of the model, please include them here.**
### NOTE
An example is provided below for ASR
The NeMo toolkit [3] was used for training the models for over several hundred epochs. These models are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_transducer/speech_to_text_rnnt_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/fastconformer/fast-conformer_transducer_bpe.yaml).
The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
### Datasets
**Try to provide as detailed a list of datasets as possible. If possible, provide links to the datasets on HF by adding it to the manifest section at the top of the README (marked by ---).**
### NOTE
An example for the manifest section is provided below for ASR datasets
datasets:
- librispeech_asr
- fisher_corpus
- Switchboard-1
- WSJ-0
- WSJ-1
- National-Singapore-Corpus-Part-1
- National-Singapore-Corpus-Part-6
- vctk
- voxpopuli
- europarl
- multilingual_librispeech
- mozilla-foundation/common_voice_8_0
- MLCommons/peoples_speech
The corresponding text in this section for those datasets is stated below -
The model was trained on 64K hours of English speech collected and prepared by NVIDIA NeMo and Suno teams.
The training dataset consists of private subset with 40K hours of English speech plus 24K hours from the following public datasets:
- Librispeech 960 hours of English speech
- Fisher Corpus
- Switchboard-1 Dataset
- WSJ-0 and WSJ-1
- National Speech Corpus (Part 1, Part 6)
- VCTK
- VoxPopuli (EN)
- Europarl-ASR (EN)
- Multilingual Librispeech (MLS EN) - 2,000 hour subset
- Mozilla Common Voice (v7.0)
- People's Speech - 12,000 hour subset
## Performance
**Add information here about the performance of the model. Discuss the metric being used to evaluate the model, and if there are external links explaining the custom metric, please link to them.**
### NOTE
An example is provided below for ASR metrics list that can be added to the top of the README
model-index:
- name: PUT_MODEL_NAME
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: AMI (Meetings test)
type: edinburghcstr/ami
config: ihm
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 17.10
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Earnings-22
type: revdotcom/earnings22
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 14.11
**Provide any caveats about the results presented at the top of the discussion so that nuance is not lost.
It should ideally be in a tabular format (you can use the following website to make your tables in markdown format: https://www.tablesgenerator.com/markdown_tables).**
## Limitations
**Discuss any practical limitations to the model when being used in real world cases. They can also be legal disclaimers, or discussion regarding the safety of the model (particularly in the case of LLMs).**
### Note
An example is provided below
Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
## License
License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license.
## References
**Provide appropriate references in the markdown link format below. Please order them numerically.**
[1] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
|
shanearora/2025-sep-a-base-model
|
shanearora
| 2025-09-09T20:39:25Z | 0 | 0 | null |
[
"safetensors",
"olmo3",
"license:apache-2.0",
"region:us"
] | null | 2025-09-09T20:22:59Z |
---
license: apache-2.0
---
|
ryguyitfg/blockassist-bc-fleecy_horned_sloth_1757450344
|
ryguyitfg
| 2025-09-09T20:39:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fleecy horned sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:39:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fleecy horned sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
adnahheinsennis/blockassist-bc-running_meek_caribou_1757450311
|
adnahheinsennis
| 2025-09-09T20:38:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"running meek caribou",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:38:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- running meek caribou
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
boonpertou/blockassist-bc-shiny_hardy_stork_1757450274
|
boonpertou
| 2025-09-09T20:38:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"shiny hardy stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:37:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- shiny hardy stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnerYubo/blockassist-bc-elusive_mammalian_termite_1757450266
|
AnerYubo
| 2025-09-09T20:37:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"elusive mammalian termite",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:37:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- elusive mammalian termite
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
muritesha/blockassist-bc-tropical_galloping_caterpillar_1757450257
|
muritesha
| 2025-09-09T20:37:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tropical galloping caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:37:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tropical galloping caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aquigpt/open0-2.5
|
aquigpt
| 2025-09-09T20:37:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ns",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2025-09-07T20:49:21Z |
---
license: mit
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ns
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
library_name: transformers
inference: false
base_model: qwen/Qwen2.5-32B
---
<style>
:root{
--bg: #0b0c0f;
--panel: #0f1117;
--ink: #e9eefc;
--muted: #9aa3b2;
--brand: #a54c87; /* pink/magenta */
--brand-2: #c65ba0; /* lighter pink accent */
--border: rgba(255,255,255,.08);
--glow: rgba(165,76,135,.25);
--radius: 16px;
}
*{ box-sizing: border-box }
body{ margin: 0; padding: 28px; background: var(--bg); color: var(--muted); font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif; }
.card{
background: linear-gradient(180deg,rgba(255,255,255,.02),rgba(255,255,255,.00));
border:1px solid var(--border);
border-radius: var(--radius);
padding:16px;
}
.badge{
display:inline-flex;align-items:center;gap:.5rem;
padding:.35rem .6rem;border:1px solid var(--border);border-radius:999px;
color:var(--muted);font-size:.85rem
}
.grid{ display:grid; gap:18px }
.grid-2{ grid-template-columns:repeat(2,minmax(0,1fr)); }
.grid-3{ grid-template-columns:repeat(3,minmax(0,1fr)); }
@media(max-width:900px){ .grid-2,.grid-3{ grid-template-columns:1fr } }
.kicker{
display:inline-block;letter-spacing:.12em;text-transform:uppercase;
color:var(--muted);font-size:.75rem;margin-bottom:.5rem
}
h1,h2,h3{ color:var(--ink); margin:0 0 .4rem 0; line-height:1.1 }
h1{ font-size:2.25rem; font-weight:800 }
h2{ font-size:1.3rem; font-weight:700 }
h3{ font-size:1.05rem; font-weight:700 }
p,li{ color:var(--muted); line-height:1.6 }
hr{ border:none; height:1px; background:var(--border); margin:28px 0 }
a.btn{
display:inline-block; padding:.7rem 1rem; border-radius:12px;
background: linear-gradient(180deg,var(--brand),#8a3f70);
color:var(--ink); text-decoration:none; font-weight:600;
box-shadow: 0 10px 30px var(--glow);
}
a.btn.ghost{
background:transparent; color:var(--ink); border:1px solid var(--border)
}
kbd{
background:#0c1322;color:#cfe0ff;border:1px solid #1a2742;border-bottom-color:#142138;
padding:.12rem .4rem;border-radius:6px;font-size:.85rem
}
.codeblock{
background:#0b1220;border:1px solid #15233d;border-radius:12px;padding: 8px;overflow:auto;
margin: 1rem 0;
}
.codeblock pre {
margin: 0;
color: var(--ink);
}
.tagline{
font-size:1.05rem;color:#c6d5ff
}
.pill{
display:inline-flex;align-items:center;gap:.4rem;
padding:.35rem .6rem;border-radius:999px;border:1px dashed var(--border);color:#b9c5db
}
.hero{
background:
radial-gradient(600px 240px at 20% 0%,rgba(165,76,135,.18),transparent 60%),
radial-gradient(600px 240px at 80% 10%,rgba(198,91,160,.12),transparent 60%);
border:1px solid var(--border);
border-radius:20px; padding:28px
}
details{
border:1px solid var(--border);border-radius:12px;padding:14px;background:rgba(255,255,255,.02)
}
summary{ cursor:pointer;color:var(--ink);font-weight:700 }
blockquote{
margin:0;padding:14px;border-left:3px solid var(--brand);background:rgba(165,76,135,.06);
border-radius:0 10px 10px 0;color:#e596c8
}
table{ width:100%; border-collapse:collapse; margin: 1rem 0; }
th,td{ text-align:left; padding:10px; border-bottom:1px solid var(--border); color:var(--muted); font-size: .9rem; }
th{ color:var(--brand-2); font-weight: 700; }
.callout{
border:1px solid var(--border);border-radius:14px;padding:14px;background:rgba(255,255,255,.02)
}
.metadata{
background: #0a0b0e; border: 1px solid var(--border); border-radius: 12px;
padding: 16px; margin-bottom: 24px; font-family: 'Monaco', 'Menlo', monospace;
font-size: .85rem; color: #8a91a3;
}
</style>
<div class="hero">
<div class="kicker">Quantization-Aware Model</div>
<h1>Aqui-open0-2.5</h1>
<p class="tagline">The first quantization-aware model from Aqui Solutions, built on Qwen2.5 architecture with extended thinking capabilities. Delivering exceptional performance with ultra-low VRAM usage through native 8-bit optimization.</p>
<div style="margin-top: 20px; display: flex; gap: 12px; flex-wrap: wrap;">
<div class="pill">🧠 Extended Thinking</div>
<div class="pill">⚡ 8-Bit Native</div>
<div class="pill">🔓 MIT Licensed</div>
<div class="pill">💾 Low VRAM</div>
</div>
</div>
<div class="card" style="margin-top: 28px;">
<h2>open0-2.5-32B</h2>
<p>Revolutionary quantization-aware model based on Qwen2.5-32B with extended thinking capabilities, optimized for 8-bit inference from the ground up.</p>
<div style="margin: 16px 0;">
<div class="badge">🧠 32B parameters</div>
<div class="badge">⚡ 8-bit quantized</div>
<div class="badge">💾 30.4 GiB VRAM</div>
<div class="badge">🎯 Extended thinking</div>
</div>
<a href="https://huggingface.co/aquigpt/open0-2.5" class="btn">View Model</a>
</div>
<div class="callout" style="margin: 28px 0;">
<h3>🚀 Breakthrough in Efficiency</h3>
<p><strong>First Quantization-Aware Model</strong> — Unlike traditional post-training quantization, our model was designed and trained with 8-bit precision in mind, delivering superior performance with dramatically reduced memory requirements.</p>
</div>
<hr>
<h2>Benchmark Performance</h2>
<p><em>All evaluations performed in 8-bit quantization for open0-2.5 and full precision for others.</em></p>
<table>
<thead>
<tr>
<th>Benchmark</th>
<th>Aqui-open0-2.5 32B</th>
<th>Qwen3 2507 235B</th>
<th>DeepSeek V3.1 Think 685B</th>
<th>GLM-4.5 358B</th>
<th>EXAONE 4.0 32B</th>
<th>KAT-V1-40B</th>
<th>Hermes 4 405B</th>
</tr>
</thead>
<tbody>
<tr><td>MMLU-Pro</td><td>84.1</td><td><strong>84.3</strong></td><td>85.1</td><td>83.5</td><td>81.8</td><td>78.9</td><td>80.5</td></tr>
<tr><td>GPQA Diamond</td><td><strong>78.2</strong></td><td>79.0</td><td>77.9</td><td>78.2</td><td>73.9</td><td>72.5</td><td>70.5</td></tr>
<tr><td>Humanity's Last Exam</td><td><strong>16.7</strong></td><td>15.0</td><td>13.0</td><td>12.2</td><td>10.5</td><td>7.8</td><td>9.7</td></tr>
<tr><td>LiveCodeBench</td><td>72.4</td><td><strong>78.8</strong></td><td>78.4</td><td>73.8</td><td>74.7</td><td>69.5</td><td>61.3</td></tr>
<tr><td>AIME 2025</td><td>86.9</td><td><strong>91.0</strong></td><td>89.7</td><td>73.7</td><td>80.0</td><td>81.5</td><td>78.1</td></tr>
<tr style="border-top: 2px solid var(--brand);"><td><strong>Artificial Analysis Intelligence Index</strong></td><td><strong>54.77</strong></td><td>57.47</td><td>53.95</td><td>49.44</td><td>42.64</td><td>43.67</td><td>41.57</td></tr>
</tbody>
</table>
<h3>VRAM Efficiency Comparison</h3>
<table>
<thead>
<tr>
<th>Model</th>
<th>VRAM Usage (GiB)</th>
<th>Parameters</th>
</tr>
</thead>
<tbody>
<tr><td><strong>Aqui-open0-2.5 32B</strong></td><td><strong>30.4</strong></td><td>32B</td></tr>
<tr><td>Qwen3 2507 235B</td><td>41.0</td><td>235B</td></tr>
<tr><td>DeepSeek V3.1 Think 685B</td><td>59.6</td><td>685B</td></tr>
<tr><td>GLM-4.5 358B</td><td>59.6</td><td>358B</td></tr>
<tr><td>EXAONE 4.0 32B</td><td>68.9</td><td>32B</td></tr>
<tr><td>KAT-V1-40B</td><td>74.5</td><td>40B</td></tr>
<tr><td>Hermes 4 405B</td><td>754.4</td><td>405B</td></tr>
</tbody>
</table>
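<p>The relationship between parameter count and VRAM footprint in the table above follows largely from inference precision. A rough back-of-envelope sketch (weights only — it ignores KV cache and activation overhead, which the measured numbers in the table include):</p>

```python
def est_weights_gib(params_billion: float, bits_per_weight: float) -> float:
    """Approximate VRAM needed for model weights alone, in GiB."""
    total_bits = params_billion * 1e9 * bits_per_weight
    return total_bits / 8 / 2**30

# 32B parameters at 8-bit precision -> ~29.8 GiB of weights,
# close to the 30.4 GiB measured for Aqui-open0-2.5 above.
print(round(est_weights_gib(32, 8), 1))

# The same 32B model in 16-bit would need roughly double the memory.
print(round(est_weights_gib(32, 16), 1))
```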
<hr>
<h2>Key Features</h2>
<div class="grid grid-2">
<div class="card">
<h3>🧠 Extended Thinking</h3>
<p>Built upon Qwen2.5 architecture with enhanced reasoning capabilities through extended thinking mechanisms.</p>
</div>
<div class="card">
<h3>⚡ Quantization-Aware Training</h3>
<p>First model from Aqui Solutions designed specifically for 8-bit inference, maintaining performance while drastically reducing memory usage.</p>
</div>
<div class="card">
<h3>💾 Ultra-Low VRAM</h3>
<p>Runs efficiently on consumer hardware with only 30.4 GiB VRAM requirement, making advanced AI accessible to more users.</p>
</div>
<div class="card">
<h3>🔓 MIT Licensed</h3>
<p>Complete freedom for commercial use, modification, and redistribution with minimal restrictions.</p>
</div>
</div>
<hr>
<h2>Usage</h2>
<div class="codeblock">
<pre>
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Load the model and tokenizer in 8-bit
tokenizer = AutoTokenizer.from_pretrained("aquigpt/open0-2.5")
model = AutoModelForCausalLM.from_pretrained(
"aquigpt/open0-2.5",
load_in_8bit=True,
device_map="auto"
)
# Generate text
inputs = tokenizer("Solve this complex reasoning problem:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
</pre>
</div>
<details>
<summary>Training Details</summary>
<p>The open0-2.5 model was built upon Qwen2.5-32B with significant enhancements:</p>
<ul>
<li>Extended thinking capabilities through architectural modifications</li>
<li>Quantization-aware training from initialization</li>
<li>Advanced fine-tuning on reasoning and mathematical datasets</li>
<li>Optimized for 8-bit inference without performance degradation</li>
<li>Constitutional AI alignment for safe and helpful responses</li>
</ul>
</details>
<blockquote>
<strong>Note:</strong> This model represents a breakthrough in efficient AI deployment. All benchmark results were obtained using 8-bit quantization, demonstrating the effectiveness of our quantization-aware training approach.
</blockquote>
<div style="text-align: center; margin-top: 40px; color: var(--muted);">
<p>Built with ❤️ by Aqui Solutions • MIT • September 2025</p>
</div>
|
AnerYubo/blockassist-bc-snappy_tenacious_eagle_1757450258
|
AnerYubo
| 2025-09-09T20:37:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"snappy tenacious eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:37:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- snappy tenacious eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
afsanakhatun76473/blockassist-bc-gentle_strong_cat_1757450232
|
afsanakhatun76473
| 2025-09-09T20:37:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle strong cat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:37:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle strong cat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/salamandra-2b-smol-smoltalk-sv-en-GGUF
|
mradermacher
| 2025-09-09T20:36:41Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"sft",
"en",
"base_model:liu-nlp/salamandra-2b-smol-smoltalk-sv-en",
"base_model:quantized:liu-nlp/salamandra-2b-smol-smoltalk-sv-en",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-09T20:08:26Z |
---
base_model: liu-nlp/salamandra-2b-smol-smoltalk-sv-en
language:
- en
library_name: transformers
model_name: salamandra-2b-smol-smoltalk-sv-en
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/liu-nlp/salamandra-2b-smol-smoltalk-sv-en
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#salamandra-2b-smol-smoltalk-sv-en-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/salamandra-2b-smol-smoltalk-sv-en-GGUF/resolve/main/salamandra-2b-smol-smoltalk-sv-en.Q2_K.gguf) | Q2_K | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/salamandra-2b-smol-smoltalk-sv-en-GGUF/resolve/main/salamandra-2b-smol-smoltalk-sv-en.Q3_K_S.gguf) | Q3_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/salamandra-2b-smol-smoltalk-sv-en-GGUF/resolve/main/salamandra-2b-smol-smoltalk-sv-en.Q3_K_M.gguf) | Q3_K_M | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/salamandra-2b-smol-smoltalk-sv-en-GGUF/resolve/main/salamandra-2b-smol-smoltalk-sv-en.Q3_K_L.gguf) | Q3_K_L | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/salamandra-2b-smol-smoltalk-sv-en-GGUF/resolve/main/salamandra-2b-smol-smoltalk-sv-en.IQ4_XS.gguf) | IQ4_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/salamandra-2b-smol-smoltalk-sv-en-GGUF/resolve/main/salamandra-2b-smol-smoltalk-sv-en.Q4_K_S.gguf) | Q4_K_S | 1.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/salamandra-2b-smol-smoltalk-sv-en-GGUF/resolve/main/salamandra-2b-smol-smoltalk-sv-en.Q4_K_M.gguf) | Q4_K_M | 1.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/salamandra-2b-smol-smoltalk-sv-en-GGUF/resolve/main/salamandra-2b-smol-smoltalk-sv-en.Q5_K_S.gguf) | Q5_K_S | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/salamandra-2b-smol-smoltalk-sv-en-GGUF/resolve/main/salamandra-2b-smol-smoltalk-sv-en.Q5_K_M.gguf) | Q5_K_M | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/salamandra-2b-smol-smoltalk-sv-en-GGUF/resolve/main/salamandra-2b-smol-smoltalk-sv-en.Q6_K.gguf) | Q6_K | 2.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/salamandra-2b-smol-smoltalk-sv-en-GGUF/resolve/main/salamandra-2b-smol-smoltalk-sv-en.Q8_0.gguf) | Q8_0 | 2.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/salamandra-2b-smol-smoltalk-sv-en-GGUF/resolve/main/salamandra-2b-smol-smoltalk-sv-en.f16.gguf) | f16 | 4.6 | 16 bpw, overkill |
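The size column can be converted into an effective bits-per-weight figure, which is handy when comparing quant types across models. A minimal sketch — the parameter count here is an assumption inferred from the "2b" in the model name, and the table sizes are rounded and include GGUF metadata overhead:

```python
def bits_per_weight(file_size_gb: float, n_params: float) -> float:
    """Effective bits per weight for a quantized file (decimal GB)."""
    return file_size_gb * 1e9 * 8 / n_params

n_params = 2.25e9  # assumed; "2b"-class models often sit slightly above 2e9
for name, size_gb in [("Q2_K", 1.2), ("Q4_K_M", 1.6), ("Q8_0", 2.5), ("f16", 4.6)]:
    print(f"{name}: ~{bits_per_weight(size_gb, n_params):.1f} bpw")
```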
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
tjsvdicfaslism/blockassist-bc-keen_bellowing_crocodile_1757450081
|
tjsvdicfaslism
| 2025-09-09T20:34:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen bellowing crocodile",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:34:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen bellowing crocodile
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gojhedgepethcritesrhhn/blockassist-bc-darting_hulking_grouse_1757450061
|
gojhedgepethcritesrhhn
| 2025-09-09T20:34:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"darting hulking grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:34:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- darting hulking grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
boelkeguadalupe/blockassist-bc-lumbering_striped_caribou_1757450035
|
boelkeguadalupe
| 2025-09-09T20:34:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering striped caribou",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:34:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering striped caribou
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aronlg/blockassist-bc-wiry_insectivorous_bat_1757449827
|
aronlg
| 2025-09-09T20:31:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry insectivorous bat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:31:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry insectivorous bat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hoggcatharine/blockassist-bc-sleek_shy_moose_1757449834
|
hoggcatharine
| 2025-09-09T20:30:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sleek shy moose",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:30:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sleek shy moose
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
squirreln/q_lora_korqa_
|
squirreln
| 2025-09-09T20:30:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-09T20:30:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
luiskodraje/blockassist-bc-climbing_quick_reindeer_1757449687
|
luiskodraje
| 2025-09-09T20:29:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"climbing quick reindeer",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:29:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- climbing quick reindeer
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1757447898
|
hakimjustbao
| 2025-09-09T20:29:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:29:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ermiragollifg71/blockassist-bc-squeaky_beaked_moose_1757449648
|
ermiragollifg71
| 2025-09-09T20:27:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"squeaky beaked moose",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:27:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- squeaky beaked moose
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
moyixiao/Qwen3-0.6B-bnpo3-f16-200
|
moyixiao
| 2025-09-09T20:26:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-09T20:26:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jdevasier/phi4-fsp
|
jdevasier
| 2025-09-09T20:26:32Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/phi-4-unsloth-bnb-4bit",
"base_model:adapter:unsloth/phi-4-unsloth-bnb-4bit",
"region:us"
] | null | 2025-09-09T20:16:25Z |
---
base_model: unsloth/phi-4-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
Frame-semantic parsing model using Phi-4. (WIP)
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Jacob Devasier
- **Model type:** Phi-4
- **Language(s) (NLP):** English
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
HarryStot/ppo-Huggy
|
HarryStot
| 2025-09-09T20:26:28Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2025-09-09T20:26:16Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: HarryStot/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
gauravpradeep/sep7_bottle_diffusion
|
gauravpradeep
| 2025-09-09T20:25:51Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"diffusion",
"dataset:gauravpradeep/bottle_square_sept7_lerobot_diffusion",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-09T20:25:07Z |
---
datasets: gauravpradeep/bottle_square_sept7_lerobot_diffusion
library_name: lerobot
license: apache-2.0
model_name: diffusion
pipeline_tag: robotics
tags:
- robotics
- diffusion
- lerobot
---
# Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=diffusion \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
*Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.*
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
* **License:** apache-2.0
|
burgbobby/blockassist-bc-lithe_wild_boar_1757449510
|
burgbobby
| 2025-09-09T20:25:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lithe wild boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:25:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lithe wild boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aleebaster/blockassist-bc-sly_eager_boar_1757448081
|
aleebaster
| 2025-09-09T20:25:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly eager boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:24:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
r74760029/blockassist-bc-tiny_crested_baboon_1757449463
|
r74760029
| 2025-09-09T20:24:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tiny crested baboon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:24:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tiny crested baboon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
siouxluriekaile/blockassist-bc-deadly_peckish_hare_1757449429
|
siouxluriekaile
| 2025-09-09T20:24:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly peckish hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:24:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly peckish hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
seams01/blockassist-bc-insectivorous_stubby_snake_1757447811
|
seams01
| 2025-09-09T20:23:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous stubby snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:23:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous stubby snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Maxlegrec/ChessBot
|
Maxlegrec
| 2025-09-09T20:23:55Z | 9 | 4 |
transformers
|
[
"transformers",
"safetensors",
"chessbot",
"feature-extraction",
"chess",
"game-ai",
"pytorch",
"custom_code",
"dataset:Maxlegrec/ChessFENS",
"license:mit",
"region:us"
] |
feature-extraction
| 2025-07-04T21:02:55Z |
---
license: mit
tags:
- chess
- game-ai
- pytorch
- safetensors
library_name: transformers
datasets:
- Maxlegrec/ChessFENS
---
# ChessBot Chess Model
This is a ChessBot model for chess move prediction and position evaluation. It is far weaker than Stockfish, but stronger than most human players.
For stronger play, reducing the temperature T is suggested (lower is stronger).
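Temperature rescaling can be sketched in plain Python. The function below illustrates the usual softmax-with-temperature idiom on a move-probability dictionary; it is an illustration only, and whether the model applies temperature exactly this way internally is not specified by this card:

```python
import math

def apply_temperature(probs, T):
    """Rescale a move-probability distribution with temperature T.

    T < 1 sharpens the distribution (more deterministic, stronger play);
    T = 0 collapses all mass onto the single best move."""
    if T == 0:
        best = max(probs, key=probs.get)
        return {m: (1.0 if m == best else 0.0) for m in probs}
    logits = {m: math.log(p) / T for m, p in probs.items() if p > 0}
    zmax = max(logits.values())
    exps = {m: math.exp(v - zmax) for m, v in logits.items()}  # numerically stable softmax
    total = sum(exps.values())
    return {m: e / total for m, e in exps.items()}

probs = {"e2e4": 0.60, "d2d4": 0.30, "g1f3": 0.10}
sharp = apply_temperature(probs, T=0.5)
print(round(sharp["e2e4"], 2))  # e2e4 rises from 0.60 to ~0.78
```

At T=0.5 each probability is effectively squared and renormalized, which is why the top move's share grows.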
## Model Description
The ChessBot model is a transformer-based architecture designed for chess gameplay. It can:
- Predict the next best move given a chess position (FEN)
- Evaluate chess positions
- Generate move probabilities
## Please Like if this model is useful to you :)
A like goes a long way!
## Usage
```python
import torch
from transformers import AutoModel
model = AutoModel.from_pretrained("Maxlegrec/ChessBot", trust_remote_code=True)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
# Example usage
fen = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
# Sample move from policy
move = model.get_move_from_fen_no_thinking(fen, T=0.1, device=device)
print(f"Policy-based move: {move}")
#e2e4
# Get the best move using value analysis
value_move = model.get_best_move_value(fen, T=0, device=device)
print(f"Value-based move: {value_move}")
#e2e4
# Get position evaluation
position_value = model.get_position_value(fen, device=device)
print(f"Position value [black_win, draw, white_win]: {position_value}")
#[0.2318, 0.4618, 0.3064]
# Get move probabilities
probs = model.get_move_from_fen_no_thinking(fen, T=1, device=device, return_probs=True)
top_moves = sorted(probs.items(), key=lambda x: x[1], reverse=True)[:5]
print("Top 5 moves:")
for move, prob in top_moves:
print(f" {move}: {prob:.4f}")
#Top 5 moves:
# e2e4: 0.9285
# d2d4: 0.0712
# g1f3: 0.0001
# e2e3: 0.0000
# c2c3: 0.0000
```
## Requirements
- torch>=2.0.0
- transformers>=4.48.1
- python-chess>=1.10.0
- numpy>=1.21.0
## Model Architecture
The architecture is strongly inspired by the LCzero project, although it is written in PyTorch.
- **Transformer layers**: 10
- **Hidden size**: 512
- **Feed-forward size**: 736
- **Attention heads**: 8
- **Vocabulary size**: 1929 (chess moves)
## Training Data
This model was trained on data from the LCzero project, consisting of around 750M chess positions. I will publish the training dataset very soon.
## Limitations
- The model works best with standard chess positions
- Performance may vary with unusual or rare positions
- Requires GPU for optimal inference speed
|
luckycanucky/llama3-3B-toxic-hui
|
luckycanucky
| 2025-09-09T20:22:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:huihui-ai/Llama-3.2-3B-Instruct-abliterated",
"base_model:finetune:huihui-ai/Llama-3.2-3B-Instruct-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-09T20:18:46Z |
---
base_model: huihui-ai/Llama-3.2-3B-Instruct-abliterated
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** luckycanucky
- **License:** apache-2.0
- **Finetuned from model :** huihui-ai/Llama-3.2-3B-Instruct-abliterated
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
garriottmira/blockassist-bc-bipedal_tawny_newt_1757449363
|
garriottmira
| 2025-09-09T20:22:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bipedal tawny newt",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:22:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bipedal tawny newt
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vendi11/blockassist-bc-placid_placid_llama_1757449320
|
vendi11
| 2025-09-09T20:22:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:22:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cwayneconnor/blockassist-bc-mute_loud_lynx_1757449205
|
cwayneconnor
| 2025-09-09T20:22:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute loud lynx",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:21:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute loud lynx
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
brandescarpello553/blockassist-bc-shiny_graceful_lion_1757449312
|
brandescarpello553
| 2025-09-09T20:21:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"shiny graceful lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:21:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- shiny graceful lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
alesandrkodrabe/blockassist-bc-patterned_scruffy_rat_1757449284
|
alesandrkodrabe
| 2025-09-09T20:21:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"patterned scruffy rat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:21:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- patterned scruffy rat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
leeooo001/Hunyuan-PromptEnhancer-INT8
|
leeooo001
| 2025-09-09T20:21:28Z | 0 | 0 | null |
[
"safetensors",
"hunyuan_v1_dense",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-09T19:47:57Z |
* My INT8 model for HunYuan PromptEnhancer for comfyui
* https://github.com/leeooo001/comfyui-Hunyuan-PromptEnhancer
* https://github.com/Hunyuan-PromptEnhancer/PromptEnhancer
* https://huggingface.co/tencent/HunyuanImage-2.1/tree/main/reprompt
---
license: apache-2.0
---
|
acidjp/blockassist-bc-humming_rugged_viper_1757447314
|
acidjp
| 2025-09-09T20:21:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"humming rugged viper",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:21:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- humming rugged viper
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
slatinlatrina/blockassist-bc-mammalian_sneaky_prawn_1757449257
|
slatinlatrina
| 2025-09-09T20:21:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mammalian sneaky prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:21:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mammalian sneaky prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
baseandelsacul/blockassist-bc-sniffing_scampering_camel_1757449234
|
baseandelsacul
| 2025-09-09T20:20:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sniffing scampering camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:20:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sniffing scampering camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sensmeierbrenton/blockassist-bc-silky_solitary_boar_1757449199
|
sensmeierbrenton
| 2025-09-09T20:20:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky solitary boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:20:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky solitary boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Nick976786/Qwen3-0.6B-Gensyn-Swarm-monstrous_bristly_gibbon
|
Nick976786
| 2025-09-09T20:18:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am monstrous_bristly_gibbon",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-09T19:03:09Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am monstrous_bristly_gibbon
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
negersdrahimi/blockassist-bc-dense_squeaky_iguana_1757449112
|
negersdrahimi
| 2025-09-09T20:18:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dense squeaky iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:18:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dense squeaky iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ryguyitfg/blockassist-bc-fleecy_horned_sloth_1757449085
|
ryguyitfg
| 2025-09-09T20:18:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fleecy horned sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:18:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fleecy horned sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nessaislebobbi/blockassist-bc-hairy_burrowing_crow_1757449057
|
nessaislebobbi
| 2025-09-09T20:17:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy burrowing crow",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:17:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy burrowing crow
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
martin2012/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-zealous_winged_locust
|
martin2012
| 2025-09-09T20:16:27Z | 158 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am zealous_winged_locust",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T12:03:37Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am zealous_winged_locust
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
costiganreanna/blockassist-bc-marine_muscular_puma_1757448864
|
costiganreanna
| 2025-09-09T20:15:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"marine muscular puma",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:15:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- marine muscular puma
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tomal66/gemma3-1b-fpt-sft-blp1b
|
tomal66
| 2025-09-09T20:13:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-09T13:23:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
boonpertou/blockassist-bc-durable_marine_bee_1757448767
|
boonpertou
| 2025-09-09T20:13:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"durable marine bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:12:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- durable marine bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tere359/ppo-LunarLander-v2
|
tere359
| 2025-09-09T20:13:00Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-09T20:05:11Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v3
type: LunarLander-v3
metrics:
- type: mean_reward
value: 265.16 +/- 19.13
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v3**
This is a trained model of a **PPO** agent playing **LunarLander-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch for loading the checkpoint (the zip filename inside the repo is an assumption; check the repo's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it into a PPO agent.
checkpoint = load_from_hub("tere359/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
pytorch/Qwen3-32B-FP8
|
pytorch
| 2025-09-09T20:11:46Z | 70 | 0 |
transformers
|
[
"transformers",
"pytorch",
"qwen3",
"text-generation",
"torchao",
"code",
"math",
"chat",
"conversational",
"multilingual",
"arxiv:2507.16099",
"base_model:Qwen/Qwen3-32B",
"base_model:quantized:Qwen/Qwen3-32B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-07T21:45:44Z |
---
library_name: transformers
tags:
- torchao
- code
- math
- chat
license: apache-2.0
language:
- multilingual
base_model:
- Qwen/Qwen3-32B
pipeline_tag: text-generation
---
[Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B) model quantized with [torchao](https://huggingface.co/docs/transformers/main/en/quantization/torchao) float8 dynamic activation and float8 weight quantization (per-row granularity), by the PyTorch team. Use it directly, or serve it with [vLLM](https://docs.vllm.ai/en/latest/) for a 47% VRAM reduction (34.54 GB needed), around 1.7x speedup, and little to no accuracy impact on H100.
# Inference with vLLM
```Shell
# Server
VLLM_DISABLE_COMPILE_CACHE=1 vllm serve pytorch/Qwen3-32B-FP8 --tokenizer Qwen/Qwen3-32B -O3
```
```Shell
# Client
curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "pytorch/Qwen3-32B-FP8",
"messages": [
{"role": "user", "content": "Give me a short introduction to large language models."}
],
"temperature": 0.6,
"top_p": 0.95,
"top_k": 20,
"max_tokens": 32768
}'
```
# Inference with transformers
```Py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "pytorch/Qwen3-32B-FP8"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
# Quantization Recipe
Install the required packages:
```Shell
pip install git+https://github.com/huggingface/transformers@main
pip install torchao
pip install torch
pip install accelerate
```
Use the following code to produce the float8 model with the torchao library:
```Py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig
model_id = "Qwen/Qwen3-32B"
from torchao.quantization import Float8DynamicActivationFloat8WeightConfig, PerRow
quant_config = Float8DynamicActivationFloat8WeightConfig(granularity=PerRow())
quantization_config = TorchAoConfig(quant_type=quant_config)
quantized_model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
torch_dtype=torch.bfloat16,
quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
Optionally, upload to your HF hub
```Py
USER_ID = "YOUR_USER_ID"
MODEL_NAME = model_id.split("/")[-1]
save_to = f"{USER_ID}/{MODEL_NAME}-FP8"
quantized_model.push_to_hub(save_to, safe_serialization=False)
tokenizer.push_to_hub(save_to)
```
# Model Quality
We rely on [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate the quality of the quantized model.
| Benchmark | Qwen3-32B | Qwen3-32B-FP8 |
|----------------------------------|----------------|---------------------------|
| **General** | | |
| mmlu | 80.71 | 80.67 |
| bbh | 37.49 | 38.01 |
| **Multilingual** | | |
| mgsm_en_cot_es | 58.4 | 52.0 |
| **Math** | | |
| gpqa_main_zeroshot | 41.96 | 42.63 |
| **Overall** | 54.64 | 53.33 |
<details>
<summary> Reproduce Model Quality Results </summary>
Need to install lm-eval from source:
https://github.com/EleutherAI/lm-evaluation-harness#install
## baseline
```Shell
lm_eval --model hf --model_args pretrained=Qwen/Qwen3-32B --tasks mmlu --device cuda:0 --batch_size 8
```
## float8 dynamic quantization (FP8)
```Shell
export MODEL=pytorch/Qwen3-32B-FP8
# or
# export MODEL=Qwen/Qwen3-32B
lm_eval --model hf --model_args pretrained=$MODEL --tasks mmlu --device cuda:0 --batch_size 8
```
</details>
# Memory Usage
| Memory (tested on H100) | Qwen3-32B | Qwen3-32B-FP8 |
|----------------------------------|----------------|-------------------------------|
| Peak Memory | 65.63 GB | 34.71 GB (47.1% reduction) |
<details>
<summary> Reproduce Peak Memory Usage Results </summary>
Code
```Py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-32B" # pytorch/Qwen3-32B-FP8
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
torch.cuda.reset_peak_memory_stats()
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
mem = torch.cuda.max_memory_reserved() / 1e9
print(f"Peak Memory Usage: {mem:.02f} GB")
```
</details>
# Model Performance
| Benchmark (Tested on H100) | Qwen3-32B | Qwen3-32B-FP8 |
|----------------------------------|----------------|-------------------------------|
| latency (batch_size=1) | 8.93s | 5.16s (1.73x speedup) |
| latency (batch_size=256) | 33.85s | 16.15s (2.10x speedup) |
<details>
<summary> Reproduce latency benchmarks </summary>
**1. Setup**
```Shell
git clone git@github.com:vllm-project/vllm.git
cd vllm
VLLM_USE_PRECOMPILED=1 pip install --editable .
```
**2. Latency benchmarking**
```Shell
export MODEL=Qwen/Qwen3-32B # or pytorch/Qwen3-32B-FP8
VLLM_DISABLE_COMPILE_CACHE=1 python benchmarks/benchmark_latency.py --input-len 256 --output-len 256 --model $MODEL --batch-size 1
```
</details>
# Paper: TorchAO: PyTorch-Native Training-to-Serving Model Optimization
The model's quantization is powered by **TorchAO**, a framework presented in the paper [TorchAO: PyTorch-Native Training-to-Serving Model Optimization](https://huggingface.co/papers/2507.16099).
**Abstract:** We present TorchAO, a PyTorch-native model optimization framework leveraging quantization and sparsity to provide an end-to-end, training-to-serving workflow for AI models. TorchAO supports a variety of popular model optimization techniques, including FP8 quantized training, quantization-aware training (QAT), post-training quantization (PTQ), and 2:4 sparsity, and leverages a novel tensor subclass abstraction to represent a variety of widely-used, backend agnostic low precision data types, including INT4, INT8, FP8, MXFP4, MXFP6, and MXFP8. TorchAO integrates closely with the broader ecosystem at each step of the model optimization pipeline, from pre-training (TorchTitan) to fine-tuning (TorchTune, Axolotl) to serving (HuggingFace, vLLM, SGLang, ExecuTorch), connecting an otherwise fragmented space in a single, unified workflow. TorchAO has enabled recent launches of the quantized Llama 3.2 1B/3B and LlamaGuard3-8B models and is open-source at [https://github.com/pytorch/ao](https://github.com/pytorch/ao).
# Resources
* **Official TorchAO GitHub Repository:** [https://github.com/pytorch/ao](https://github.com/pytorch/ao)
* **TorchAO Documentation:** [https://docs.pytorch.org/ao/stable/index.html](https://docs.pytorch.org/ao/stable/index.html)
# Disclaimer
PyTorch has not performed safety evaluations or red teamed the quantized models. Performance characteristics, outputs, and behaviors may differ from the original models. Users are solely responsible for selecting appropriate use cases, evaluating and mitigating for accuracy, safety, and fairness, ensuring security, and complying with all applicable laws and regulations.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the licenses the models are released under, including any limitations of liability or disclaimers of warranties provided therein.
|
andidedjag513/blockassist-bc-monstrous_subtle_kingfisher_1757448556
|
andidedjag513
| 2025-09-09T20:09:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"monstrous subtle kingfisher",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:09:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- monstrous subtle kingfisher
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vdbvsbgd/blockassist-bc-carnivorous_curious_crocodile_1757448521
|
vdbvsbgd
| 2025-09-09T20:08:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"carnivorous curious crocodile",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:08:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- carnivorous curious crocodile
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
albeeosmanelita/blockassist-bc-scurrying_slow_fox_1757448495
|
albeeosmanelita
| 2025-09-09T20:08:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scurrying slow fox",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:08:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scurrying slow fox
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bunchcissyniota/blockassist-bc-diving_lightfooted_clam_1757448468
|
bunchcissyniota
| 2025-09-09T20:07:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"diving lightfooted clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:07:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving lightfooted clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
anaruio/mms-azb-discriminator
|
anaruio
| 2025-09-09T20:07:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vits",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-09T20:07:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yaelahnal/blockassist
|
yaelahnal
| 2025-09-09T20:07:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T19:47:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rodriquezb087/blockassist-bc-dormant_pensive_cat_1757448413
|
rodriquezb087
| 2025-09-09T20:07:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant pensive cat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:06:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant pensive cat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1757446875
|
vwzyrraz7l
| 2025-09-09T20:06:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:06:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
xnftraff/blockassist
|
xnftraff
| 2025-09-09T20:05:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sprightly freckled deer",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:05:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sprightly freckled deer
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
strangepilot6792/blockassist-bc-curious_peaceful_eel_1757448318
|
strangepilot6792
| 2025-09-09T20:05:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"curious peaceful eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:05:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- curious peaceful eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zamilaoela/blockassist-bc-singing_leaping_vulture_1757448297
|
zamilaoela
| 2025-09-09T20:05:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing leaping vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:05:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing leaping vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kenpath/telugu_qwen3-4b-instruct-2507_v0.01
|
kenpath
| 2025-09-09T20:04:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-4B-Instruct-2507",
"base_model:finetune:unsloth/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-09T19:45:56Z |
---
base_model: unsloth/Qwen3-4B-Instruct-2507
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** kenpath
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B-Instruct-2507
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Viktor-01/blockassist-bc-leaping_humming_finch_1757445655
|
Viktor-01
| 2025-09-09T20:04:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"leaping humming finch",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:04:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- leaping humming finch
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cebbbopwq/blockassist-bc-large_sizable_donkey_1757448206
|
cebbbopwq
| 2025-09-09T20:03:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"large sizable donkey",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:03:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- large sizable donkey
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
agentlans/granite-3.3-2b-refiner
|
agentlans
| 2025-09-09T20:03:47Z | 5 | 0 | null |
[
"safetensors",
"granite",
"editing",
"revision",
"proofreading",
"essay",
"writing",
"academic",
"en",
"dataset:agentlans/high-quality-text-refinement",
"base_model:ibm-granite/granite-3.3-2b-instruct",
"base_model:finetune:ibm-granite/granite-3.3-2b-instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-09-08T13:14:19Z |
---
license: apache-2.0
datasets:
- agentlans/high-quality-text-refinement
language:
- en
base_model:
- ibm-granite/granite-3.3-2b-instruct
tags:
- editing
- revision
- proofreading
- essay
- writing
- academic
---
# Granite 3.3 2B Text Refiner
Granite 3.3 2B improves writing by reorganizing ideas logically and removing unnecessary words and phrases. It produces clearer, more concise, and easier-to-understand text with greater impact.
## How to Use
Provide any English non-fiction text with a prompt. The prompt format is flexible and doesn't require the exact same wording.
```
Write clearly and coherently:
[TEXT]
```
The model outputs the revised text in XML format:
```xml
<output>[REVISED TEXT]</output>
```
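Since the prompt format is free-form and the model wraps its revision in an `<output>` tag, a small helper pair like the following (an illustrative sketch, not official tooling from this card — the function names are hypothetical) can build the prompt and recover the revised text:

```python
import re

def build_prompt(text: str) -> str:
    """Prepend the refinement instruction suggested by this card."""
    return "Write clearly and coherently:\n" + text

def extract_refined(raw: str) -> str:
    """Pull the revised text out of the model's <output>...</output> wrapper;
    fall back to the raw string if the tag is missing."""
    m = re.search(r"<output>(.*?)</output>", raw, flags=re.DOTALL)
    return m.group(1).strip() if m else raw.strip()

print(build_prompt("Some verbose draft."))
print(extract_refined("<output>Clear, concise text.</output>"))  # → Clear, concise text.
```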
<details>
<summary>Click here for example</summary>
**Input**
Write more clearly and concisely:
In today's increasingly complex, multifaceted, and interconnected modern society of the twenty-first century, the absolutely fundamental and critically vital importance of comprehensive education in all its various forms and applications simply cannot and must not be overstated or underestimated in any way, shape, or form, as it truly and genuinely serves as the essential cornerstone, the basic foundational bedrock, and the primary underlying framework upon which each and every successful civilization throughout the entirety of recorded human history has been carefully constructed, methodically built, and systematically established over the course of countless generations. Education, in its many diverse, varied, and multitudinous forms and manifestations that exist across different cultures, societies, and geographical regions around the world, has consistently and continuously played an absolutely crucial, indispensable, and irreplaceable role in actively shaping, molding, and influencing not merely individual minds, thoughts, and personalities on a personal level, but also entire communities, neighborhoods, nations, countries, and global societies as a complete and comprehensive whole unit. From the very earliest days and initial beginnings of ancient human civilization and societal development to the present moment in time that we currently find ourselves living in today, the ongoing pursuit of knowledge, wisdom, learning, and intellectual growth has remained a constant, consistent, unwavering, and perpetual force that continuously drives, propels, and motivates progress, innovation, advancement, and forward movement in virtually every single conceivable field, area, and domain of human endeavor, achievement, and accomplishment that one could possibly imagine or think of in their wildest dreams.
**Output**
`<output>In today's complex and interconnected world, the fundamental importance of education cannot be overstated. It serves as the essential foundation upon which successful civilizations have been built throughout history. Education, in its diverse forms, consistently shapes individuals, communities, and nations, influencing personal development and broader societal structures. Throughout human history, the pursuit of knowledge has driven progress, innovation, and advancement across all fields of human endeavor.</output>`
</details>
## Notes and Limitations
- Optimized for English non-fiction writing.
- Review outputs to confirm all key information and style are preserved.
- Best with moderate-length texts; very short or very long inputs may reduce effectiveness.
- Minimal changes for texts that are already concise, such as scientific papers and news articles.
## Training Hyperparameters
<details>
<summary>Click here</summary>
Pretraining and supervised finetuning (SFT) on the [agentlans/high-quality-text-refinement](https://huggingface.co/datasets/agentlans/high-quality-text-refinement) dataset.
- Epochs: 3.0
- Learning rate: 5e-5
- Cutoff length: 2048 tokens
- Batch size: 2
- NEFTune alpha: 5
- Pack sequences: on
- Use rslora
- Liger kernel
Pretraining LoRA:
- rank 8
- alpha 16
- dropout 0.75
SFT LoRA:
- rank 16
- alpha 32
- dropout 0.5
</details>
## Licence
Apache 2.0
|
boonpertou/blockassist-bc-downy_thorny_pheasant_1757448173
|
boonpertou
| 2025-09-09T20:03:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"downy thorny pheasant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:02:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- downy thorny pheasant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
amblehamilmaude/blockassist-bc-hardy_wild_porcupine_1757448174
|
amblehamilmaude
| 2025-09-09T20:03:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hardy wild porcupine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:02:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hardy wild porcupine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jerryzh168/Phi-4-mini-instruct-INT4
|
jerryzh168
| 2025-09-09T20:02:59Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"torchao",
"region:us"
] |
text-generation
| 2025-09-09T20:02:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cebbbopwq/blockassist-bc-yapping_shy_macaque_1757448145
|
cebbbopwq
| 2025-09-09T20:02:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping shy macaque",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:02:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping shy macaque
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fopoper/blockassist-bc-agile_reclusive_walrus_1757448082
|
fopoper
| 2025-09-09T20:01:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"agile reclusive walrus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:01:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- agile reclusive walrus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rosgar/gemma-3-12b-pt-adapters-ftf-text2sql
|
rosgar
| 2025-09-09T20:01:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"base_model:unsloth/gemma-3-12b-pt-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-12b-pt-unsloth-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-09-08T14:43:09Z |
---
base_model: unsloth/gemma-3-12b-pt-unsloth-bnb-4bit
library_name: transformers
model_name: gemma-3-12b-pt-adapters-ftf-text2sql
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for gemma-3-12b-pt-adapters-ftf-text2sql
This model is a fine-tuned version of [unsloth/gemma-3-12b-pt-unsloth-bnb-4bit](https://huggingface.co/unsloth/gemma-3-12b-pt-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rosgar/gemma-3-12b-pt-adapters-ftf-text2sql", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
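For reference, an SFT pass like this typically consumes chat-style records. The card does not document the prompt template or dataset fields that were actually used, so the record layout below is a hypothetical sketch of how a text-to-SQL example might be assembled for TRL's `SFTTrainer` (the `format_text2sql_example` helper and its field names are assumptions, not the author's code):

```python
# Hypothetical sketch: building one chat-style SFT record for a text-to-SQL task.
# The real template used for gemma-3-12b-pt-adapters-ftf-text2sql is undocumented;
# the wording and field names here are assumptions for illustration only.

def format_text2sql_example(schema: str, question: str, sql: str) -> dict:
    """Return a record in the "messages" format that TRL's SFTTrainer accepts."""
    prompt = (
        "Given the following database schema, write a SQL query that answers "
        f"the question.\n\nSchema:\n{schema}\n\nQuestion: {question}"
    )
    return {
        "messages": [
            {"role": "user", "content": prompt},       # input side of the pair
            {"role": "assistant", "content": sql},     # target SQL completion
        ]
    }

example = format_text2sql_example(
    schema="CREATE TABLE users (id INT, name TEXT);",
    question="How many users are there?",
    sql="SELECT COUNT(*) FROM users;",
)
```

A dataset of such records can be passed straight to `SFTTrainer`, which applies the model's chat template and masks the prompt tokens as configured.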
### Framework versions
- TRL: 0.22.2
- Transformers: 4.56.1
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
zaragozadarrick/blockassist-bc-beaked_gliding_toucan_1757448035
|
zaragozadarrick
| 2025-09-09T20:00:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked gliding toucan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:00:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked gliding toucan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
felixZzz/student_32b_len16k_custom_0908
|
felixZzz
| 2025-09-09T20:00:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-09T19:38:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aronlg/blockassist-bc-wiry_insectivorous_bat_1757447971
|
aronlg
| 2025-09-09T20:00:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry insectivorous bat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T20:00:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry insectivorous bat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NahedDom/blockassist-bc-flapping_stocky_leopard_1757445807
|
NahedDom
| 2025-09-09T19:59:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flapping stocky leopard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T19:59:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping stocky leopard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vendi11/blockassist-bc-placid_placid_llama_1757447888
|
vendi11
| 2025-09-09T19:58:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T19:58:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
boonpertou/blockassist-bc-downy_tawny_hippo_1757447869
|
boonpertou
| 2025-09-09T19:58:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"downy tawny hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T19:57:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- downy tawny hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cwayneconnor/blockassist-bc-mute_loud_lynx_1757447710
|
cwayneconnor
| 2025-09-09T19:57:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute loud lynx",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T19:56:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute loud lynx
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ChandrilBasu/kesar
|
ChandrilBasu
| 2025-09-09T19:56:17Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-09-09T19:56:01Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/KuF5F0ObtfCivDQumO3Bx.jpeg
text: '-'
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: kesar
---
# kesar
<Gallery />
## Trigger words
You should use `kesar` to trigger the image generation.
## Download model
[Download](/ChandrilBasu/kesar/tree/main) them in the Files & versions tab.
|
crabtreeftf/blockassist-bc-darting_mighty_panther_1757447733
|
crabtreeftf
| 2025-09-09T19:55:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"darting mighty panther",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T19:55:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- darting mighty panther
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
chittickisaias/blockassist-bc-fishy_meek_baboon_1757447657
|
chittickisaias
| 2025-09-09T19:54:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy meek baboon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T19:54:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy meek baboon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cakir25/Portfolio-Former-v1
|
cakir25
| 2025-09-09T19:53:51Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] |
text-generation
| 2025-09-09T19:49:22Z |
---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0
- lora
- sft
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
meekinsvyglkcedenoxyn/blockassist-bc-nocturnal_sneaky_porpoise_1757447606
|
meekinsvyglkcedenoxyn
| 2025-09-09T19:53:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"nocturnal sneaky porpoise",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T19:53:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- nocturnal sneaky porpoise
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jtfhhhtfhugh/blockassist-bc-shaggy_shiny_gazelle_1757447580
|
jtfhhhtfhugh
| 2025-09-09T19:53:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"shaggy shiny gazelle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T19:53:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- shaggy shiny gazelle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fopoper/blockassist-bc-rabid_bold_hare_1757447555
|
fopoper
| 2025-09-09T19:52:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rabid bold hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T19:52:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rabid bold hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
enrikhoxha421/blockassist-bc-burrowing_invisible_raven_1757447545
|
enrikhoxha421
| 2025-09-09T19:52:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"burrowing invisible raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T19:52:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- burrowing invisible raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|