modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, UTC], 2020-02-15 11:33:14 – 2025-09-08 06:28:05) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 546 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, UTC], 2022-03-02 23:29:04 – 2025-09-08 06:27:40) | card (string, 11 – 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1757263129
|
Vasya777
| 2025-09-07T16:39:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:39:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Reihaneh/wav2vec2_fi_mono_50_epochs_5
|
Reihaneh
| 2025-09-07T16:38:22Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-07T16:38:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
seams01/blockassist-bc-insectivorous_stubby_snake_1757261590
|
seams01
| 2025-09-07T16:37:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous stubby snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:37:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous stubby snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Stasonelison/blockassist-bc-howling_powerful_aardvark_1757262985
|
Stasonelison
| 2025-09-07T16:37:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"howling powerful aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:37:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- howling powerful aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
GroomerG/blockassist-bc-vicious_pawing_badger_1757261546
|
GroomerG
| 2025-09-07T16:34:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vicious pawing badger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:34:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vicious pawing badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/MistralSmall22bAlpacaContinued-i1-GGUF
|
mradermacher
| 2025-09-07T16:34:06Z | 0 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-07T14:47:48Z |
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/AlSamCur123/MistralSmall22bAlpacaContinued
|
bah63843/blockassist-bc-plump_fast_antelope_1757262461
|
bah63843
| 2025-09-07T16:28:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:28:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NahedDom/blockassist-bc-flapping_stocky_leopard_1757260308
|
NahedDom
| 2025-09-07T16:26:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flapping stocky leopard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:26:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping stocky leopard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
msaifee/Llama-2-7b-chat-finetune
|
msaifee
| 2025-09-07T16:25:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-07T16:08:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Sophi-e-Ra-in-Spide-rman-Video-Ofi-cia-l/Sophie.Rain.Spiderman.Video.Oficial
|
Sophi-e-Ra-in-Spide-rman-Video-Ofi-cia-l
| 2025-09-07T16:24:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-07T16:12:20Z |
|
mradermacher/PhishMe-Qwen3-Base-8B-SFT-GGUF
|
mradermacher
| 2025-09-07T16:19:58Z | 16 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"en",
"dataset:piyawudk/spam-ham-reasoning-dataset-small",
"base_model:piyawudk/PhishMe-Qwen3-Base-8B-SFT",
"base_model:quantized:piyawudk/PhishMe-Qwen3-Base-8B-SFT",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-07-16T08:01:09Z |
---
base_model: piyawudk/PhishMe-Qwen3-Base-8B-SFT
datasets:
- piyawudk/spam-ham-reasoning-dataset-small
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/piyawudk/PhishMe-Qwen3-Base-8B-SFT
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#PhishMe-Qwen3-Base-8B-SFT-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
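Concatenating multi-part GGUF files comes down to joining the parts byte-for-byte, in order. A minimal sketch of the mechanics, using fabricated dummy "parts" so it runs standalone (real part filenames follow the same `*.part1ofN` pattern):

```shell
# Demo: split GGUF parts must be concatenated in order before loading.
# Fabricate two dummy "parts" to show the mechanics.
printf 'HEAD' > demo.gguf.part1of2
printf 'TAIL' > demo.gguf.part2of2
cat demo.gguf.part1of2 demo.gguf.part2of2 > demo.gguf

# The same pattern applies to a real multi-part quant, e.g.:
# cat model.Q8_0.gguf.part1of2 model.Q8_0.gguf.part2of2 > model.Q8_0.gguf
```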
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PhishMe-Qwen3-Base-8B-SFT-GGUF/resolve/main/PhishMe-Qwen3-Base-8B-SFT.Q2_K.gguf) | Q2_K | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/PhishMe-Qwen3-Base-8B-SFT-GGUF/resolve/main/PhishMe-Qwen3-Base-8B-SFT.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/PhishMe-Qwen3-Base-8B-SFT-GGUF/resolve/main/PhishMe-Qwen3-Base-8B-SFT.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PhishMe-Qwen3-Base-8B-SFT-GGUF/resolve/main/PhishMe-Qwen3-Base-8B-SFT.Q3_K_L.gguf) | Q3_K_L | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/PhishMe-Qwen3-Base-8B-SFT-GGUF/resolve/main/PhishMe-Qwen3-Base-8B-SFT.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/PhishMe-Qwen3-Base-8B-SFT-GGUF/resolve/main/PhishMe-Qwen3-Base-8B-SFT.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PhishMe-Qwen3-Base-8B-SFT-GGUF/resolve/main/PhishMe-Qwen3-Base-8B-SFT.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PhishMe-Qwen3-Base-8B-SFT-GGUF/resolve/main/PhishMe-Qwen3-Base-8B-SFT.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/PhishMe-Qwen3-Base-8B-SFT-GGUF/resolve/main/PhishMe-Qwen3-Base-8B-SFT.Q5_K_M.gguf) | Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/PhishMe-Qwen3-Base-8B-SFT-GGUF/resolve/main/PhishMe-Qwen3-Base-8B-SFT.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/PhishMe-Qwen3-Base-8B-SFT-GGUF/resolve/main/PhishMe-Qwen3-Base-8B-SFT.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/PhishMe-Qwen3-Base-8B-SFT-GGUF/resolve/main/PhishMe-Qwen3-Base-8B-SFT.f16.gguf) | f16 | 16.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
DiFors/blockassist-bc-singing_sizable_snake_1757261959
|
DiFors
| 2025-09-07T16:19:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:19:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Arko007/Diabetic-Retinopathy
|
Arko007
| 2025-09-07T16:19:48Z | 0 | 1 | null |
[
"region:us"
] | null | 2025-09-06T14:50:00Z |
---
license: mit
language: en
tags:
- image-classification
- medical-imaging
- diabetic-retinopathy
- resnet
- fine-tuning
- progressive-resizing
- sih-2025
base_model: microsoft/resnet-50
---
# Progressively Resized ResNet50 for Diabetic Retinopathy Grading
This repository contains a collection of ResNet50 models fine-tuned for classifying diabetic retinopathy severity. These models are the result of an advanced, multi-stage progressive resizing experiment.
The strategy involves starting with a fine-tuned model and continuing to train it on progressively higher image resolutions. This allows the model to first learn general features on smaller images and then refine its understanding by learning fine-grained details from larger, higher-quality images.
## Model Versions
This repository contains several model checkpoints, each representing the best-performing model at a specific resolution stage. The final model from the highest resolution stage represents the culmination of this experiment.
- `best_model_384px.pth`: fine-tuned on 384x384 images.
- `best_model_512px.pth`: fine-tuned on 512x512 images.
- `best_model_768px.pth`: fine-tuned on 768x768 images.
- `best_model_1024px.pth`: the final model, fine-tuned on 1024x1024 images.
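The staged schedule behind these checkpoints can be sketched as a simple loop, where each stage resumes from the previous stage's best weights. `train_stage` below is a hypothetical stand-in for one full fine-tuning run at a given resolution:

```python
# Sketch of the progressive-resizing schedule (train_stage is a stub).
RESOLUTIONS = [384, 512, 768, 1024]

def train_stage(weights, resolution):
    """Fine-tune `weights` on images resized to `resolution` px (stub)."""
    return f"{weights}->ft@{resolution}"

def progressive_resize_training(initial_weights):
    checkpoints = {}
    weights = initial_weights
    for res in RESOLUTIONS:  # each stage starts from the previous stage's output
        weights = train_stage(weights, res)
        checkpoints[f"best_model_{res}px.pth"] = weights
    return checkpoints

ckpts = progressive_resize_training("resnet50-imagenet")
```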
## Performance (Final Model)
The final model's performance was evaluated on the official test set from the IDRiD dataset.
### Classification Report
|  | precision | recall | f1-score | support |
|---|---:|---:|---:|---:|
| Grade 0 | 0.76 | 0.65 | 0.70 | 34 |
| Grade 1 | 0.11 | 0.40 | 0.17 | 5 |
| Grade 2 | 0.59 | 0.59 | 0.59 | 32 |
| Grade 3 | 0.64 | 0.47 | 0.55 | 19 |
| Grade 4 | 0.40 | 0.31 | 0.35 | 13 |
| accuracy |  |  | 0.54 | 103 |
| macro avg | 0.50 | 0.48 | 0.47 | 103 |
| weighted avg | 0.61 | 0.54 | 0.57 | 103 |
### Confusion Matrix
| Actual \ Predicted | Grade 0 | Grade 1 | Grade 2 | Grade 3 | Grade 4 |
|---|---:|---:|---:|---:|---:|
| Grade 0 | 22 | 10 | 2 | 0 | 0 |
| Grade 1 | 2 | 2 | 1 | 0 | 0 |
| Grade 2 | 4 | 4 | 19 | 3 | 2 |
| Grade 3 | 0 | 2 | 4 | 9 | 4 |
| Grade 4 | 1 | 0 | 6 | 2 | 4 |
## How to Use a Specific Model
You can load any of the model versions using PyTorch. Make sure to use the correct filename.
```python
import torch
from torchvision import models
from huggingface_hub import hf_hub_download

# 1. Define the model architecture
model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 5)  # 5 severity grades

# 2. Load the fine-tuned weights for the desired resolution
weights_path = hf_hub_download(
    repo_id="Arko007/Diabetic-Retinopathy",
    filename="best_model_1024px.pth"  # change this to load other versions
)
model.load_state_dict(torch.load(weights_path, map_location="cpu"))
model.eval()

# 3. Preprocess your image using the correct size for the model you loaded
# ...
```
Developed by: Arko007 for SIH 2025.
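The preprocessing step elided above is, for a standard ResNet50 pipeline, presumably: resize to the checkpoint's resolution, scale pixels to [0, 1], then normalize per channel with the ImageNet statistics. A pure-Python sketch of the normalization math (the actual transform library call is up to you):

```python
# Per-channel ImageNet normalization: x' = (x - mean) / std.
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def normalize_pixel(rgb):
    """Normalize one RGB pixel whose channels are already scaled to [0, 1]."""
    return tuple(
        (value - m) / s
        for value, m, s in zip(rgb, IMAGENET_MEAN, IMAGENET_STD)
    )

# A pixel exactly at the channel means maps to zero in every channel.
centered = normalize_pixel((0.485, 0.456, 0.406))
```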
|
Dhrub2025/blockassist-bc-feathered_opaque_armadillo_1757261853
|
Dhrub2025
| 2025-09-07T16:18:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"feathered opaque armadillo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:18:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- feathered opaque armadillo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cwayneconnor/blockassist-bc-mute_loud_lynx_1757261730
|
cwayneconnor
| 2025-09-07T16:18:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute loud lynx",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:16:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute loud lynx
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ttempaa/rubert-tiny2-russian-emotion-detection-ONNX
|
ttempaa
| 2025-09-07T16:16:59Z | 0 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"bert",
"text-classification",
"base_model:Djacon/rubert-tiny2-russian-emotion-detection",
"base_model:quantized:Djacon/rubert-tiny2-russian-emotion-detection",
"region:us"
] |
text-classification
| 2025-09-07T16:16:57Z |
---
library_name: transformers.js
base_model:
- Djacon/rubert-tiny2-russian-emotion-detection
---
# rubert-tiny2-russian-emotion-detection (ONNX)
This is an ONNX version of [Djacon/rubert-tiny2-russian-emotion-detection](https://huggingface.co/Djacon/rubert-tiny2-russian-emotion-detection). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
|
DiFors/blockassist-bc-singing_sizable_snake_1757261695
|
DiFors
| 2025-09-07T16:15:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:15:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1757261658
|
bah63843
| 2025-09-07T16:14:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:14:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Stasonelison/blockassist-bc-howling_powerful_aardvark_1757261635
|
Stasonelison
| 2025-09-07T16:14:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"howling powerful aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:14:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- howling powerful aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Arushp1/llama3-medquad-qlora
|
Arushp1
| 2025-09-07T16:13:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"medical",
"qlora",
"llama-3",
"finetuned",
"question-answering",
"dataset:keivalya/MedQuad-MedicalQnADataset",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2025-09-07T15:43:01Z |
---
library_name: transformers
tags:
- medical
- qlora
- llama-3
- finetuned
- question-answering
datasets:
- keivalya/MedQuad-MedicalQnADataset
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---
# LLaMA-3 8B Instruct - MedQuad Medical QnA (QLoRA)
This model is a fine-tuned version of **LLaMA-3 8B Instruct** using **QLoRA (4-bit quantization + LoRA adapters)** on the **MedQuad Medical QnA Dataset**.
It is designed to answer **medical domain questions** across various categories like treatment, symptoms, causes, prevention, inheritance, etc.
---
## Model Details
### Model Description
- **Base model:** [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
- **Fine-tuning method:** QLoRA (4-bit quantization with LoRA adapters)
- **Task:** Medical Question Answering (Instruction-tuned style)
- **Languages:** English
- **Framework:** 🤗 Transformers, PEFT, TRL
- **Quantization:** 4-bit (nf4, bfloat16 compute)
- **License:** [Llama 3 license](https://ai.meta.com/llama/license/)
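The instruction-tuned setup above implies prompts in the Llama 3 chat format. A minimal string-building sketch, with the special tokens assumed from the published Llama 3 template (in practice `tokenizer.apply_chat_template` does this for you):

```python
# Hand-rolled Llama 3 chat prompt; normally built by tokenizer.apply_chat_template.
def build_prompt(question, system="You are a helpful medical QA assistant."):
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{question}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt("What are the symptoms of asthma?")
```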
### Developers
- **Developed by:** Arush Pettem
- **Dataset:** [keivalya/MedQuad-MedicalQnADataset](https://huggingface.co/datasets/keivalya/MedQuad-MedicalQnADataset)
---
## Model Sources
- **Repository:** [Your Hugging Face repo link]
- **Paper:** ["MedQuAD: Medical Question Answering Dataset"](https://academic.oup.com/database/article/doi/10.1093/database/bay068/5058107)
- **Demo:** (Optional if you make a Gradio Space)
---
## Uses
### Direct Use
- Answering medical questions in categories such as treatment, symptoms, causes, prevention, outlook, etc.
- Educational and research purposes in healthcare QA systems.
### Downstream Use
- Integration into healthcare chatbots.
- Fine-tuning on domain-specific sub-corpora (e.g., cardiology QnA).
- Evaluation for explainable AI in medical NLP.
### Out-of-Scope Use
โ ๏ธ This model is **not a substitute for professional medical advice**. It should **not be used for clinical decision-making or diagnosis**.
---
## Bias, Risks, and Limitations
- **Bias:** Model inherits potential biases from MedQuad and the LLaMA base model.
- **Risks:** Incorrect or incomplete medical answers may mislead users if used in real-world clinical contexts.
- **Limitations:** Trained on static QA pairs, so may not generalize to open-ended patient conversations.
### Recommendations
- Use in **controlled, educational, or research settings** only.
- Always validate outputs with trusted medical sources.
---
## How to Get Started with the Model
Model page: https://huggingface.co/Arushp1/llama3-medquad-qlora
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model = AutoModelForCausalLM.from_pretrained("Arushp1/llama3-medquad-qlora")
tokenizer = AutoTokenizer.from_pretrained("Arushp1/llama3-medquad-qlora")
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

query = "What are the symptoms of asthma?"
print(pipe(query, max_new_tokens=100))
```
|
DiFors/blockassist-bc-singing_sizable_snake_1757261565
|
DiFors
| 2025-09-07T16:13:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:13:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sekirr/blockassist-bc-masked_tenacious_whale_1757261515
|
sekirr
| 2025-09-07T16:12:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:12:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DiFors/blockassist-bc-singing_sizable_snake_1757261487
|
DiFors
| 2025-09-07T16:12:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:12:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Stasonelison/blockassist-bc-howling_powerful_aardvark_1757261405
|
Stasonelison
| 2025-09-07T16:11:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"howling powerful aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:10:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- howling powerful aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1757261391
|
bah63843
| 2025-09-07T16:10:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:10:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Adya662/bert-tiny-amd
|
Adya662
| 2025-09-07T16:10:05Z | 15 | 0 | null |
[
"pytorch",
"safetensors",
"bert",
"text-classification",
"answering-machine-detection",
"bert-tiny",
"binary-classification",
"call-center",
"voice-processing",
"license:mit",
"region:us"
] |
text-classification
| 2025-09-04T07:28:55Z |
---
license: mit
tags:
- text-classification
- answering-machine-detection
- bert-tiny
- binary-classification
- call-center
- voice-processing
pipeline_tag: text-classification
---
# BERT-Tiny AMD Classifier
A lightweight BERT-Tiny model fine-tuned for Answering Machine Detection (AMD) in call center environments.
## Model Description
This model is based on `prajjwal1/bert-tiny` and fine-tuned to classify phone call transcripts as either human or machine (answering machine/voicemail) responses. It's designed for real-time call center applications where quick and accurate detection of answering machines is crucial.
## Model Architecture
- **Base Model**: `prajjwal1/bert-tiny` (2 layers, 128 hidden size, 2 attention heads)
- **Total Parameters**: ~4.4M (lightweight and efficient)
- **Input**: User transcript text (max 128 tokens)
- **Output**: Single logit with sigmoid activation for binary classification
- **Loss Function**: BCEWithLogitsLoss with positive weight for class imbalance
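The pos-weighted loss above can be sketched numerically. Below is a minimal, framework-free illustration of BCE-with-logits with a positive weight; the weight value 2.25 is an assumed example (roughly the negative/positive class ratio), and the actual training uses PyTorch's built-in `BCEWithLogitsLoss`:

```python
import math

def bce_with_logits(logit, target, pos_weight=1.0):
    """Pos-weighted binary cross-entropy on a raw logit (stable for moderate logits)."""
    log_sig = -math.log1p(math.exp(-logit))   # log(sigmoid(x))
    log_one_minus = log_sig - logit           # log(1 - sigmoid(x))
    return -(pos_weight * target * log_sig + (1.0 - target) * log_one_minus)

# Up-weighting the positive (machine) class penalizes missed machines more:
print(bce_with_logits(0.0, 1.0, pos_weight=2.25))  # ~1.56, vs ~0.69 unweighted
```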
## Performance
- **Validation Accuracy**: 93.94%
- **Precision**: 92.75%
- **Recall**: 87.27%
- **F1-Score**: 89.93%
- **Training Device**: MPS (Apple Silicon GPU)
- **Best Epoch**: 15 (with early stopping)
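As a sanity check, the F1 score above is the harmonic mean of the reported precision and recall. A small sketch, using the values from this card:

```python
def f1_score(precision, recall):
    # harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.9275, 0.8727), 4))  # ~0.8993, matching the reported F1
```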
## Training Data
- **Total Samples**: 3,548 phone call transcripts
- **Training Set**: 2,838 samples
- **Validation Set**: 710 samples
- **Class Distribution**: 30.8% machine calls, 69.2% human calls
- **Source**: ElevateNow call center data
## Usage
### Basic Inference
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
# Load model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained("Adya662/bert-tiny-amd")
tokenizer = AutoTokenizer.from_pretrained("Adya662/bert-tiny-amd")
# Prepare input
text = "Hello, this is John speaking"
inputs = tokenizer(text, return_tensors="pt", max_length=128, truncation=True, padding=True)
# Make prediction
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits.squeeze(-1)
probability = torch.sigmoid(logits).item()
is_machine = probability >= 0.5
print(f"Prediction: {'Machine' if is_machine else 'Human'}")
print(f"Confidence: {probability:.4f}")
```
## Training Details
- **Optimizer**: AdamW with weight decay (0.01)
- **Learning Rate**: 3e-5 with linear scheduling
- **Batch Size**: 32
- **Epochs**: 15 (with early stopping)
- **Early Stopping**: Patience of 3 epochs
- **Class Imbalance**: Handled with positive weight
## Limitations
- Trained on English phone call transcripts
- May not generalize well to other languages or domains
- Performance may vary with different transcription quality
- Designed for short utterances (max 128 tokens)
## License
MIT License - see LICENSE file for details.
|
mradermacher/aquif-moe-400m-GGUF
|
mradermacher
| 2025-09-07T16:09:59Z | 74 | 1 |
transformers
|
[
"transformers",
"gguf",
"language",
"aquif",
"moe",
"granite",
"text-generation-inference",
"en",
"pt",
"es",
"fr",
"base_model:aquif-ai/aquif-moe-400M",
"base_model:quantized:aquif-ai/aquif-moe-400M",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-07-24T16:35:08Z |
---
base_model: aquif-ai/aquif-moe-400M
language:
- en
- pt
- es
- fr
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- language
- aquif
- moe
- granite
- text-generation-inference
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/aquif-ai/aquif-moe-400M
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#aquif-moe-400m-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/aquif-moe-400m-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/aquif-moe-400m-GGUF/resolve/main/aquif-moe-400m.Q2_K.gguf) | Q2_K | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/aquif-moe-400m-GGUF/resolve/main/aquif-moe-400m.Q3_K_S.gguf) | Q3_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/aquif-moe-400m-GGUF/resolve/main/aquif-moe-400m.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/aquif-moe-400m-GGUF/resolve/main/aquif-moe-400m.Q3_K_L.gguf) | Q3_K_L | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/aquif-moe-400m-GGUF/resolve/main/aquif-moe-400m.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/aquif-moe-400m-GGUF/resolve/main/aquif-moe-400m.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/aquif-moe-400m-GGUF/resolve/main/aquif-moe-400m.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/aquif-moe-400m-GGUF/resolve/main/aquif-moe-400m.Q5_K_S.gguf) | Q5_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/aquif-moe-400m-GGUF/resolve/main/aquif-moe-400m.Q5_K_M.gguf) | Q5_K_M | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/aquif-moe-400m-GGUF/resolve/main/aquif-moe-400m.Q6_K.gguf) | Q6_K | 1.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/aquif-moe-400m-GGUF/resolve/main/aquif-moe-400m.Q8_0.gguf) | Q8_0 | 1.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/aquif-moe-400m-GGUF/resolve/main/aquif-moe-400m.f16.gguf) | f16 | 2.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Dhrub2025/blockassist-bc-feathered_opaque_armadillo_1757261332
|
Dhrub2025
| 2025-09-07T16:09:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"feathered opaque armadillo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:09:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- feathered opaque armadillo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DiFors/blockassist-bc-singing_sizable_snake_1757261313
|
DiFors
| 2025-09-07T16:09:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:09:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
schnecklothheath/blockassist-bc-soaring_leaping_snake_1757261272
|
schnecklothheath
| 2025-09-07T16:08:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"soaring leaping snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:08:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- soaring leaping snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DiFors/blockassist-bc-singing_sizable_snake_1757261212
|
DiFors
| 2025-09-07T16:07:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:07:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Stasonelison/blockassist-bc-howling_powerful_aardvark_1757261185
|
Stasonelison
| 2025-09-07T16:07:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"howling powerful aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:07:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- howling powerful aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DiFors/blockassist-bc-singing_sizable_snake_1757261163
|
DiFors
| 2025-09-07T16:06:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:06:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DiFors/blockassist-bc-singing_sizable_snake_1757261145
|
DiFors
| 2025-09-07T16:06:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:06:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pm9150348/blockassist-bc-powerful_raging_ape_1757261140
|
pm9150348
| 2025-09-07T16:05:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"powerful raging ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:05:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- powerful raging ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cwayneconnor/blockassist-bc-mute_loud_lynx_1757260961
|
cwayneconnor
| 2025-09-07T16:05:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute loud lynx",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:04:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute loud lynx
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bartersalva/blockassist-bc-prickly_flapping_chinchilla_1757261087
|
bartersalva
| 2025-09-07T16:05:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"prickly flapping chinchilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:04:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- prickly flapping chinchilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cawrtouy/blockassist-bc-fanged_foraging_salmon_1757261076
|
cawrtouy
| 2025-09-07T16:04:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fanged foraging salmon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:04:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fanged foraging salmon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Releer/Traditional_Chinese_Medicine_Agent
|
Releer
| 2025-09-07T16:04:27Z | 0 | 0 | null |
[
"base_model:Qwen/Qwen-Image",
"base_model:finetune:Qwen/Qwen-Image",
"license:apache-2.0",
"region:us"
] | null | 2025-09-07T15:59:18Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen-Image
---
# Private TCM AI Agent (Traditional Chinese Medicine AI Agent)

> A multimodal agent that combines consultation (inquiry) and tongue-diagnosis features to provide professional TCM health advice. The agent strengthens its consultations with **RAG (Retrieval-Augmented Generation)**, manages model calls efficiently over the **MCP protocol**, and accelerates multimodal processing with NVIDIA GPUs.
---
## Project Overview
The private TCM agent is an AI-based professional TCM consultation system that simulates a real TCM clinic visit, performing inquiry and tongue diagnosis so users can receive private expert TCM consultations without leaving home.
1. **Text consultation**: the user enters symptom information, and the agent combines multi-turn dialogue history with a TCM knowledge base to provide precise consultation advice.
2. **Tongue diagnosis analysis**: the user uploads a tongue photo; the system analyzes tongue-body and coating features with a vision model and offers a professional TCM diagnostic reference.
3. **Multimodal fusion**: text and image information are fused into a more complete consultation context, improving the professionalism and accuracy of the agent's answers.
---
## System Architecture
```
[Frontend] React / TailwindCSS / shadcn UI
    |
    v
[Backend] FastAPI + MCP protocol + Python Async + SQLAlchemy
    |
    v
[Models] Qwen-Turbo / Qwen-VL-Max (consultation & tongue diagnosis)
    |
    v
[NVIDIA GPU] multimodal compute acceleration
    |
    v
[Database] SQLite (stores session history and image paths)
```
### Frontend
- Responsive chat interface built with **React**
- Styled with **TailwindCSS** + **shadcn UI**: muted colors, rectangular dialogue boxes, modern button design
- Supports user text input and image upload
### Backend
- **FastAPI** provides high-performance asynchronous endpoints
- The **MCP protocol** manages calls to the consultation and tongue-diagnosis models and supports multi-model collaboration
- **RAG (Retrieval-Augmented Generation)** combines a knowledge base with the consultation model to improve answer accuracy
- **SQLAlchemy** manages session and message storage
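The retrieval half of the RAG pipeline can be illustrated with a toy sketch. The knowledge-base entries and the bag-of-words similarity below are hypothetical stand-ins; the real system uses a TCM knowledge base and a proper embedding model:

```python
import math
from collections import Counter

def embed(text):
    # toy bag-of-words "embedding"; a real system would use a sentence encoder
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in text.lower())
    return Counter(cleaned.split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # rank knowledge-base entries by similarity to the query, keep top-k
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

kb = [
    "Ginseng tonifies qi and is used for fatigue.",
    "A pale tongue with teeth marks suggests qi deficiency.",
    "Acupuncture points along the stomach meridian.",
]
print(retrieve("patient reports fatigue and low qi", kb, k=1))  # the ginseng/fatigue entry ranks first
```

The retrieved entries are then prepended to the prompt sent to the consultation model.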
---
## Models
### Consultation model
- Name: `qwen-turbo` or `qwen-plus`
- Role: processes the user's text input and, drawing on dialogue history and the knowledge base, provides professional consultation answers
- Bailian: https://bailian.console.aliyun.com/?spm=a2c4g.11186623.0.0.2621657bz7lNzC&tab=model#/model-market/detail/qwen3?modelGroup=qwen3
### Tongue-diagnosis model
- Name: `qwen-vl-max`
- Role: processes the user's tongue photos, extracting tongue-body and coating features through visual analysis
- Uses **few-shot learning** with reference examples to improve multimodal diagnostic accuracy
- Bailian: https://bailian.console.aliyun.com/?spm=a2c4g.11186623.0.0.2621657bz7lNzC&tab=model#/model-market/detail/qwen-vl-max?modelGroup=qwen-vl-max
- Few-shot images and knowledge are sourced from the "Atlas of Tongue Diagnosis in Chinese Medicine" (ใไธญๅป่ฏๆณๅพ่ฐฑใ), authored by Gu Yikui (้กพไบฆๆฅท) and Fei Zhaofu (่ดนๅ
้ฆฅ), published by the Shanghai College of Traditional Chinese Medicine Press. Source link: http://www.zhongyijinnang.com/?p=17037
---
## NVIDIA Technology
Model inference makes full use of NVIDIA hardware and software, including:
- **GPU acceleration**: NVIDIA GPUs speed up multimodal model inference
- **CUDA / cuDNN / TensorRT**: optimize deep-learning model execution
- **NVIDIA AI SDK**: supports high-performance asynchronous inference
These technologies keep the consultation and tongue-diagnosis models running on the Bailian platform with low-latency responses.
---
## Highlights
1. **Multimodal agent**: text consultation combined with visual tongue diagnosis raises the level of intelligent TCM diagnosis
2. **RAG integration**: a TCM knowledge base enables retrieval-augmented generation, ensuring professional answers
3. **MCP protocol**: efficient multi-model orchestration with asynchronous interaction and streaming output
4. **Few-shot visual/text learning**: the tongue-diagnosis and consultation models learn from example images and texts, improving diagnostic accuracy
5. **High-performance deployment**: NVIDIA GPUs and the Bailian platform accelerate model inference
---
## Features
- Text-based consultation
- Tongue-photo analysis
- Multi-turn dialogue memory (history stored in the database)
- Fusion of consultation and tongue-diagnosis results
- Frontend file upload and rich-text display
---
## Tech Stack & Dependencies
- **Frontend**:
  - React 18
  - TailwindCSS 3.x
  - shadcn/ui component library
- **Backend**:
  - Python 3.10+
  - FastAPI
  - SQLAlchemy
  - MCP protocol for multi-model management
  - httpx / asyncio
- **Models**:
  - Qwen-Turbo / Qwen-Plus (text consultation)
  - Qwen-VL-Max (visual tongue diagnosis)
- **Other**:
  - NVIDIA GPU + CUDA/cuDNN/TensorRT
  - Bailian platform API key
---
## Installation & Startup
### 1. Clone the project
```bash
git clone <repo_url>
cd TCM_Agent
```
### 2. Create a virtual environment and install dependencies
```bash
python -m venv venv
source venv/bin/activate  # Linux / Mac
venv\Scripts\activate     # Windows
pip install -r requirements.txt
```
### 3. Configure environment variables
```bash
# configure via a .env file (or export in your shell):
export DASHSCOPE_API_KEY="your_api_key"
export CHAT_MODEL_LLM_ENDPOINT="https://dashscope.aliyuncs.com/compatible-mode/v1"
```
### 4. Start the backend
```bash
cd backend
uvicorn main:app --reload --port 8000
```
### 5. Start the frontend
```bash
cd frontend
python3 -m http.server 3000
# open http://localhost:3000 in a browser
```
## Author
* **Developer**: ๆๆฐ่Summer
* **GitHub**: https://github.com/releerr/Traditional_Chinese_Medicine_Agent.git
* **Contact**: releehi@163.com
---
## License
This project is released under the MIT License.
|
mccallpasty/blockassist-bc-quiet_insectivorous_barracuda_1757260996
|
mccallpasty
| 2025-09-07T16:03:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quiet insectivorous barracuda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:03:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quiet insectivorous barracuda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
2hpsatt/blockassist-bc-huge_deft_eagle_1757260816
|
2hpsatt
| 2025-09-07T16:01:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"huge deft eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:01:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge deft eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DFQ-Dojo/swin-s-w6a6
|
DFQ-Dojo
| 2025-09-07T16:00:59Z | 0 | 0 |
dfq-toolkit
|
[
"dfq-toolkit",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"arxiv:2507.16782",
"region:us"
] | null | 2025-09-07T15:54:31Z |
---
library_name: dfq-toolkit
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: https://github.com/DFQ-Dojo/dfq-toolkit
- Paper: https://arxiv.org/abs/2507.16782
- Docs: [More Information Needed]
|
syvertsenpeter/blockassist-bc-gentle_pale_cassowary_1757260831
|
syvertsenpeter
| 2025-09-07T16:00:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle pale cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:00:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle pale cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DiFors/blockassist-bc-singing_sizable_snake_1757260797
|
DiFors
| 2025-09-07T16:00:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:00:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
huitingnanette/blockassist-bc-territorial_yapping_bear_1757260787
|
huitingnanette
| 2025-09-07T16:00:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"territorial yapping bear",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T16:00:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- territorial yapping bear
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DiFors/blockassist-bc-singing_sizable_snake_1757260661
|
DiFors
| 2025-09-07T15:58:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:58:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vendi11/blockassist-bc-placid_placid_llama_1757260624
|
vendi11
| 2025-09-07T15:57:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:57:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
eleazerclyde/blockassist-bc-deft_dense_snake_1757260560
|
eleazerclyde
| 2025-09-07T15:56:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deft dense snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:56:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deft dense snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1757260532
|
bah63843
| 2025-09-07T15:56:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:56:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hyunjoonkang/stacking_cube_side_DAVLA_1
|
hyunjoonkang
| 2025-09-07T15:55:28Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:hyunjoonkang/merge_stacking_cube_side",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-07T15:55:17Z |
---
base_model: lerobot/smolvla_base
datasets: hyunjoonkang/merge_stacking_cube_side
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- robotics
- lerobot
- smolvla
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
Stasonelison/blockassist-bc-howling_powerful_aardvark_1757260473
|
Stasonelison
| 2025-09-07T15:55:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"howling powerful aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:55:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- howling powerful aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dwirecarmen/blockassist-bc-swift_pawing_ant_1757260449
|
dwirecarmen
| 2025-09-07T15:54:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"swift pawing ant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:54:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- swift pawing ant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
alibidaran/bert_MI_interview_student
|
alibidaran
| 2025-09-07T15:54:03Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-05T14:51:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
luthymario/blockassist-bc-trotting_thorny_chameleon_1757260335
|
luthymario
| 2025-09-07T15:52:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"trotting thorny chameleon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:52:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- trotting thorny chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ACECA/lowMvMax_182
|
ACECA
| 2025-09-07T15:52:03Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-25T03:56:44Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Stasonelison/blockassist-bc-howling_powerful_aardvark_1757260025
|
Stasonelison
| 2025-09-07T15:47:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"howling powerful aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:47:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- howling powerful aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DiFors/blockassist-bc-singing_sizable_snake_1757260023
|
DiFors
| 2025-09-07T15:47:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:47:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aarabil/gte-modernbert-base
|
aarabil
| 2025-09-07T15:46:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-09-07T15:46:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
easygoing0114/flan-t5-xxl-fused
|
easygoing0114
| 2025-09-07T15:46:39Z | 1,903 | 30 | null |
[
"gguf",
"t5",
"T5xxl",
"Google FLAN",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-12-05T10:12:11Z |
---
license: apache-2.0
tags:
- T5xxl
- Google FLAN
---
# FLAN-T5-XXL Fused Model
## Guide (External Site): [English](https://www.ai-image-journey.com/2025/03/flan-t5xxl-te-only.html) | [Japanese](https://note.com/ai_image_journey/n/ncc6b1c475d8f)
**Why Use FP32 Text Encoder? (External Site)**: [English](https://www.ai-image-journey.com/2025/08/hidream-flux1-krea.html) | [Japanese](https://note.com/ai_image_journey/n/n524caae87e96)
This repository hosts a fused version of the FLAN-T5-XXL model, created by combining the split files from [Google's FLAN-T5-XXL repository](https://huggingface.co/google/flan-t5-xxl). The files have been merged for convenience, making it easier to integrate into AI applications, including image generation workflows.
<div style="display: flex; justify-content: center; align-items: center; gap: 2em;">
<div>
<img src="./images/flan_t5_xxl_TE-only_FP32_sample1.png" alt="FLAN-T5-XXL sample image 1" width="400px" height="400px">
</div>
<div>
<img src="./images/flan_t5_xxl_TE-only_FP32_sample2.png" alt="FLAN-T5-XXL sample image 2" width="400px" height="400px">
</div>
</div>
Base Model: [**blue_pencil-flux1_v0.0.1**](https://huggingface.co/bluepen5805/blue_pencil-flux1)
## Key Features
- **Fused for Simplicity:** Combines split model files into a single, ready-to-use format.
- **Optimized Variants:** Available in FP32, FP16, FP8, and quantized GGUF formats to balance accuracy and resource usage.
- **Enhanced Prompt Accuracy:** Outperforms the standard T5-XXL v1.1 in generating precise outputs for image generation tasks.
## Model Variants
| Model | Size | SSIM Similarity | Recommended |
|-------|:------:|:---------------:|:-----------:|
| FP32 | 19 GB | 100.0% | 🥇 |
| FP16 | 9.6 GB | 98.0% | ✅ |
| FP8 | 4.8 GB | 95.3% | 🔺 |
| Q8_0 | 5.1 GB | 97.6% | ✅ |
| Q6_K | 4.0 GB | 97.3% | 🔺 |
| Q5_K_M| 3.4 GB | 94.8% | |
| Q4_K_M| 2.9 GB | 96.4% | |
### Comparison Graph
<div style="text-align: center; margin-left: auto; margin-right: auto; width: 600px; max-width: 80%;">
<img src="./images/Flan-T5xxl_TE-only_MAE_SSIM_Similarity.png" alt="FLAN-T5-XXL MAE and SSIM Similarity Graph">
</div>
For a detailed comparison, refer to [this blog post](https://www.ai-image-journey.com/2024/12/image-difference-t5xxl-clip-l.html).
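As an illustration of how the similarity figures above can be interpreted (this is not the exact script used to produce the graph), a mean-absolute-error similarity between two generated images, loaded as `uint8` arrays of the same shape, can be computed with plain NumPy:

```python
import numpy as np

def mae_similarity(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Return a 0-100% similarity score based on mean absolute error.

    Both images are expected as uint8 arrays of identical shape,
    e.g. loaded with PIL: np.asarray(Image.open(path)).
    """
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    mae = np.abs(a - b).mean()          # 0 (identical) .. 255 (inverted)
    return 100.0 * (1.0 - mae / 255.0)  # rescale to a percentage

# Identical images score exactly 100%.
ref = np.zeros((64, 64, 3), dtype=np.uint8)
print(mae_similarity(ref, ref))  # -> 100.0
```

SSIM, the other metric in the graph, is a perceptual structural measure rather than a pixel-wise error, so the two scores can rank model variants differently.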
## Usage Instructions
Place the downloaded model files in one of the following directories:
- `models/text_encoder`
- `models/clip`
- `Models/CLIP`
### ComfyUI
When using Flux.1 in ComfyUI, load the text encoder with the **DualCLIPLoader** node.
<div style="text-align: center; margin-left: auto; margin-right: auto; width: 400px; max-width: 80%;">
<img src="./images/screenshot of ComfyUI DualCLIPLoader node.png" alt="Screenshot of ComfyUI DualCLIPLoader node">
</div>
As of **April 13, 2025**, the default DualCLIPLoader node includes a device selection option, allowing you to choose where to load the model:
- `cuda` โ VRAM
- `cpu` โ System RAM
Since Flux.1's text encoder is large, setting the device to `cpu` and storing the model in system RAM often improves performance. Unless your system has 16 GB of RAM or less, keeping the full-precision model in system RAM is more effective than GGUF quantization, so the GGUF formats offer limited benefit in ComfyUI for most users.
([More about ComfyUI settings](https://www.ai-image-journey.com/2025/03/comfyui-setting.html).)
You can also use FP32 text encoders for optimal results by enabling the `--fp32-text-enc` argument at startup.
### Stable Diffusion WebUI Forge
In Stable Diffusion WebUI Forge, select the FLAN-T5-XXL model instead of the default T5xxl_v1_1 text encoder.
<div style="text-align: center; margin-left: auto; margin-right: auto; width: 800px; max-width: 80%;">
<img src="./images/Screenshot of Stable Diffusion WebUI Forge text encoder selection screen.png" alt="Stable Diffusion WebUI Forge Text Encoder Selection Screen">
</div>
To use the text encoder in FP32 format, launch Stable Diffusion WebUI Forge with the `--clip-in-fp32` argument.
## Comparison: FLAN-T5-XXL vs T5-XXL v1.1
<div style="display: flex; justify-content: center; align-items: center; gap: 2em;">
<div>
<img src="./images/flan_t5_xxl_image.png" alt="FLAN-T5-XXL Image" width="400px" height="400px">
</div>
<div>
<img src="./images/t5_xxl_v1_1_image.png" alt="T5-XXL v1.1 Image" width="400px" height="400px">
</div>
</div>
These example images were generated using **FLAN-T5-XXL** and [**T5-XXL v1.1**](https://huggingface.co/google/t5-v1_1-xxl) models in Flux.1. FLAN-T5-XXL delivers more accurate responses to prompts.
## Further Comparisons
- [FLAN-T5-XXL vs T5-XXL v1.1](https://www.ai-image-journey.com/2024/12/clip-t5xxl-text-encoder.html)
- [FLAN-T5-XXL FP32 vs FP16 and Quantization](https://www.ai-image-journey.com/2024/12/image-difference-t5xxl-clip-l.html)
---
## License
- This model is distributed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
- The uploader claims no ownership or rights over the model.
---
## Update History
### August 22, 2025
Added the **Why Use FP32 Text Encoder?** guide.
### July 24, 2025
Re-upload of the GGUF model, reduction in model size, and correction of metadata.
### July 6, 2025
Uploaded flan_t5_xxl_full_FP8 models.
### April 20, 2025
Updated Stable Diffusion WebUI Forge FP32 launch argument.
### April 15, 2025
Updated content to reflect ComfyUI updates.
### March 20, 2025
Updated FLAN-T5-XXL model list and table.
|
rettertop/blockassist-bc-scampering_howling_hyena_1757259917
|
rettertop
| 2025-09-07T15:46:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scampering howling hyena",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:45:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scampering howling hyena
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/aquif-3-mini-i1-GGUF
|
mradermacher
| 2025-09-07T15:45:58Z | 123 | 0 |
transformers
|
[
"transformers",
"gguf",
"language",
"aquif",
"text-generation-inference",
"math",
"coding",
"small",
"pt",
"en",
"ja",
"zh",
"th",
"es",
"hi",
"fr",
"de",
"it",
"base_model:aquif-ai/aquif-3-mini",
"base_model:quantized:aquif-ai/aquif-3-mini",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-06T04:26:59Z |
---
base_model: aquif-ai/aquif-3-mini
language:
- pt
- en
- ja
- zh
- th
- es
- hi
- fr
- de
- it
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- language
- aquif
- text-generation-inference
- math
- coding
- small
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/aquif-ai/aquif-3-mini
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#aquif-3-mini-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/aquif-3-mini-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/aquif-3-mini-i1-GGUF/resolve/main/aquif-3-mini.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-mini-i1-GGUF/resolve/main/aquif-3-mini.i1-IQ1_S.gguf) | i1-IQ1_S | 1.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-mini-i1-GGUF/resolve/main/aquif-3-mini.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-mini-i1-GGUF/resolve/main/aquif-3-mini.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-mini-i1-GGUF/resolve/main/aquif-3-mini.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-mini-i1-GGUF/resolve/main/aquif-3-mini.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-mini-i1-GGUF/resolve/main/aquif-3-mini.i1-IQ2_M.gguf) | i1-IQ2_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-mini-i1-GGUF/resolve/main/aquif-3-mini.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-mini-i1-GGUF/resolve/main/aquif-3-mini.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-mini-i1-GGUF/resolve/main/aquif-3-mini.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-mini-i1-GGUF/resolve/main/aquif-3-mini.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-mini-i1-GGUF/resolve/main/aquif-3-mini.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-mini-i1-GGUF/resolve/main/aquif-3-mini.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-mini-i1-GGUF/resolve/main/aquif-3-mini.i1-IQ3_M.gguf) | i1-IQ3_M | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-mini-i1-GGUF/resolve/main/aquif-3-mini.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-mini-i1-GGUF/resolve/main/aquif-3-mini.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-mini-i1-GGUF/resolve/main/aquif-3-mini.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-mini-i1-GGUF/resolve/main/aquif-3-mini.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-mini-i1-GGUF/resolve/main/aquif-3-mini.i1-Q4_0.gguf) | i1-Q4_0 | 2.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-mini-i1-GGUF/resolve/main/aquif-3-mini.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-mini-i1-GGUF/resolve/main/aquif-3-mini.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-mini-i1-GGUF/resolve/main/aquif-3-mini.i1-Q4_1.gguf) | i1-Q4_1 | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-mini-i1-GGUF/resolve/main/aquif-3-mini.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-mini-i1-GGUF/resolve/main/aquif-3-mini.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/aquif-3-mini-i1-GGUF/resolve/main/aquif-3-mini.i1-Q6_K.gguf) | i1-Q6_K | 2.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
DiFors/blockassist-bc-singing_sizable_snake_1757259861
|
DiFors
| 2025-09-07T15:44:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:44:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1757259766
|
bah63843
| 2025-09-07T15:43:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:43:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
randgaardcyndi/blockassist-bc-sneaky_pudgy_nightingale_1757259750
|
randgaardcyndi
| 2025-09-07T15:42:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sneaky pudgy nightingale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:42:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sneaky pudgy nightingale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vendi11/blockassist-bc-placid_placid_llama_1757259727
|
vendi11
| 2025-09-07T15:42:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:42:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dombekgordon/blockassist-bc-stinky_stubby_donkey_1757259658
|
dombekgordon
| 2025-09-07T15:42:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinky stubby donkey",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:41:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinky stubby donkey
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Miracle-man/blockassist-bc-singing_lithe_koala_1757257671
|
Miracle-man
| 2025-09-07T15:41:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing lithe koala",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:41:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing lithe koala
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
internetyouknow/ddd
|
internetyouknow
| 2025-09-07T15:41:26Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-09-07T15:37:42Z |
---
license: other
license_name: other
license_link: LICENSE
---
|
bah63843/blockassist-bc-plump_fast_antelope_1757259614
|
bah63843
| 2025-09-07T15:41:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:40:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jopergil/blockassist-bc-feline_agile_mink_1757259621
|
jopergil
| 2025-09-07T15:40:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"feline agile mink",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:40:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- feline agile mink
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
beaudrieflorencio/blockassist-bc-barky_invisible_butterfly_1757259612
|
beaudrieflorencio
| 2025-09-07T15:40:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"barky invisible butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:40:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- barky invisible butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DiFors/blockassist-bc-singing_sizable_snake_1757259535
|
DiFors
| 2025-09-07T15:39:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:39:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jopergil/blockassist-bc-climbing_masked_kangaroo_1757259508
|
jopergil
| 2025-09-07T15:38:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"climbing masked kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:38:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- climbing masked kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DiFors/blockassist-bc-singing_sizable_snake_1757259487
|
DiFors
| 2025-09-07T15:38:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:38:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1757259466
|
bah63843
| 2025-09-07T15:38:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:38:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DiFors/blockassist-bc-singing_sizable_snake_1757259465
|
DiFors
| 2025-09-07T15:38:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:38:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
KingEmpire/King105_De_090705
|
KingEmpire
| 2025-09-07T15:38:15Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-07T03:38:11Z |
# Container Template for SoundsRight Subnet Miners
This repository contains a containerized version of [SGMSE+](https://huggingface.co/sp-uhh/speech-enhancement-sgmse) and serves as a tutorial for miners to format their models on [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai/SoundsRightSubnet). The branches `DENOISING_16000HZ` and `DEREVERBERATION_16000HZ` contain SGMSE fitted with the appropriate checkpoints for denoising and dereverberation tasks at 16kHz, respectively.
This container has only been tested with **Ubuntu 24.04** and **CUDA 12.6**. It may run on other configurations, but it is not guaranteed.
To run the container, first configure NVIDIA Container Toolkit and generate a CDI specification. Follow the instructions to download the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) with Apt.
Next, follow the instructions for [generating a CDI specification](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html).
Verify that the CDI specification was done correctly with:
```
$ nvidia-ctk cdi list
```
You should see this in your output:
```
nvidia.com/gpu=all
nvidia.com/gpu=0
```
If you are running podman as root, run the following command to start the container:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --user root --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
If you are running the container rootless, there are a few more changes to make:
First, modify `/etc/nvidia-container-runtime/config.toml` and set the following parameters:
```
[nvidia-container-cli]
no-cgroups = true
[nvidia-container-runtime]
debug = "/tmp/nvidia-container-runtime.log"
```
You can also run the following command to achieve the same result:
```
$ sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place
```
Run the container with:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --volume /usr/local/cuda-12.6:/usr/local/cuda-12.6 --user 10002:10002 --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
Running the container will spin up an API with the following endpoints:
1. `/status/` : Communicates API status
2. `/prepare/` : Download model checkpoint and initialize model
3. `/upload-audio/` : Upload audio files, save to noisy audio directory
4. `/enhance/` : Initialize model, enhance audio files, save to enhanced audio directory
5. `/download-enhanced/` : Download enhanced audio files
By default the API will use host `0.0.0.0` and port `6500`.
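A client session against a running container could be sketched as follows. This is a hypothetical example: the endpoint paths come from the list above, but the HTTP methods are assumptions, and `/upload-audio/` would additionally require a multipart file upload that is omitted here. The snippet only builds the requests; `urllib.request.urlopen(req)` would send one once the container is up.

```python
import urllib.request

BASE_URL = "http://0.0.0.0:6500"  # default host and port from above

def build_request(path: str, method: str = "GET") -> urllib.request.Request:
    """Build (but do not send) a request for one of the miner API endpoints."""
    return urllib.request.Request(BASE_URL + path, method=method)

status = build_request("/status/")             # 1. check API status
prepare = build_request("/prepare/", "POST")   # 2. fetch checkpoint, init model
enhance = build_request("/enhance/", "POST")   # 4. run enhancement
print(status.full_url)  # -> http://0.0.0.0:6500/status/
```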
### References
1. **Welker, Simon; Richter, Julius; Gerkmann, Timo**
*Speech Enhancement with Score-Based Generative Models in the Complex STFT Domain*.
Proceedings of *Interspeech 2022*, 2022, pp. 2928–2932.
[DOI: 10.21437/Interspeech.2022-10653](https://doi.org/10.21437/Interspeech.2022-10653)
2. **Richter, Julius; Welker, Simon; Lemercier, Jean-Marie; Lay, Bunlong; Gerkmann, Timo**
*Speech Enhancement and Dereverberation with Diffusion-based Generative Models*.
*IEEE/ACM Transactions on Audio, Speech, and Language Processing*, Vol. 31, 2023, pp. 2351–2364.
[DOI: 10.1109/TASLP.2023.3285241](https://doi.org/10.1109/TASLP.2023.3285241)
3. **Richter, Julius; Wu, Yi-Chiao; Krenn, Steven; Welker, Simon; Lay, Bunlong; Watanabe, Shinji; Richard, Alexander; Gerkmann, Timo**
*EARS: An Anechoic Fullband Speech Dataset Benchmarked for Speech Enhancement and Dereverberation*.
Proceedings of *ISCA Interspeech*, 2024, pp. 4873–4877.
|
Stasonelison/blockassist-bc-howling_powerful_aardvark_1757259395
|
Stasonelison
| 2025-09-07T15:37:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"howling powerful aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:37:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- howling powerful aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
smittleirwin/blockassist-bc-prehistoric_lanky_emu_1757259382
|
smittleirwin
| 2025-09-07T15:36:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"prehistoric lanky emu",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:36:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- prehistoric lanky emu
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DFQ-Dojo/swin-s-w8a8
|
DFQ-Dojo
| 2025-09-07T15:36:29Z | 0 | 0 |
dfq-toolkit
|
[
"dfq-toolkit",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"arxiv:2507.16782",
"region:us"
] | null | 2025-09-07T15:24:08Z |
---
library_name: dfq-toolkit
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: https://github.com/DFQ-Dojo/dfq-toolkit
- Paper: https://arxiv.org/abs/2507.16782
- Docs: [More Information Needed]
|
zcopwerq/blockassist-bc-arctic_pouncing_beaver_1757259336
|
zcopwerq
| 2025-09-07T15:35:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"arctic pouncing beaver",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:35:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- arctic pouncing beaver
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
FinalWork/Flexibility_1.3B_FiveInstruction_Working
|
FinalWork
| 2025-09-07T15:34:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-07T15:34:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
GEODE/mt5-small-coords-norm
|
GEODE
| 2025-09-07T15:33:55Z | 0 | 0 | null |
[
"safetensors",
"mt5",
"text-generation",
"fr",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:cc-by-nc-4.0",
"region:us"
] |
text-generation
| 2025-09-07T15:24:44Z |
---
license: cc-by-nc-4.0
language:
- fr
base_model:
- google/mt5-small
pipeline_tag: text-generation
---
|
DiFors/blockassist-bc-singing_sizable_snake_1757259189
|
DiFors
| 2025-09-07T15:33:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:33:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DiFors/blockassist-bc-singing_sizable_snake_1757259158
|
DiFors
| 2025-09-07T15:33:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:33:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1757259163
|
bah63843
| 2025-09-07T15:33:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:33:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1757259086
|
Vasya777
| 2025-09-07T15:32:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:32:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
giovannidemuri/llama8b-er-v595-seed2-hx_lora
|
giovannidemuri
| 2025-09-07T15:31:39Z | 19 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-06T17:02:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zcopwerq/blockassist-bc-fanged_hunting_ram_1757259017
|
zcopwerq
| 2025-09-07T15:30:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fanged hunting ram",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:30:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fanged hunting ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DiFors/blockassist-bc-singing_sizable_snake_1757258938
|
DiFors
| 2025-09-07T15:29:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:29:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MAT1980/MAT_Chatbot
|
MAT1980
| 2025-09-07T15:28:31Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-07T15:28:31Z |
---
license: apache-2.0
---
|
Stasonelison/blockassist-bc-howling_powerful_aardvark_1757258797
|
Stasonelison
| 2025-09-07T15:27:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"howling powerful aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:27:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- howling powerful aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
FinalWork/Flexibility_1.3B_ThreeInstruction_Working
|
FinalWork
| 2025-09-07T15:27:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-07T15:26:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ghostai1/ccengine1
|
ghostai1
| 2025-09-07T15:26:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-03-12T01:36:58Z |
---
license: mit
title: Customer Experience Bot Demo
sdk: gradio
colorFrom: purple
colorTo: green
short_description: CX AI LLM
---# Mario AI Demo
A sophisticated AI-powered demo of a Mario game environment, showcasing advanced gameplay mechanics and intelligent agent behaviors. Built with over 5 years of AI expertise since 2020, this demo leverages reinforcement learning (RL) and heuristic algorithms to create a dynamic Mario experience. Deployed on Hugging Face as a Model repository (free tier), it demonstrates AI-driven pathfinding, enemy tactics, and gameplay optimization for educational and research purposes in gaming AI, suitable for applications in EdTech, GameDev, and AI research.
## Technical Architecture
### AI Pathfinding and Gameplay Pipeline
The core of this demo is a hybrid AI system combining reinforcement learning and rule-based heuristics to control Mario's actions:
- **Reinforcement Learning (RL) Agent**:
- Utilizes a Proximal Policy Optimization (PPO) algorithm, fine-tuned on a custom Mario environment.
- Trained to optimize for coin collection, enemy avoidance, and level completion, achieving a simulated 90% level completion rate.
- Model size: Lightweight (~50MB), compatible with free-tier CPU deployment.
- **Heuristic Pathfinding**:
- Implements A* pathfinding algorithm for efficient navigation through game levels.
- Incorporates dynamic obstacle avoidance (e.g., Goombas, Koopas) using real-time collision detection.
- **Enemy Tactics**:
- Enemies (e.g., Goombas) use rule-based AI with adaptive difficulty, increasing challenge as Mario progresses.
- Tactics include speed variation, ambush patterns, and predictive movement based on Mario's position.
- **Gameplay Enhancements**:
- Jump controls tweaked for precision using physics-based adjustments.
- Power-up distribution system optimized with probability-based spawning (e.g., 20% chance for Super Mushroom).
- Adaptive weather effects (e.g., rain, wind) impacting Mario's movement and enemy behavior.
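The A* navigation described above can be sketched in plain Python. This is a minimal illustration, not the demo's actual code: the grid layout, function names, and 4-connected movement are assumptions, with obstacle tiles standing in for enemies.

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 2D grid; grid[y][x] == 1 marks an obstacle (e.g. an enemy's tile)."""
    def h(p):  # Manhattan-distance heuristic, admissible for 4-connected movement
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    # Heap entries: (f = g + h, g = cost so far, position, path taken)
    open_heap = [(h(start), 0, start, [start])]
    seen = set()
    while open_heap:
        _, cost, pos, path = heapq.heappop(open_heap)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        x, y = pos
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
                heapq.heappush(
                    open_heap,
                    (cost + 1 + h((nx, ny)), cost + 1, (nx, ny), path + [(nx, ny)]),
                )
    return None  # no route around the obstacles

# Tiny level: row 1 is blocked except for one gap at x == 2
grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
path = a_star(grid, start=(0, 0), goal=(3, 2))
```

Dynamic obstacle avoidance then amounts to re-running the search whenever an enemy moves onto a planned tile.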
### Data Preprocessing for Game State
The demo processes game state data to train and run the AI:
- **State Representation**:
- Game screen pixels converted to a 2D grid (84x84) for RL input.
- Features extracted: Marioโs position, enemy positions, power-up locations, and level layout.
- **Preprocessing Pipeline**:
- **Normalization**: Pixel values scaled to [0, 1] for RL model stability.
- **Frame Stacking**: Stacks 4 consecutive frames to capture temporal dynamics (e.g., Mario's velocity).
- **Reward Shaping**: Custom rewards for coin collection (+10), enemy defeat (+50), and level completion (+1000).
- **Output**: Cleaned state data stored as `mario_states.csv` for training and inference.
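The normalization, frame-stacking, and reward-shaping steps above can be sketched without any dependencies. The class and constant names here are illustrative only; the real pipeline would operate on 84x84 arrays rather than nested lists.

```python
from collections import deque

FRAME_STACK = 4
REWARDS = {"coin": 10, "enemy_defeat": 50, "level_complete": 1000}

class FrameStacker:
    """Keeps the most recent N normalized frames as the RL state."""

    def __init__(self, n=FRAME_STACK):
        self.frames = deque(maxlen=n)

    def push(self, frame):
        # Normalization: scale raw 0-255 pixel values into [0, 1]
        self.frames.append([[px / 255.0 for px in row] for row in frame])

    def state(self):
        # At episode start, pad with copies of the oldest frame until full
        frames = list(self.frames)
        while len(frames) < self.frames.maxlen:
            frames.insert(0, frames[0])
        return frames

def shaped_reward(events):
    """Sum the custom per-event rewards for one environment step."""
    return sum(REWARDS[e] for e in events)

stacker = FrameStacker()
stacker.push([[0, 255], [128, 64]])  # a 2x2 stand-in for an 84x84 frame
state = stacker.state()
```

Stacking four frames lets the policy infer velocity from pixel differences, which a single frame cannot convey.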
### Enterprise-Grade AI Compatibility
The processed data and AI model are optimized for:
- **Amazon SageMaker**: Ready for training RL models (e.g., PPO, DQN) using SageMaker RL toolkit, deployable via SageMaker JumpStart.
- **Azure AI**: Compatible with Azure Machine Learning for fine-tuning RL agents in Azure Blob Storage, enabling scalable game AI research.
- **FastAPI Integration**: Designed for API-driven inference (e.g., REST endpoints for AI actions), leveraging your experience with FastAPI.
## Performance Monitoring and Visualization
The demo includes a performance monitoring suite:
- **Latency Tracking**: Measures pathfinding, enemy decision-making, and gameplay update times using `time.perf_counter()`, reported in milliseconds.
- **Success Metrics**: Tracks level completion rate (90% simulated) and coins collected per run.
- **Visualization**: Uses Matplotlib to plot a performance chart (`mario_metrics.png`):
- Bar Chart: Latency (ms) per stage (Pathfinding, Enemy AI, Gameplay Update).
- Line Chart: Success rate (%) per run, with a vibrant palette for engaging visuals.
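The per-stage latency measurement described above can be sketched with `time.perf_counter()`. The helper and the stand-in workloads are hypothetical; the real pipeline would time the actual pathfinding, enemy-AI, and update calls.

```python
import time

def timed_stage(fn, *args, **kwargs):
    """Run one pipeline stage and report its latency in milliseconds."""
    t0 = time.perf_counter()
    result = fn(*args, **kwargs)
    latency_ms = (time.perf_counter() - t0) * 1000.0
    return result, latency_ms

# Stand-in workloads for the three monitored stages
stages = {
    "Pathfinding": lambda: sum(range(10_000)),
    "Enemy AI": lambda: max(range(10_000)),
    "Gameplay Update": lambda: min(range(10_000)),
}
latencies = {name: timed_stage(fn)[1] for name, fn in stages.items()}
```

The resulting dict maps directly onto the bar chart in `mario_metrics.png`, one bar per stage.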
## Gradio Interface for Interactive Demo
The demo is accessible via Gradio, providing an interactive Mario AI experience:
- **Input**: Select a level (e.g., "Level 1-1") and AI mode (e.g., "Exploration", "Speedrun").
- **Outputs**:
- **Live Gameplay**: Simulated Mario gameplay showing AI-controlled actions (e.g., jumps, enemy avoidance).
- **Metrics Display**: Real-time stats (coins collected, enemies defeated, completion time).
- **Performance Plot**: Visual metrics for latency and success rate.
- **Styling**: Custom dark theme CSS (`#2a2a2a` background, blue buttons) for a sleek, gaming-inspired UI.
## Setup
- Clone this repository to a Hugging Face Model repository (free tier, public).
- Add `requirements.txt` with dependencies (`gradio==4.44.0`, `matplotlib==3.9.2`, etc.).
- Upload `app.py` (includes embedded game environment for seamless deployment).
- Configure to run with Python 3.9+, CPU hardware (no GPU).
## Usage
- **Select Level**: Choose a Mario level in the Gradio UI (e.g., "Level 1-1").
- **Select AI Mode**: Pick an AI behavior mode (e.g., "Exploration" for coin collection, "Speedrun" for fastest completion).
- **Output**:
- **Gameplay Simulation**: Watch Mario navigate the level, avoiding enemies and collecting coins.
- **Metrics**: "Coins: 15, Enemies Defeated: 3, Completion Time: 45s".
- **Performance Plot**: Visual metrics for latency and success rate.
**Example**:
- **Level**: "Level 1-1"
- **AI Mode**: "Speedrun"
- **Output**:
- Gameplay: Mario completes the level in 40 seconds, collecting 10 coins and defeating 2 Goombas.
- Metrics: "Coins: 10, Enemies Defeated: 2, Completion Time: 40s".
- Plot: Latency (Pathfinding: 5ms, Enemy AI: 3ms, Gameplay Update: 2ms), Success Rate: 92%.
## Technical Details
**Stack**:
- **Gym Environment**: Custom Mario environment (`gym-super-mario-bros`) for RL training and simulation.
- **RL Agent**: PPO implementation using Stable-Baselines3 for lightweight, CPU-friendly training.
- **Pathfinding**: A* algorithm with dynamic obstacle avoidance.
- **Gradio**: Interactive UI for real-time gameplay demos.
- **Matplotlib**: Performance visualization with bar and line charts.
- **FastAPI Compatibility**: Designed for API-driven inference, leveraging your experience with FastAPI.
**Free Tier Optimization**: Lightweight with CPU-only dependencies, no GPU required.
**Extensibility**: Ready for integration with game engines (e.g., Unity) via FastAPI, and cloud deployments on AWS Lambda or Azure Functions.
## Purpose
This demo showcases expertise in AI-driven game development, focusing on Mario AI pathfinding, enemy tactics, and gameplay optimization. Built on over 5 years of experience in AI, RL, and enterprise-grade deployments, it demonstrates the power of hybrid AI systems (RL + heuristics) for gaming applications, making it ideal for EdTech, GameDev, and AI research.
## Future Enhancements
- **LLM Integration**: Incorporate lightweight LLMs (e.g., distilgpt2) for dynamic NPC dialogue generation.
- **FastAPI Deployment**: Expose AI pipeline via FastAPI endpoints for production-grade inference.
- **Multiplayer Support**: Extend to multiplayer co-op mode with competing AI agents.
- **Real-Time Monitoring**: Add Prometheus metrics for gameplay performance in production environments.
**Website**: https://ghostainews.com/
**Discord**: https://discord.gg/BfA23aYz
## Latest Update
**Status Update**: Status Update: Optimized collision detection for smoother interactions - May 28, 2025 ๐
- Enhanced NPC dialogue with dynamic responses - September 07, 2025 ๐
- Optimized collision detection for smoother interactions โญ - September 05, 2025 ๐
- Upgraded power-up distribution system ๐ - September 04, 2025 ๐
- Introduced adaptive weather in game levels - September 02, 2025 ๐
- Tweaked jump controls for improved accuracy - August 31, 2025 ๐
- Added fresh enemy tactics for extra difficulty ๐ฐ - August 30, 2025 ๐
- Refined AI pathfinding for seamless gameplay ๐ช - August 28, 2025 ๐
- Added support for multiplayer co-op mode - August 26, 2025 ๐
- Improved level loading times by 30% - August 25, 2025 ๐
- Integrated new collectible items for bonus challenges โจ - August 23, 2025 ๐
- Enhanced NPC dialogue with dynamic responses ๐ฉ - August 21, 2025 ๐
- Optimized collision detection for smoother interactions ๐ฅ - August 20, 2025 ๐
- Upgraded power-up distribution system - August 18, 2025 ๐
- Introduced adaptive weather in game levels ๐ - August 16, 2025 ๐
- Tweaked jump controls for improved accuracy - August 15, 2025 ๐
- Added fresh enemy tactics for extra difficulty ๐ฅ - August 14, 2025 ๐
- Refined AI pathfinding for seamless gameplay - August 13, 2025 ๐
- Added support for multiplayer co-op mode - August 12, 2025 ๐
- Improved level loading times by 30% โก - August 11, 2025 ๐
- Integrated new collectible items for bonus challenges - August 10, 2025 ๐
- Enhanced NPC dialogue with dynamic responses ๐ - August 09, 2025 ๐
- Optimized collision detection for smoother interactions ๐ฉ - August 08, 2025 ๐
- Upgraded power-up distribution system ๐ช - August 07, 2025 ๐
- Introduced adaptive weather in game levels - August 06, 2025 ๐
- Tweaked jump controls for improved accuracy ๐ - August 05, 2025 ๐
- Added fresh enemy tactics for extra difficulty - August 04, 2025 ๐
- Refined AI pathfinding for seamless gameplay - August 03, 2025 ๐
- Added support for multiplayer co-op mode ๐ - August 02, 2025 ๐
- Improved level loading times by 30% โญ - August 01, 2025 ๐
- Integrated new collectible items for bonus challenges ๐ฐ - July 31, 2025 ๐
- Enhanced NPC dialogue with dynamic responses - July 30, 2025 ๐
- Optimized collision detection for smoother interactions - July 29, 2025 ๐
- Upgraded power-up distribution system - July 28, 2025 ๐
- Introduced adaptive weather in game levels โจ - July 27, 2025 ๐
|
bah63843/blockassist-bc-plump_fast_antelope_1757258726
|
bah63843
| 2025-09-07T15:26:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:26:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
keke0130/gemma-3-270m-chinese-title-generator
|
keke0130
| 2025-09-07T15:25:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"gemma",
"fine-tuned",
"chinese",
"conversational",
"zh",
"dataset:keke0130/chinese_title_generation_gpt_oss_20b",
"base_model:google/gemma-3-270m",
"base_model:finetune:google/gemma-3-270m",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-07T14:30:23Z |
---
license: apache-2.0
datasets:
- keke0130/chinese_title_generation_gpt_oss_20b
language:
- zh
base_model:
- google/gemma-3-270m
pipeline_tag: text-generation
library_name: transformers
tags:
- gemma
- fine-tuned
- chinese
---
# Gemma-3-270M Chinese Dialogue Title Generator

This is a language model fine-tuned from `google/gemma-3-270m`, built specifically to generate a concise title from a **Chinese** user dialogue.
(In testing, the model may occasionally mix Traditional and Simplified Chinese in its output.)
## Dataset

* This model was fine-tuned on the [keke0130/chinese_title_generation_gpt_oss_20b](https://huggingface.co/datasets/keke0130/chinese_title_generation_gpt_oss_20b) dataset.
* The dataset was built from the following sources:
  * **Dialogue content (prompt)**: taken from [Mxode/Chinese-Instruct](https://huggingface.co/datasets/Mxode/Chinese-Instruct)
  * **Titles (response)**: generated by [gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b)
## Quick Start

⚠️ **Important**: this model needs a specific prompt format to perform well. Make sure your input **strictly follows** the templates below.

### 1. Using GGUF (recommended for local deployment)

* **GGUF model repo**: [keke0130/gemma-3-270m-chinese-title-generator-gguf](https://huggingface.co/keke0130/gemma-3-270m-chinese-title-generator-gguf/)
* **Compatible tools**: `llama.cpp`, `Ollama`, `LM Studio`, and similar.
* **Jinja prompt template** — set the following template manually in your application (e.g. LM Studio):
```jinja
{% for message in messages %}{% if message['role'] == 'user' %}### 指令:
請根據以下使用者對話,生成一個簡潔、準確的標題。
### 使用者對話:
{{ message['content'] }}
### 生成的標題:
{% elif message['role'] == 'assistant' %}{{ message['content'] }}{% endif %}{% endfor %}
```
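For quick experiments without a Jinja engine, the same single-turn prompt can be assembled with plain string formatting. This is a minimal sketch that mirrors the template above for one user message (the `build_prompt` helper name is our own, not part of the card):

```python
def build_prompt(dialogue: str) -> str:
    """Build the model's expected prompt for a single user message,
    mirroring the Jinja chat template above."""
    return (
        "### 指令:\n"
        "請根據以下使用者對話,生成一個簡潔、準確的標題。\n"
        "### 使用者對話:\n"
        f"{dialogue}\n"
        "### 生成的標題:\n"
    )

# The rendered string can be passed directly to llama.cpp or a
# text-generation pipeline as a raw prompt.
print(build_prompt("你好,請問你們的營業時間是?"))
```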
### 2. Using `transformers` (for Python environments)
```python
from transformers import pipeline
import torch

model_id = "keke0130/gemma-3-270m-chinese-title-generator"

pipe = pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)

# Prepare the dialogue to summarize
dialogue = "你好,我想請問一下你們的退貨政策是什麼?我上週買的商品好像有點問題。另外,客服電話是多少?"

# Build the prompt manually (the model expects this exact format)
prompt_template = f"""### 指令:
請根據以下使用者對話,生成一個簡潔、準確的標題。
### 使用者對話:
{dialogue}
### 生成的標題:
"""

# Run inference
outputs = pipe(prompt_template, max_new_tokens=50, do_sample=False)

# The pipeline returns a list of dicts; strip the prompt prefix so only
# the newly generated title remains
generated_text = outputs[0]["generated_text"][len(prompt_template):].strip()

print(f"Dialogue: {dialogue}")
print(f"Generated title: {generated_text}")
```
|
bah63843/blockassist-bc-plump_fast_antelope_1757258583
|
bah63843
| 2025-09-07T15:23:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:23:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DiFors/blockassist-bc-singing_sizable_snake_1757258512
|
DiFors
| 2025-09-07T15:22:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:22:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vanlewcary/blockassist-bc-alert_silky_capybara_1757258526
|
vanlewcary
| 2025-09-07T15:22:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"alert silky capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:22:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- alert silky capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
abattiebonie/blockassist-bc-slithering_sly_vulture_1757258491
|
abattiebonie
| 2025-09-07T15:21:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"slithering sly vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-07T15:21:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- slithering sly vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|